2 DKIM Working Group J. Fenton 3 Internet-Draft Cisco Systems, Inc. 4 Expires: October 6, 2006 April 4, 2006 6 Analysis of Threats Motivating DomainKeys Identified Mail (DKIM) 7 draft-ietf-dkim-threats-02 9 Status of this Memo 11 By submitting this Internet-Draft, each author represents that any 12 applicable patent or other IPR claims of which he or she is aware 13 have been or will be disclosed, and any of which he or she becomes 14 aware will be disclosed, in accordance with Section 6 of BCP 79.
16 Internet-Drafts are working documents of the Internet Engineering 17 Task Force (IETF), its areas, and its working groups. Note that 18 other groups may also distribute working documents as Internet- 19 Drafts. 21 Internet-Drafts are draft documents valid for a maximum of six months 22 and may be updated, replaced, or obsoleted by other documents at any 23 time. It is inappropriate to use Internet-Drafts as reference 24 material or to cite them other than as "work in progress." 26 The list of current Internet-Drafts can be accessed at 27 http://www.ietf.org/ietf/1id-abstracts.txt. 29 The list of Internet-Draft Shadow Directories can be accessed at 30 http://www.ietf.org/shadow.html. 32 This Internet-Draft will expire on October 6, 2006. 34 Copyright Notice 36 Copyright (C) The Internet Society (2006). 38 Abstract 40 This document provides an analysis of some threats against Internet 41 mail that are intended to be addressed by signature-based mail 42 authentication, in particular DomainKeys Identified Mail. It 43 discusses the nature and location of the bad actors, what their 44 capabilities are, and what they intend to accomplish via their 45 attacks. 47 Table of Contents 49 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 50 1.1. Terminology and Model . . . . . . . . . . . . . . . . . . 4 51 1.2. Document Structure . . . . . . . . . . . . . . . . . . . . 6 52 2. The Bad Actors . . . . . . . . . . . . . . . . . . . . . . . . 6 53 2.1. Characteristics . . . . . . . . . . . . . . . . . . . . . 6 54 2.2. Capabilities . . . . . . . . . . . . . . . . . . . . . . . 7 55 2.3. Location . . . . . . . . . . . . . . . . . . . . . . . . . 8 56 2.3.1. Externally-located Bad Actors . . . . . . . . . . . . 8 57 2.3.2. Within Claimed Originator's Administrative Unit . . . 9 58 2.3.3. Within Recipient's Administrative Unit . . . . . . . . 9 59 3. Representative Bad Acts . . . . . . . . . . . . . . . . . . . 10 60 3.1. Use of Arbitrary Identities . . . . . . . 
. . . . . . . . 10 61 3.2. Use of Specific Identities . . . . . . . . . . . . . . . . 10 62 3.2.1. Exploitation of Social Relationships . . . . . . . . . 11 63 3.2.2. Identity-Related Fraud . . . . . . . . . . . . . . . . 11 64 3.2.3. Reputation Attacks . . . . . . . . . . . . . . . . . . 12 65 3.2.4. Reflection Attacks . . . . . . . . . . . . . . . . . . 12 66 4. Attacks on Message Signing . . . . . . . . . . . . . . . . . . 12 67 4.1. Attacks Against Message Signatures . . . . . . . . . . . . 13 68 4.1.1. Theft of Private Key for Domain . . . . . . . . . . . 13 69 4.1.2. Theft of Delegated Private Key . . . . . . . . . . . . 14 70 4.1.3. Private Key Recovery via Side Channel Attack . . . . . 14 71 4.1.4. Chosen Message Replay . . . . . . . . . . . . . . . . 15 72 4.1.5. Signed Message Replay . . . . . . . . . . . . . . . . 16 73 4.1.6. Denial-of-Service Attack Against Verifier . . . . . . 17 74 4.1.7. Denial-of-Service Attack Against Key Service . . . . . 17 75 4.1.8. Canonicalization Abuse . . . . . . . . . . . . . . . . 18 76 4.1.9. Body Length Limit Abuse . . . . . . . . . . . . . . . 18 77 4.1.10. Use of Revoked Key . . . . . . . . . . . . . . . . . . 18 78 4.1.11. Compromise of Key Server . . . . . . . . . . . . . . . 19 79 4.1.12. Falsification of Key Service Replies . . . . . . . . . 19 80 4.1.13. Publication of Malformed Key Records and/or 81 Signatures . . . . . . . . . . . . . . . . . . . . . . 20 82 4.1.14. Cryptographic Weaknesses in Signature Generation . . . 20 83 4.1.15. Display Name Abuse . . . . . . . . . . . . . . . . . . 21 84 4.1.16. Compromised System Within Originator's Network . . . . 21 85 4.1.17. Verification Probe Attack . . . . . . . . . . . . . . 22 86 4.1.18. Key Publication by Higher Level Domain . . . . . . . . 22 87 4.2. Attacks Against Message Signing Policy . . . . . . . . . . 23 88 4.2.1. Look-Alike Domain Names . . . . . . . . . . . . . . . 23 89 4.2.2. Internationalized Domain Name Abuse . . . . . . . . . 23 90 4.2.3. 
Denial-of-Service Attack Against Signing Policy . . . 24 91 4.2.4. Use of Multiple From Addresses . . . . . . . . . . . . 24 92 4.2.5. Abuse of Third-Party Signatures . . . . . . . . . . . 24 93 4.2.6. Falsification of Sender Signing Policy Replies . . . . 25 94 4.3. Other Attacks . . . . . . . . . . . . . . . . . . . . . . 25 95 4.3.1. Packet Amplification Attacks via DNS . . . . . . . . . 25 96 5. Derived Requirements . . . . . . . . . . . . . . . . . . . . . 26 97 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 26 98 7. Security Considerations . . . . . . . . . . . . . . . . . . . 26 99 8. Informative References . . . . . . . . . . . . . . . . . . . . 26 100 Appendix A. Acknowledgements . . . . . . . . . . . . . . . . . . 27 101 Appendix B. Edit History . . . . . . . . . . . . . . . . . . . . 28 102 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 30 103 Intellectual Property and Copyright Statements . . . . . . . . . . 31 105 1. Introduction 107 DomainKeys Identified Mail (DKIM) [I-D.ietf-dkim-base] defines a 108 mechanism by which email messages can be cryptographically signed, 109 permitting a signing domain to claim responsibility for the use of a 110 given email address. Message recipients can verify the signature by 111 querying the signer's domain directly to retrieve the appropriate 112 public key, and thereby confirm that the message was attested to by a 113 party in possession of the private key for the signing domain. 115 Once the attesting party or parties have been established, the 116 recipient may evaluate the message in the context of additional 117 information such as locally-maintained whitelists, shared reputation 118 services, and/or third-party accreditation. The description of these 119 mechanisms is outside the scope of this effort. 
By applying a 120 signature, a good player enables a verifier to associate a positive 121 reputation with the message, in hopes that it will receive 122 preferential treatment by the recipient. 124 This effort is not intended to address threats associated with 125 message confidentiality nor does it intend to provide a long-term 126 archival signature. 128 1.1. Terminology and Model 130 An administrative unit (AU) is the portion of the path of an email 131 message that is under common administration. The originator and 132 recipient typically develop trust relationships with the 133 administrative units that send and receive their email, respectively, 134 to perform the signing and verification of their messages. 136 The origin address is the address on an email message, typically the 137 RFC 2822 From: address, which is associated with the alleged author 138 of the message and is displayed by the recipient's MUA as the source 139 of the message. 141 The following diagram illustrates a typical usage flowchart for DKIM: 143 +---------------------------------+ 144 | SIGNATURE CREATION | 145 | (Originating or Relaying AU) | 146 | | 147 | Sign (Message, Domain, Key) | 148 | | 149 +---------------------------------+ 150 | - Message (Domain, Key) 151 | 152 [Internet] 153 | 154 V 155 +---------------------------------+ 156 +-----------+ | SIGNATURE VERIFICATION | 157 | | | (Relaying or Delivering AU) | 158 | KEY | | | 159 | QUERY +--->| Verify (Message, Domain, Key) | 160 | | | | 161 +-----------+ +----------------+----------------+ 162 | - Verified Domain 163 +-----------+ V - [Report] 164 | SENDER | +----------------+----------------+ 165 | SIGNING | | | 166 | PRACTICES +--->| SIGNER EVALUATION | 167 | QUERY | | | 168 | | +---------------------------------+ 169 +-----------+ 171 DKIM operates entirely on the content (body and selected header 172 fields) of the message, as defined in RFC 2822 [RFC2822]. 
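The Sign and Verify operations in the diagram above, restricted to message content as just described, can be sketched in a few lines. This is a minimal illustration only: an HMAC over selected header fields plus the body stands in for DKIM's public-key signature, and the key and header values are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared key; real DKIM uses a public-key pair, with the
# public key retrieved from the signing domain's DNS.
DOMAIN_KEY = b"example.com-private-key"

def sign(headers: dict, body: str,
         signed_fields: tuple = ("from", "subject", "date")) -> str:
    """Sign selected header fields plus the body -- the message content.

    Envelope elements (MAIL FROM, RCPT TO, HELO) are deliberately not
    inputs: the signature covers only the RFC 2822 message content.
    """
    content = "".join(f"{name}:{headers[name]}\r\n"
                      for name in signed_fields if name in headers) + body
    return hmac.new(DOMAIN_KEY, content.encode(), hashlib.sha256).hexdigest()

def verify(headers: dict, body: str, signature: str) -> bool:
    """Recompute the signature from the received content and compare."""
    return hmac.compare_digest(sign(headers, body), signature)

headers = {"from": "alice@example.com", "subject": "Hello",
           "date": "Tue, 4 Apr 2006 12:00:00 -0700"}
sig = sign(headers, "Hi Bob\r\n")

# Verification depends only on the delivered content, so it can occur
# at a relaying MTA, the MDA, or an MUA fetching mail over POP or IMAP.
assert verify(headers, "Hi Bob\r\n", sig)
assert not verify(headers, "Hi Mallory\r\n", sig)
```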
The 173 transmission of messages via SMTP, defined in RFC 2821 [RFC2821], and 174 such elements as the envelope-from and envelope-to addresses and the 175 HELO domain are not relevant to DKIM verification. This is an 176 intentional decision made to allow verification of messages via 177 protocols other than SMTP, such as POP [RFC1939] and IMAP [RFC3501] 178 which an MUA acting as a verifier might use. 180 The Sender Signing Practices Query referred to in the diagram above 181 is a means by which the verifier can query the alleged author's 182 domain to determine their practices for signing messages, which in 183 turn may influence their evaluation of the message. If, for example, 184 a message arrives without any valid signatures, and the alleged 185 author's domain advertises that they sign all messages, the verifier 186 might handle that message differently than if a signature was not 187 necessarily to be expected. 189 1.2. Document Structure 191 The remainder of this document describes the problems that DKIM might 192 be expected to address, and the extent to which it may be successful 193 in so doing. These are described in terms of the potential bad 194 actors, their capabilities and location in the network, and in terms 195 of the bad acts that they might wish to commit. 197 This is followed by a description of postulated attacks on DKIM 198 message signing and on the use of Sender Signing Practices to assist 199 in the treatment of unsigned messages. A list of derived 200 requirements is also presented which is intended to guide the DKIM 201 design and review process. 203 The sections dealing with attacks on DKIM each begin with a table 204 summarizing the postulated attacks in each category along with their 205 expected impact and likelihood. 
The following definitions were used 206 as rough criteria for scoring the attacks: 208 Impact: 210 High: Affects the verification of messages from an entire domain or 211 multiple domains 213 Medium: Affects the verification of messages from specific users, 214 MTAs, and/or bounded time periods 216 Low: Affects the verification of isolated individual messages only 218 Likelihood: 220 High: All email users should expect this attack on a frequent basis 222 Medium: Email users should expect this attack occasionally; 223 frequently for a few users 225 Low: Attack is expected to be rare and/or very infrequent 227 2. The Bad Actors 229 2.1. Characteristics 231 The problem space being addressed by DKIM is characterized by a wide 232 range of attackers in terms of motivation, sophistication, and 233 capabilities. 235 At the low end of the spectrum are bad actors who may simply send 236 email, perhaps using one of many commercially available tools, which 237 the recipient does not want to receive. These tools typically allow 238 one to falsify the origin address of messages, and may, in the 239 future, be capable of generating message signatures as well. 241 At the next tier are what would be considered "professional" senders 242 of unwanted email. These attackers would deploy specific 243 infrastructure, including Mail Transfer Agents (MTAs), registered 244 domains and networks of compromised computers ("zombies") to send 245 messages, and in some cases to harvest addresses to which to send. 246 These senders often operate as commercial enterprises and send 247 messages on behalf of third parties. 249 The most sophisticated and financially-motivated senders of messages 250 are those who stand to receive substantial financial benefit, such as 251 from an email-based fraud scheme. 
These attackers can be expected to 252 employ all of the above mechanisms and additionally may attack the 253 Internet infrastructure itself, e.g., DNS cache-poisoning attacks; IP 254 routing attacks via compromised network routing elements. 256 2.2. Capabilities 258 In general, the bad actors described above should be expected to have 259 access to the following: 261 1. An extensive corpus of messages from domains they might wish to 262 impersonate 264 2. Knowledge of the business aims and model for domains they might 265 wish to impersonate 267 3. Access to public keys and associated authorization records 268 associated with the domain 270 and the ability to do at least some of the following: 272 1. Submit messages to MTAs and MSAs at multiple locations in the 273 Internet 275 2. Construct arbitrary message header fields, including those 276 claiming to be mailing lists, resenders, and other mail agents 278 3. Sign messages on behalf of domains under their control 280 4. Generate substantial numbers of either unsigned or apparently- 281 signed messages which might be used to attempt a denial of 282 service attack 284 5. Resend messages which may have been previously signed by the 285 domain 287 6. Transmit messages using any envelope information desired 289 As noted above, certain classes of bad actors may have substantial 290 financial motivation for their activities, and therefore should be 291 expected to have more capabilities at their disposal. These include: 293 1. Manipulation of IP routing. This could be used to submit 294 messages from specific IP addresses or difficult-to-trace 295 addresses, or to cause diversion of messages to a specific 296 domain. 298 2. Limited influence over portions of DNS using mechanisms such as 299 cache poisoning. This might be used to influence message 300 routing, or to cause falsification of DNS-based key or policy 301 advertisements. 303 3. 
Access to significant computing resources, for example through 304 the conscription of worm-infected "zombie" computers. This could 305 allow the bad actor to perform various types of brute-force 306 attacks. 308 4. Ability to "wiretap" some existing traffic, perhaps from a 309 wireless network. 311 Either of the first two of these mechanisms could be used to allow 312 the bad actor to function as a man-in-the-middle between author and 313 recipient, if that attack is useful. 315 2.3. Location 317 Bad actors or their proxies can be located anywhere in the Internet. 318 Certain attacks are possible primarily within the administrative unit 319 of the claimed originator and/or recipient domain, where bad actors have capabilities 320 beyond those available elsewhere, as described in the sections below. Bad 321 actors can also collude by acting from multiple locations (a 322 "distributed bad actor"). 324 2.3.1. Externally-located Bad Actors 326 DKIM focuses primarily on bad actors located outside of the 327 administrative units of the claimed originator and the recipient. 328 These administrative units frequently correspond to the protected 329 portions of the network adjacent to the originator and recipient. It 330 is in this area that the trust relationships required for 331 authenticated message submission do not exist and do not scale 332 adequately to be practical. Conversely, within these administrative 333 units, there are other mechanisms such as authenticated message 334 submission that are easier to deploy and more likely to be used than 335 DKIM. 337 External bad actors are usually attempting to exploit the "any to 338 any" nature of email, which motivates most recipient MTAs to accept 339 messages from anywhere for delivery to their local domain. They may 340 generate messages without signatures, with incorrect signatures, or 341 with correct signatures from domains with little traceability.
They 342 may also pose as mailing lists, greeting cards, or other agents which 343 legitimately send or re-send messages on behalf of others. 345 2.3.2. Within Claimed Originator's Administrative Unit 347 Bad actors in the form of rogue or unauthorized users or malware- 348 infected computers can exist within the administrative unit 349 corresponding to a message's origin address. Since the submission of 350 messages in this area generally occurs prior to the application of a 351 message signature, DKIM is not directly effective against these bad 352 actors. Defense against these bad actors is dependent upon other 353 means, such as proper use of firewalls, and mail submission agents 354 that are configured to authenticate the author. 356 In the special case where the administrative unit is non-contiguous 357 (e.g., a company that communicates between branches over the external 358 Internet), DKIM signatures can be used to distinguish between 359 legitimate externally-originated messages and attempts to spoof 360 addresses in the local domain. 362 2.3.3. Within Recipient's Administrative Unit 364 Bad actors may also exist within the administrative unit of the 365 message recipient. These bad actors may attempt to exploit the trust 366 relationships which exist within the unit. Since messages will 367 typically only have undergone DKIM verification at the administrative 368 unit boundary, DKIM is not effective against messages submitted in 369 this area. 371 For example, the bad actor may attempt to spoof a header field 372 indicating the results of verification. This header field would 373 normally be added by the verifier, which would also detect spoofed 374 header fields on messages it was attempting to verify. This could be 375 used to falsely indicate that the message was authenticated 376 successfully. 378 As in the originator case, these bad actors can be dealt with by 379 controlling the submission of messages within the administrative 380 unit. 
Since DKIM permits verification to occur anywhere within 381 the recipient's administrative unit, these threats can also be minimized 382 by moving verification closer to the recipient, such as at the mail 383 delivery agent (MDA), or on the recipient's MUA itself. 385 3. Representative Bad Acts 387 One of the most fundamental bad acts being attempted is the delivery 388 of messages which are not intended to have been sent by the alleged 389 originating domain. As described above, these messages might merely 390 be unwanted by the recipient, or might be part of a confidence scheme 391 or a delivery vector for malware. 393 3.1. Use of Arbitrary Identities 395 This class of bad acts includes the sending of messages which aim to 396 obscure the identity of the actual author. In some cases the actual 397 sender might be the bad actor, or in other cases might be a third 398 party under the control of the bad actor (e.g., a compromised 399 computer). 401 Particularly when coupled with sender signing practices that indicate 402 the domain owner signs all messages, DKIM can be effective in 403 mitigating the abuse of addresses not controlled by bad 404 actors. DKIM is not effective against the use of addresses 405 controlled by bad actors. In other words, the presence of a valid 406 DKIM signature does not guarantee that the signer is not a bad actor. 407 It also does not guarantee the accountability of the signer, since 408 DKIM does not attempt to identify the signer individually, but rather 409 identifies the domain which they control. Accreditation and 410 reputation systems and locally-maintained whitelists and blacklists 411 can be used to enhance the accountability of DKIM-verified addresses 412 and/or the likelihood that signed messages are desirable. 414 3.2. Use of Specific Identities 416 A second major class of bad acts involves the assertion of specific 417 identities in email.
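Since a DKIM signature identifies a domain rather than an individual, evaluation of a signed message often starts by comparing the signing domain with the domain of the origin address. The sketch below assumes the d= tag syntax of the DKIM-Signature header field from [I-D.ietf-dkim-base]; the header values are illustrative only.

```python
import email.utils

def signer_matches_origin(dkim_sig_header: str, from_header: str) -> bool:
    """Compare the DKIM signing domain (d= tag) with the From: domain."""
    tags = dict(part.strip().split("=", 1)
                for part in dkim_sig_header.split(";") if "=" in part)
    signing_domain = tags.get("d", "").strip().lower()
    _, addr = email.utils.parseaddr(from_header)
    origin_domain = addr.rpartition("@")[2].lower()
    return bool(signing_domain) and signing_domain == origin_domain

# A technically valid signature from a look-alike domain ("one" for "l")
# is still not a signature from the origin address's domain:
assert not signer_matches_origin(
    "v=1; a=rsa-sha256; d=examp1e.com; s=mail; b=xyz",
    "Admin <admin@example.com>")
assert signer_matches_origin(
    "v=1; a=rsa-sha256; d=example.com; s=mail; b=xyz",
    "Admin <admin@example.com>")
```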
419 Note that some bad acts involving specific identities can sometimes 420 be accomplished, although perhaps less effectively, with similar- 421 looking identities that mislead some recipients. For example, if the 422 bad actor is able to control the domain "examp1e.com" (note the "one" 423 between the p and e), they might be able to convince some recipients 424 that a message from admin@examp1e.com is really admin@example.com. 425 Similar types of attacks using internationalized domain names have 426 been hypothesized where it could be very difficult to see character 427 differences in popular typefaces. Similarly, if example2.com was 428 controlled by a bad actor, the bad actor could sign messages from 429 bigbank.example2.com which might also mislead some recipients. To 430 the extent that these domains are controlled by bad actors, DKIM is 431 not effective against these attacks, although it could support the 432 ability of reputation and/or accreditation systems to aid the user in 433 identifying them. 435 DKIM is effective against the use of specific identities only when 436 there is an expectation that such messages will, in fact, be signed. 437 The primary means for establishing this is the use of Sender Signing 438 Practices (SSP) [I-D.allman-dkim-ssp]. 440 3.2.1. Exploitation of Social Relationships 442 One reason for asserting a specific origin address is to encourage a 443 recipient to read and act on particular email messages by appearing 444 to be an acquaintance or previous correspondent that the recipient 445 might trust. This tactic has been used by email-propagated malware 446 which mails itself to addresses in the infected host's address 447 book. In this case, however, the author's address may not be 448 falsified, so DKIM would not be effective in defending against this 449 act. 451 It is also possible for address books to be harvested and used by an 452 attacker to post messages from elsewhere.
DKIM could be effective in 453 mitigating these acts by limiting the scope of origin addresses for 454 which a valid signature can be obtained when sending the messages 455 from other locations. 457 3.2.2. Identity-Related Fraud 459 Bad acts related to email-based fraud often, but not always, involve 460 the transmission of messages using specific origin addresses of other 461 entities as part of the fraud scheme. The use of a specific address 462 of origin sometimes contributes to the success of the fraud by 463 helping convince the recipient that the message was actually sent by 464 the alleged author. 466 To the extent that the success of the fraud depends on or is enhanced 467 by the use of a specific origin address, the bad actor may have 468 significant financial motivation and resources to circumvent any 469 measures taken to protect specific addresses from unauthorized use. 471 When signatures are verified by or for the recipient, DKIM is 472 effective in defending against the fraudulent use of origin addresses 473 on signed messages. When the published sender signing practices of 474 the origin address indicate that all messages from that address 475 should be signed, DKIM further mitigates the attempted 476 fraudulent use of the origin address on unsigned messages. 478 3.2.3. Reputation Attacks 480 Another motivation for using a specific origin address in a message 481 is to harm the reputation of another, commonly referred to as a "joe- 482 job". For example, a commercial entity might wish to harm the 483 reputation of a competitor, perhaps by sending unsolicited bulk email 484 on behalf of that competitor. It is for this reason that reputation 485 systems must be based on an identity that is, in practice, fairly 486 reliable. 488 3.2.4.
Reflection Attacks 490 A commonly-used tactic by some bad actors is the indirect 491 transmission of messages by intentionally mis-addressing the message 492 and causing it to be "bounced", or sent to the return address 493 (RFC2821 envelope-from address) on the message. In this case, the 494 specific identity asserted in the email is that of the actual target 495 of the message, to whom the message is "returned". 497 DKIM does not, in general, attempt to validate the RFC2821.mailfrom 498 return address on messages, either directly (noting that the mailfrom 499 address is an element of the SMTP protocol, and not the message 500 content on which DKIM operates), or via the optional Return-Path 501 header field. Furthermore, as is noted in section 4.4 of RFC 2821 502 [RFC2821], it is common and useful practice for a message's return 503 path not to correspond to the origin address. For these reasons, 504 DKIM is not effective against reflection attacks. 506 4. Attacks on Message Signing 508 Bad actors can be expected to exploit all of the limitations of 509 message authentication systems. They are also likely to be motivated 510 to degrade the usefulness of message authentication systems in order 511 to hinder their deployment. Both the signature mechanism itself and 512 declarations made regarding use of message signatures (referred to 513 here as Sender Signing Policy, Sender Signing Practices or SSP, as 514 described in [I-D.ietf-dkim-base] ) can be expected to be the target 515 of attacks. 517 4.1. 
Attacks Against Message Signatures 519 Summary of postulated attacks against DKIM signatures: 521 +---------------------------------------------+--------+------------+ 522 | Attack Name | Impact | Likelihood | 523 +---------------------------------------------+--------+------------+ 524 | Theft of private key for domain | High | Low | 525 | Theft of delegated private key | Medium | Medium | 526 | Private key recovery via side channel | High | Low | 527 | attack | | | 528 | Chosen message replay | Low | M/H | 529 | Signed message replay | Low | High | 530 | Denial-of-service attack against verifier | High | Medium | 531 | Denial-of-service attack against key | High | Medium | 532 | service | | | 533 | Canonicalization abuse | Low | Medium | 534 | Body length limit abuse | Medium | Medium | 535 | Use of revoked key | Medium | Low | 536 | Compromise of key server | High | Low | 537 | Falsification of key service replies | Medium | Medium | 538 | Publication of malformed key records and/or | High | Low | 539 | signatures | | | 540 | Cryptographic weaknesses in signature | High | Low | 541 | generation | | | 542 | Display name abuse | Medium | High | 543 | Compromised system within originator's | High | Medium | 544 | network | | | 545 | Verification probe attack | Medium | Medium | 546 | Key publication by higher level domain | High | Low | 547 +---------------------------------------------+--------+------------+ 549 4.1.1. Theft of Private Key for Domain 551 Message signing technologies such as DKIM are vulnerable to theft of 552 the private keys used to sign messages. This includes "out-of-band" 553 means for this theft, such as burglary, bribery, extortion, and the 554 like, as well as electronic means for such theft, such as a 555 compromise of network and host security around the place where a 556 private key is stored. 
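As context for these key-theft attacks: a verifier fetches the signer's public key from a DNS TXT record named by a selector, and a key whose theft is detected can be revoked by publishing a record with an empty p= tag (a convention carried over from DomainKeys, assumed here) or by removing the record. A sketch of the query name and a revocation check, with illustrative record contents:

```python
def dkim_key_name(selector: str, domain: str) -> str:
    """DNS name at which a selector's public key TXT record is published."""
    return f"{selector}._domainkey.{domain}"

def is_revoked(txt_record: str) -> bool:
    """Treat a key record whose p= tag is present but empty as revoked."""
    tags = dict(part.strip().split("=", 1)
                for part in txt_record.split(";") if "=" in part)
    return tags.get("p", "") == ""

assert dkim_key_name("mail", "example.com") == "mail._domainkey.example.com"
assert is_revoked("v=DKIM1; k=rsa; p=")                  # key revoked
assert not is_revoked("v=DKIM1; k=rsa; p=MIGfMA0GCSq")   # key still valid
```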
558 Keys which are valid for all addresses in a domain typically reside 559 in MTAs which should be located in well-protected sites, such as data 560 centers. Various means should be employed for minimizing access to 561 private keys, such as ensuring that no commands exist for displaying their 562 value, although ultimately memory dumps and the like will probably 563 contain the keys. Due to the unattended nature of MTAs, some 564 countermeasures, such as the use of a pass phrase to "unlock" a key, 565 are not practical to use. Other mechanisms, such as the use of 566 dedicated hardware devices which contain the private key and perform 567 the cryptographic signature operation, would be very effective in 568 denying export of the private key to those without physical access to 569 the device. Such devices would almost certainly make the theft of 570 the key visible, so that appropriate action (revocation of the 571 corresponding public key) can be taken should that happen. 573 4.1.2. Theft of Delegated Private Key 575 There are several circumstances where a domain owner will want to 576 delegate the ability to sign messages for the domain to an individual 577 user or a third party associated with an outsourced activity such as 578 a corporate benefits administrator or a marketing campaign. Since 579 these keys may exist on less well-protected devices than the domain's 580 own MTAs, they will in many cases be more susceptible to compromise. 582 In order to mitigate this exposure, keys used to sign such messages 583 can be restricted by the domain owner to be valid for signing 584 messages only on behalf of specific addresses in the domain. This 585 maintains protection for the majority of addresses in the domain. 587 A related threat is the exploitation of weaknesses in the delegation 588 process itself. This threat can be mitigated through the use of 589 standard precautions against the theft of private keys and the 590 falsification of public keys in transit.
For example, the exposure 591 to theft can be minimized if the delegate generates the keypair to be 592 used, and sends the public key to the domain owner. The exposure to 593 falsification (substitution of a different public key) can be reduced 594 if this transmission is signed by the delegate and verified by the 595 domain owner. 597 4.1.3. Private Key Recovery via Side Channel Attack 599 All popular digital signature algorithms are subject to a variety of 600 side channel attacks. The most well-known of these are timing 601 channels [Kocher96], power analysis [Kocher99], and cache timing 602 analysis [Bernstein04]. Most of these attacks require either 603 physical access to the machine or the ability to run processes 604 directly on the target machine. Defending against these attacks is 605 out of scope for DKIM. 607 However, remote timing analysis (at least on local area networks) is 608 known to be feasible [Boneh03], particularly in server-type platforms 609 where the attacker can inject traffic which will immediately be subjected 610 to the cryptographic operation in question. With enough samples, 611 these techniques can be used to extract private keys even in the face 612 of modest amounts of noise in the timing measurements. 614 The three commonly proposed countermeasures against timing analysis 615 are: 617 1. Make the operation run in constant time. This turns out in 618 practice to be rather difficult. 620 2. Make the time independent of the input data. This can be 621 difficult, but see [Boneh03] for more details. 623 3. Use blinding. This is generally considered the best current 624 practice countermeasure; while not proven secure in general, it 625 is effective against known timing attacks. It adds about 626 2-10% to the cost of the operation and is implemented in many 627 common cryptographic libraries.
Unfortunately, Digital Signature 628 Algorithm (DSA) and Elliptic Curve DSA (ECDSA) do not have 629 standard blinding methods, though some defenses may exist. 631 Note that adding random delays to the operation is only a partial 632 countermeasure. Because the noise is generally uniformly 633 distributed, a large enough number of samples can be used to average 634 it out and extract an accurate timing signal. 636 4.1.4. Chosen Message Replay 638 Chosen message replay refers to the scenario where the attacker 639 creates a message and obtains a signature for it by sending it 640 through an MTA authorized by the originating domain, addressed to 641 him/herself or an accomplice. They then "replay" the signed message by sending it, 642 using different envelope addresses, to a (typically large) number of 643 other recipients. 645 Due to the requirement to get an attacker-generated message signed, 646 chosen message replay would most commonly be experienced by consumer 647 ISPs or others offering email accounts to clients, particularly where 648 there is little or no accountability to the account holder (the 649 attacker in this case). One approach to this problem is for the 650 domain to only sign email for clients that have passed a vetting 651 process to provide traceability to the message originator in the 652 event of abuse. At present, the negligible (often zero) cost of email 653 accounts makes it impractical for any vetting to occur. It remains to 654 be seen whether this will be the model with signed mail as well, or 655 whether a higher level of trust will be required to obtain an email 656 signature. 658 A variation on this attack involves the attacker sending a message 659 with the intent of obtaining a signed reply containing their original 660 message. The reply might come from an innocent user or might be an 661 automatic response such as a "user unknown" bounce message. In some 662 cases, this signed reply message might accomplish the attacker's 663 objectives if replayed.
This variation on chosen message replay can 664 be mitigated by limiting the extent to which the original content is 665 quoted in automatic replies, and by the use of complementary 666 mechanisms such as egress content filtering. 668 Revocation of the signature or the associated key is a potential 669 countermeasure. However, the rapid pace at which the message might 670 be replayed (especially with an army of "zombie" computers), compared 671 with the time required to detect the attack and implement the 672 revocation, is likely to be problematic. A related problem is the 673 likelihood that domains will use a small number of signing keys for a 674 large number of customers, which is beneficial from a caching 675 standpoint but is likely to result in a great deal of collateral 676 damage (in the form of signature verification failures) should a key 677 be revoked suddenly. 679 Signature revocation addresses the collateral damage problem at the 680 expense of significant scaling requirements. At the extreme, 681 verifiers could be required to check for revocation of each signature 682 verified, which would result in very significant transaction rates. 683 An alternative, "revocation identifiers", has been proposed which 684 would permit revocation on an intermediate level of granularity, 685 perhaps on a per-account basis. Messages containing these 686 identifiers would result in a query to a revocation database, which 687 might be represented in DNS. 689 Further study is needed to determine if the benefits from revocation 690 (given the potential speed of a replay attack) outweigh the 691 transactional cost of querying a revocation database. 693 4.1.5. Signed Message Replay 695 Signed message replay refers to the retransmission of already-signed 696 messages to additional recipients beyond those intended by the author 697 or the original poster of the message. 
The attacker arranges to 698 receive a message from the victim, and then retransmits it intact but 699 with different envelope addresses. This might be done, for example, 700 to make it look like a legitimate sender of messages is sending a 701 large amount of spam. When reputation services are deployed, this 702 could damage the author's reputation or that of the author's domain. 704 A larger number of domains are potential victims of signed message 705 replay than chosen message replay because the former does not require 706 that the attacker be able to send messages from the victim domain. 707 However, the capabilities of the attacker are lower. Unless coupled 708 with another attack such as body length limit abuse, it isn't 709 possible for the attacker to use this, for example, for advertising. 711 Many mailing lists, especially those which do not modify the content 712 of the message and signed header fields and hence do not invalidate 713 the signature, engage in a form of signed message replay. The use of 714 body length limits and other mechanisms to enhance the survivability 715 of messages effectively increases the ability to do so. The only 716 thing that distinguishes this case from undesirable forms of signed 717 message replay is the intent of the replayer, which cannot be 718 determined by the network. 720 4.1.6. Denial-of-Service Attack Against Verifier 722 While it takes some compute resources to sign and verify a signature, 723 it takes negligible compute resources to generate an invalid 724 signature. An attacker could therefore construct a "make work" 725 attack against a verifier, by sending a large number of incorrectly- 726 signed messages to a given verifier, perhaps with multiple signatures 727 each. The motivation might be to make it too expensive to verify 728 messages. 730 While this attack is feasible, it can be greatly mitigated by the 731 manner in which the verifier operates.
For example, it might decide 732 to accept only a certain number of signatures per message, limit the 733 maximum key size it will accept (to prevent outrageously large 734 signatures from causing unneeded work), and verify signatures in a 735 particular order. The verifier could also maintain state 736 representing the current signature verification failure rate and 737 adopt a defensive posture when attacks may be underway. 739 4.1.7. Denial-of-Service Attack Against Key Service 741 An attacker might also attempt to degrade the availability of an 742 originator's key service, in order to cause that originator's 743 messages to be unverifiable. One way to do this might be to quickly 744 send a large number of messages with signatures which reference a 745 particular key, thereby creating a heavy load on the key server. 746 Other types of DoS attacks on the key server or the network 747 infrastructure serving it are also possible. 749 The best defense against this attack is to provide redundant key 750 servers, preferably on geographically-separate parts of the Internet. 751 Caching also helps a great deal, by decreasing the load on 752 authoritative key servers when there are many simultaneous key 753 requests. The use of a key service protocol which minimizes the 754 transactional cost of key lookups is also beneficial. It is noted 755 that the Domain Name System has all these characteristics. 757 4.1.8. Canonicalization Abuse 759 Canonicalization algorithms represent a tradeoff between the survivability 760 of a message signature and the desire not to allow 761 the message to be altered inappropriately. In the past, 762 canonicalization algorithms have been proposed which would have 763 permitted attackers, in some cases, to alter the meaning of a 764 message.
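As a deliberately toy illustration of how an overly permissive canonicalization can permit meaning-altering changes, consider an algorithm that discards all whitespace before hashing, similar in spirit to early "nowsp"-style proposals (the function below is a sketch, not an algorithm specified by DKIM):

```python
def nowsp_canonicalize(body: str) -> str:
    # Toy canonicalization that discards all whitespace before hashing
    # (illustrative only; not an actual DKIM canonicalization algorithm).
    return "".join(body.split())

signed_body  = "You are now here."
altered_body = "You are nowhere."   # attacker re-flows the whitespace

# Both bodies share a single canonical form, so a signature computed over
# the canonical form of the first message also "verifies" for the second.
assert nowsp_canonicalize(signed_body) == nowsp_canonicalize(altered_body)
```

Here two messages with opposite meanings canonicalize identically, which is exactly the class of weakness the paragraph above describes.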
766 Message signatures which support multiple canonicalization algorithms 767 give the signer the ability to decide the relative importance of 768 signature survivability and immutability of the signed content. If 769 an unexpected vulnerability appears in a canonicalization algorithm 770 in general use, new algorithms can be deployed, although it will be a 771 slow process because the signer can never be sure which algorithm(s) 772 the verifier supports. For this reason, canonicalization algorithms, 773 like cryptographic algorithms, should undergo a wide and careful 774 review process. 776 4.1.9. Body Length Limit Abuse 778 A body length limit is an optional indication from the signer of how 779 much content has been signed. The verifier can either ignore the 780 limit, verify the specified portion of the message, or truncate the 781 message to the specified portion and verify it. The motivation for 782 this feature is the behavior of many mailing lists which add a 783 trailer, perhaps identifying the list, at the end of messages. 785 When body length limits are used, there is the potential for an 786 attacker to add content to the message. It has been shown that this 787 content, although at the end, can cover desirable content, especially 788 in the case of HTML messages. 790 If the body length isn't specified, or if the verifier decides to 791 ignore the limit, body length limits are moot. If the verifier or 792 recipient truncates the message at the signed content, there is no 793 opportunity for the attacker to add anything. 795 If the verifier observes body length limits when present, there is 796 the potential that an attacker can make undesired content visible to 797 the recipient. The size of the appended content makes little 798 difference, because it can simply be a URL reference pointing to the 799 actual content. Receiving MUAs can mitigate this threat by, at a 800 minimum, identifying the unsigned content in the message. 802 4.1.10.
Use of Revoked Key 804 The caching of key records, while beneficial, opens the possibility 805 that keys which have been revoked may be used for some period of time 806 after their revocation. The best examples of this occur when a 807 holder of a key delegated by the domain administrator must be 808 unexpectedly deauthorized from sending mail on behalf of one or more 809 addresses in the domain. 811 The caching of key records is normally short-lived, on the order of 812 hours to days. In many cases, this threat can be mitigated simply by 813 setting a short time-to-live for keys not under the domain 814 administrator's direct control (assuming, of course, that control of 815 the time-to-live value may be specified for each record, as it can 816 with DNS). In some cases, such as recovery following the theft of a 817 private key belonging to one of the domain's MTAs, the possibility of 818 theft and the effort required to revoke the key authorization must be 819 considered when choosing a TTL. The chosen TTL must be long enough 820 to mitigate denial-of-service attacks and provide reasonable 821 transaction efficiency, and no longer. 823 4.1.11. Compromise of Key Server 825 Rather than attempting to obtain a private key, an attacker might 826 instead focus efforts on the server used to publish public keys for a 827 domain. As in the key theft case, the motive might be to allow the 828 attacker to sign messages on behalf of the domain. This attack 829 provides the attacker with the additional capability to remove 830 legitimate keys from publication, thereby denying the domain the 831 ability for the signatures on its mail to verify correctly. 833 The host which is the primary key server, such as a DNS master server 834 for the domain, might be compromised. Another approach might be to 835 change the delegation of key servers at the next higher domain level. 837 This attack can be mitigated somewhat by independent monitoring to 838 audit the key service.
Such auditing of the key service should occur 839 by means of zone transfers rather than queries to the zone's primary 840 server, so that the addition of records to the zone can be detected. 842 4.1.12. Falsification of Key Service Replies 844 Replies from the key service may also be spoofed by a suitably 845 positioned attacker. For DNS, one way to do this is "cache 846 poisoning", in which the attacker provides unnecessary (and 847 incorrect) additional information in DNS replies, which is cached. 849 DNSSEC [RFC4033] is the preferred means of mitigating this threat, 850 but the current uptake rate for DNSSEC is slow enough that one would 851 not like to create a dependency on its deployment. In the case of 852 cache poisoning, the vulnerabilities created by this attack 853 are both localized and of limited duration, although records with 854 relatively long TTL may persist beyond the attack itself. 856 4.1.13. Publication of Malformed Key Records and/or Signatures 858 In this attack, the attacker publishes suitably crafted key records 859 or sends mail with intentionally malformed signatures, in an attempt 860 to confuse the verifier and perhaps disable verification altogether. 861 This attack is really a characteristic of an implementation 862 vulnerability, a buffer overflow or lack of bounds checking, for 863 example, rather than a vulnerability of the signature mechanism 864 itself. This threat is best mitigated by careful implementation and 865 creation of test suites that challenge the verification process. 867 4.1.14. Cryptographic Weaknesses in Signature Generation 869 The cryptographic algorithms used to generate mail signatures, 870 specifically the hash algorithm and the public-key encryption/ 871 decryption operations, may over time be subject to mathematical 872 techniques that degrade their security.
At this writing, the SHA-1 873 hash algorithm is the subject of extensive mathematical analysis 874 which has considerably lowered the time required to create two 875 messages with the same hash value. This trend can be expected to 876 continue. 878 One consequence of a weakness in the hash algorithm is a hash 879 collision attack. Hash collision attacks in message signing systems 880 involve the same person creating two different messages that have the 881 same hash value, where only one of the two messages would normally be 882 signed. The attack is based on the second message inheriting the 883 signature of the first. For DKIM, this means that a sender might 884 create a "good" message and a "bad" message, where some filter at the 885 signing party's site would sign the good message but not the bad 886 message. The attacker gets the good message signed, and then 887 incorporates that signature in the bad message. This scenario is not 888 common, but could happen, for example, at a site that does content 889 analysis on messages before signing them. 891 The message signature system must be designed to support multiple 892 signature and hash algorithms, and the signing domain must be able to 893 specify which algorithms it uses to sign messages. The choice of 894 algorithms must be published in key records, rather than in the 895 signature itself, to ensure that an attacker is not able to create 896 signatures using algorithms weaker than the domain wishes to permit. 898 Due to the fact that the signer and verifier of email do not, in 899 general, communicate directly, negotiation of the algorithms used for 900 signing cannot occur. In other words, a signer has no way of knowing 901 which algorithm(s) a verifier supports, nor (due to mail forwarding) 902 where the verifier is. 
For this reason, it is expected that once 903 message signing is widely deployed, algorithm change will occur 904 slowly, and legacy algorithms will need to be supported for a 905 considerable period. Algorithms used for message signatures 906 therefore need to be secure against expected cryptographic 907 developments several years into the future. 909 4.1.15. Display Name Abuse 911 Message signatures only relate to the address-specification portion 912 of an email address, while some MUAs only display (or some recipients 913 only pay attention to) the display name portion of the address. This 914 inconsistency leads to an attack where the attacker uses a From 915 header field such as: 917 From: "Dudley DoRight" <whiplash@example.org> 919 In this example, the attacker, whiplash@example.org, can sign the 920 message and still convince some recipients that the message is from 921 Dudley DoRight, who is presumably a trusted individual. Coupled with 922 the use of a throw-away domain or email address, it may be difficult 923 to bring the attacker to account for the use of another's display 924 name. 926 This is an attack which must be dealt with in the recipient's MUA. 927 One approach is to require that the signer's address specification 928 (and not just the display name) be visible to the recipient. 930 4.1.16. Compromised System Within Originator's Network 932 In many cases, MTAs may be configured to accept, and sign, messages 933 which originate within the topological boundaries of the originator's 934 network (i.e., within a firewall). The increasing use of compromised 935 systems to send email presents a problem for such policies, because 936 the attacker, using a compromised system as a proxy, can generate 937 signed mail at will. 939 Several approaches exist for mitigating this attack. The use of 940 authenticated submission, even within the network boundaries, can be 941 used to limit the addresses for which the attacker may obtain a 942 signature.
It may also help locate the compromised system that is 943 the source of the messages more quickly. Content analysis of 944 outbound mail to identify undesirable and malicious content, as well 945 as monitoring of the volume of messages being sent by users, may also 946 prevent arbitrary messages from being signed and sent. 948 4.1.17. Verification Probe Attack 950 As noted above, bad actors (attackers) can sign messages on behalf of 951 domains they control. Since they may also control the key service 952 (e.g., the authoritative DNS name servers for the _domainkey 953 subdomain), it is possible for them to observe public key lookups, 954 and their source, when messages are verified. 956 One such attack, which we will refer to as a "verification probe", is 957 to send a message with a DKIM signature to each of many addresses in 958 a mailing list. The messages need not contain valid signatures, and 959 each instance of the message would typically use a different 960 selector. The attacker could then monitor key service requests and 961 determine which selectors had been accessed, and correspondingly 962 which addressees used DKIM verification. This could be used to 963 target future mailings at recipients who do not use DKIM 964 verification, on the premise that these addressees are more likely to 965 act on the message contents. 967 4.1.18. Key Publication by Higher Level Domain 969 In order to support the ability of a domain to sign for subdomains 970 under its administrative control, DKIM permits the domain of a 971 signature (d= tag) to be any higher-level domain than the signature's 972 address (i= or equivalent). However, since there is no mechanism for 973 determining common administrative control of a subdomain, it is 974 possible for a parent to publish keys which are valid for any domain 975 below them in the DNS hierarchy. 
In other words, mail from the 976 domain example.anytown.ny.us could be signed using keys published by 977 anytown.ny.us, ny.us, or us, in addition to the domain itself. 979 Operation of a domain always requires a trust relationship with 980 higher level domains. Higher level domains already have ultimate 981 power over their subdomains: they could change the name server 982 delegation for the domain or disenfranchise it entirely. So it is 983 unlikely that a higher level domain would intentionally compromise a 984 subdomain in this manner. However, if higher level domains send mail 985 on their own behalf, they may wish to publish keys at their own 986 level. Higher level domains must employ special care in the 987 delegation of keys they publish to ensure that none of their 988 subdomains are compromised by misuse of such keys. 990 4.2. Attacks Against Message Signing Policy 992 Summary of postulated attacks against signing policy: 994 +---------------------------------------------+--------+------------+ 995 | Attack Name | Impact | Likelihood | 996 +---------------------------------------------+--------+------------+ 997 | Look-alike domain names | High | High | 998 | Internationalized domain name abuse | High | Medium | 999 | Denial-of-service attack against signing | Medium | Medium | 1000 | policy | | | 1001 | Use of multiple From addresses | Low | Medium | 1002 | Abuse of third-party signatures | Medium | High | 1003 | Falsification of Sender Signing Policy | Medium | Medium | 1004 | replies | | | 1005 +---------------------------------------------+--------+------------+ 1007 4.2.1. Look-Alike Domain Names 1009 Attackers may attempt to circumvent signing policy of a domain by 1010 using a domain name which is close to, but not the same as, the domain 1011 with a signing policy. For instance, "example.com" might be replaced 1012 by "examp1e.com".
If the message is not to be signed, DKIM does not 1013 require that the domain used actually exist (although other 1014 mechanisms may make this a requirement). Services exist to monitor 1015 domain registrations to identify potential domain name abuse, but 1016 naturally do not identify the use of unregistered domain names. 1018 A related attack is possible when the MUA does not render the domain 1019 name in an easily recognizable format. If, for example, a Chinese 1020 domain name is rendered in "punycode" as xn--cjsp26b3obxw7f.com, the 1021 unfamiliarity of that representation may make it easier for other 1022 domains to be mistaken for the expected domain. 1024 Users that are unfamiliar with Internet naming conventions may also 1025 mis-recognize certain names. For example, users may confuse 1026 online.example.com with online-example.com, the latter of which may 1027 have been registered by an attacker. 1029 4.2.2. Internationalized Domain Name Abuse 1031 Internationalized domain names present a special case of the look- 1032 alike domain name attack described above. Due to similarities in the 1033 appearance of many Unicode characters, domains (particularly those 1034 drawing characters from different groups) may be created which are 1035 visually indistinguishable from other, possibly high-value domains. 1036 This is discussed in detail in Unicode TR 36 [UTR36]. Surveillance 1037 of domain registration records may point out some of these, but there 1038 are many such similarities. As in the look-alike domain attack 1039 above, this technique may also be used to circumvent sender signing 1040 policy of other domains. 1042 4.2.3. Denial-of-Service Attack Against Signing Policy 1044 Just as the publication of public keys by a domain can be impacted by 1045 an attacker, so can the publication of Sender Signing Policy (SSP) by 1046 a domain.
In the case of SSP, the transmission of large amounts of 1047 unsigned mail purporting to come from the domain can result in a 1048 heavy transaction load requesting the SSP record. More general DoS 1049 attacks against the servers providing the SSP records are possible as 1050 well. This is of particular concern since the default signing policy 1051 is "we don't sign everything", which means that SSP, in effect, fails 1052 open. 1054 As with defense against DoS attacks for key servers, the best defense 1055 against this attack is to provide redundant servers, preferably on 1056 geographically-separate parts of the Internet. Caching again helps a 1057 great deal, and signing policy should rarely change, so TTL values 1058 can be relatively large. 1060 4.2.4. Use of Multiple From Addresses 1062 Although this usage is never seen by most recipients, RFC 2822 1063 [RFC2822] permits the From address to contain multiple address 1064 specifications. The lookup of Sender Signing Policy is based on the 1065 From address, so if addresses from multiple domains are in the From 1066 address, the question arises which signing policy to use. A rule 1067 (say, "use the first address") could be specified, but then an 1068 attacker could put a throwaway address prior to that of a high-value 1069 domain. It is also possible for SSP to look at all addresses, and 1070 choose the most restrictive rule. This is an area in need of further 1071 study. 1073 4.2.5. Abuse of Third-Party Signatures 1075 In a number of situations, including mailing lists, event 1076 invitations, and "send this article to a friend" services, the DKIM 1077 signature on a message may not come from the originating address 1078 domain. For this reason, "third-party" signatures, those attached by 1079 the mailing list, invitation service, or news service, frequently 1080 need to be regarded as having some validity. 
Since this effectively 1081 makes it possible for any domain to sign any message, a sending 1082 domain may publish sender signing practices stating that it does not 1083 use such services, and accordingly that verifiers should view such 1084 signatures with suspicion. 1086 However, the restrictions placed on a domain by publishing "no third- 1087 party" signing practices effectively disallow many existing uses of 1088 e-mail. For the majority of domains that are unable to adopt these 1089 practices, an attacker may with some degree of success sign messages 1090 purporting to come from the domain. For this reason, accreditation 1091 and reputation services, as well as locally-maintained whitelists and 1092 blacklists, will need to play a significant role in evaluating 1093 messages that have been signed by third parties. 1095 4.2.6. Falsification of Sender Signing Policy Replies 1097 In an analogous manner to the falsification of key service replies 1098 described above, replies to sender signing policy queries can also be 1099 falsified. One such attack would be to weaken the signing policy to 1100 make unsigned messages allegedly from a given domain appear less 1101 suspicious. Another attack on a victim domain that is not signing 1102 messages could attempt to make the domain's messages look more 1103 suspicious, in order to interfere with the victim's ability to send 1104 mail. 1106 As with the falsification of key service replies, DNSSEC is the 1107 preferred means of mitigating this attack. Even in the absence of 1108 DNSSEC, vulnerabilities due to cache poisoning are localized. 1110 4.3. Other Attacks 1112 This section describes attacks against other Internet infrastructure 1113 which are enabled by deployment of DKIM.
A summary of these 1114 postulated attacks is as follows: 1116 +--------------------------------------+--------+------------+ 1117 | Attack Name | Impact | Likelihood | 1118 +--------------------------------------+--------+------------+ 1119 | Packet amplification attacks via DNS | N/A | Medium | 1120 +--------------------------------------+--------+------------+ 1122 4.3.1. Packet Amplification Attacks via DNS 1124 Recently [US-CERT-DNS], there has been an increase in denial-of-service attacks 1125 involving the transmission of spoofed UDP DNS requests to openly- 1126 accessible domain name servers. To the extent that the response from 1127 the name server is larger than the request, the name server functions 1128 as an amplifier for such an attack. 1130 DKIM contributes indirectly to this attack by requiring the 1131 publication of fairly large DNS records for distributing public keys. 1132 The names of these records are also well known, since the record 1133 names can be determined by examining properly-signed messages. This 1134 attack does not have an impact on DKIM itself. DKIM, however, is not 1135 the only application which uses large DNS records, and a DNS-based 1136 solution to this problem will likely be required. 1138 5. Derived Requirements 1140 This section lists requirements for DKIM not explicitly stated in the 1141 above discussion. These requirements include: 1143 The store for key and SSP records must be capable of utilizing 1144 multiple geographically-dispersed servers. 1146 Key and SSP records must be cacheable, either by the verifier 1147 requesting them or by other infrastructure. 1149 The cache time-to-live for key records must be specifiable on a 1150 per-record basis. 1152 The signature algorithm(s) used by the signing domain must be 1153 specified independently of the message being verified, such as in 1154 the key record.
1156 The algorithm(s) used for message signatures need to be secure 1157 against expected cryptographic developments several years in the 1158 future. 1160 6. IANA Considerations 1162 This document defines no items requiring IANA assignment. 1164 7. Security Considerations 1166 This document describes the security threat environment in which 1167 DomainKeys Identified Mail (DKIM) is expected to provide some 1168 benefit, and presents a number of attacks relevant to its deployment. 1170 8. Informative References 1172 [Bernstein04] 1173 Bernstein, D., "Cache Timing Attacks on AES", April 2004. 1175 [Boneh03] Boneh, D. and D. Brumley, "Remote Timing Attacks are 1176 Practical", Proc. 12th USENIX Security Symposium, 2003. 1178 [I-D.allman-dkim-ssp] 1179 Allman, E., "DKIM Sender Signing Policy", 1180 draft-allman-dkim-ssp-01 (work in progress), October 2005. 1182 [I-D.ietf-dkim-base] 1183 Allman, E., "DomainKeys Identified Mail Signatures 1184 (DKIM)", draft-ietf-dkim-base-00 (work in progress), 1185 February 2006. 1187 [Kocher96] 1188 Kocher, P., "Timing Attacks on Implementations of Diffie- 1189 Hellman, RSA, and other Cryptosystems", Advances in 1190 Cryptology, pages 104-113, 1996. 1192 [Kocher99] 1193 Kocher, P., Jaffe, J., and B. Jun, "Differential Power 1194 Analysis: Leaking Secrets", Crypto '99, pages 388-397, 1195 1999. 1197 [RFC1939] Myers, J. and M. Rose, "Post Office Protocol - Version 3", 1198 STD 53, RFC 1939, May 1996. 1200 [RFC2821] Klensin, J., "Simple Mail Transfer Protocol", RFC 2821, 1201 April 2001. 1203 [RFC2822] Resnick, P., "Internet Message Format", RFC 2822, 1204 April 2001. 1206 [RFC3501] Crispin, M., "INTERNET MESSAGE ACCESS PROTOCOL - VERSION 1207 4rev1", RFC 3501, March 2003. 1209 [RFC4033] Arends, R., Austein, R., Larson, M., Massey, D., and S. 1210 Rose, "DNS Security Introduction and Requirements", 1211 RFC 4033, March 2005. 1213 [US-CERT-DNS] 1214 US-CERT, "The Continuing Denial of Service Threat Posed by 1215 DNS Recursion".
   [UTR36]        Davis, M. and M. Suignard, "Unicode Technical
                  Report #36: Unicode Security Considerations",
                  UTR 36, July 2005.

Appendix A.  Acknowledgements

   The author wishes to thank Phillip Hallam-Baker, Eliot Lear, Tony
   Finch, Dave Crocker, Barry Leiba, Arvel Hathcock, Eric Allman, Jon
   Callas, Stephen Farrell, Doug Otis, Frank Ellermann, Eric Rescorla,
   Paul Hoffman, and numerous others on the ietf-dkim mailing list for
   valuable suggestions and constructive criticism of earlier versions
   of this draft.

Appendix B.  Edit History

   Changes since draft-fenton-dkim-threats-00 draft:

   o  Changed beginning of introduction to make it consistent with the
      -base draft.

   o  Clarified reasons for focus on externally located bad actors.

   o  Elaborated on reasons for effectiveness of address book attacks.

   o  Described attack time windows with respect to replay attacks.

   o  Added discussion of attacks using look-alike domains.

   o  Added section on key management attacks.

   Changes since draft-fenton-dkim-threats-01 draft:

   o  Reorganized description of bad actors.

   o  Greatly expanded description of attacks against DKIM and SSP.

   o  Added "derived requirements" section.

   Changes since draft-fenton-dkim-threats-02 draft:

   o  Added description of reflection attack, verification probe
      attack, and abuse of third-party signatures.

   o  Expanded description of key delegation attacks and look-alike
      domain names.

   o  Numerous changes suggested by ietf-dkim mailing list
      participants.

   Changes since draft-ietf-dkim-threats-00 draft:

   o  Added description of key publication by higher-level domain
      attack.

   o  Added description of falsification of SSP replies.

   o  Added section on other threats and description of packet
      amplification attacks via DNS.

   Changes since draft-ietf-dkim-threats-01 draft:

   o  Reworded document structure introduction.
   o  Less normative wording for mitigation of theft of delegated
      private key and body length limit abuse.

   o  Added description of reply variant of chosen message replay.

   o  Terminology changes to avoid ambiguous use of the word "sender",
      as suggested by D. Crocker.

   o  Additional description of hash collision attacks provided by
      P. Hoffman.

   o  Replaced section on side-channel attacks with text provided by
      E. Rescorla.

   o  Numerous minor edits and clarifications.

Author's Address

   Jim Fenton
   Cisco Systems, Inc.
   MS SJ-24/2
   170 W. Tasman Drive
   San Jose, CA  95134-1706
   USA

   Phone: +1 408 526 5914
   Email: fenton@cisco.com
   URI:

Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.
Disclaimer of Validity

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

Copyright Statement

   Copyright (C) The Internet Society (2006).  This document is
   subject to the rights, licenses and restrictions contained in
   BCP 78, and except as set forth therein, the authors retain all
   their rights.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.