DKIM Working Group                                              J. Fenton
Internet-Draft                                        Cisco Systems, Inc.
4 Expires: July 24, 2006 January 20, 2006 6 Analysis of Threats Motivating DomainKeys Identified Mail (DKIM) 7 draft-ietf-dkim-threats-00 9 Status of this Memo 11 By submitting this Internet-Draft, each author represents that any 12 applicable patent or other IPR claims of which he or she is aware 13 have been or will be disclosed, and any of which he or she becomes 14 aware will be disclosed, in accordance with Section 6 of BCP 79. 16 Internet-Drafts are working documents of the Internet Engineering 17 Task Force (IETF), its areas, and its working groups. Note that 18 other groups may also distribute working documents as Internet- 19 Drafts. 21 Internet-Drafts are draft documents valid for a maximum of six months 22 and may be updated, replaced, or obsoleted by other documents at any 23 time. It is inappropriate to use Internet-Drafts as reference 24 material or to cite them other than as "work in progress." 26 The list of current Internet-Drafts can be accessed at 27 http://www.ietf.org/ietf/1id-abstracts.txt. 29 The list of Internet-Draft Shadow Directories can be accessed at 30 http://www.ietf.org/shadow.html. 32 This Internet-Draft will expire on July 24, 2006. 34 Copyright Notice 36 Copyright (C) The Internet Society (2006). 38 Abstract 40 This document provides an analysis of some threats against Internet 41 mail that are intended to be addressed by signature-based mail 42 authentication, in particular DomainKeys Identified Mail. It 43 discusses the nature and location of the bad actors, what their 44 capabilities are, and what they intend to accomplish via their 45 attacks. 47 Table of Contents 49 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 50 1.1. Terminology and Model . . . . . . . . . . . . . . . . . . 4 51 1.2. Document Structure . . . . . . . . . . . . . . . . . . . . 6 52 2. The Bad Actors . . . . . . . . . . . . . . . . . . . . . . . . 6 53 2.1. Characteristics . . . . . . . . . . . . . . . . . . . . . 6 54 2.2. Capabilities . . . . . . . . . . . . . . . . . . . . . . . 7 55 2.3. Location . . . . . . . . . . . . . . . . . . . . . . . . . 8 56 2.3.1. Externally-located Bad Actors . . . . . . . . . . . . 8 57 2.3.2. Within Claimed Originator's Administrative Unit . . . 9 58 2.3.3. Within Recipient's Administrative Unit . . . . . . . . 9 59 3. Representative Bad Acts . . . . . . . . . . . . . . . . . . . 10 60 3.1. Use of Arbitrary Identities . . . . . . . . . . . . . . . 10 61 3.2. Use of Specific Identities . . . . . . . . . . . . . . . . 10 62 3.2.1. Exploitation of Social Relationships . . . . . . . . . 11 63 3.2.2. Identity-Related Fraud . . . . . . . . . . . . . . . . 11 64 3.2.3. Reputation Attacks . . . . . . . . . . . . . . . . . . 12 65 3.2.4. Reflection Attacks . . . . . . . . . . . . . . . . . . 12 66 4. Attacks on Message Signing . . . . . . . . . . . . . . . . . . 12 67 4.1. Attacks Against Message Signatures . . . . . . . . . . . . 13 68 4.1.1. Theft of Private Key for Domain . . . . . . . . . . . 13 69 4.1.2. Theft of Delegated Private Key . . . . . . . . . . . . 14 70 4.1.3. Private Key Recovery via Side-Channel Attack . . . . . 14 71 4.1.4. Chosen Message Replay . . . . . . . . . . . . . . . . 15 72 4.1.5. Signed Message Replay . . . . . . . . . . . . . . . . 16 73 4.1.6. Denial-of-Service Attack Against Verifier . . . . . . 16 74 4.1.7. Denial-of-Service Attack Against Key Service . . . . . 16 75 4.1.8. Canonicalization Abuse . . . . . . . . . . . . . . . . 17 76 4.1.9. Body Length Limit Abuse . . . . . . . . . . . . . . . 
17 77 4.1.10. Use of Revoked Key . . . . . . . . . . . . . . . . . . 18 78 4.1.11. Compromise of Key Server . . . . . . . . . . . . . . . 18 79 4.1.12. Falsification of Key Service Replies . . . . . . . . . 19 80 4.1.13. Publication of Malformed Key Records and/or 81 Signatures . . . . . . . . . . . . . . . . . . . . . . 19 82 4.1.14. Cryptographic Weaknesses in Signature Generation . . . 19 83 4.1.15. Display Name Abuse . . . . . . . . . . . . . . . . . . 20 84 4.1.16. Compromised System Within Originator's Network . . . . 20 85 4.1.17. Verification Probe Attack . . . . . . . . . . . . . . 21 86 4.2. Attacks Against Message Signing Policy . . . . . . . . . . 21 87 4.2.1. Look-Alike Domain Names . . . . . . . . . . . . . . . 21 88 4.2.2. Internationalized Domain Name Abuse . . . . . . . . . 22 89 4.2.3. Denial-of-Service Attack Against Signing Policy . . . 22 90 4.2.4. Use of Multiple From Addresses . . . . . . . . . . . . 22 91 4.2.5. Abuse of Third-Party Signatures . . . . . . . . . . . 23 92 5. Derived Requirements . . . . . . . . . . . . . . . . . . . . . 23 93 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 24 94 7. Security Considerations . . . . . . . . . . . . . . . . . . . 24 95 8. Informative References . . . . . . . . . . . . . . . . . . . . 24 96 Appendix A. Glossary . . . . . . . . . . . . . . . . . . . . . . 24 97 Appendix B. Acknowledgements . . . . . . . . . . . . . . . . . . 25 98 Appendix C. Edit History . . . . . . . . . . . . . . . . . . . . 25 99 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 27 100 Intellectual Property and Copyright Statements . . . . . . . . . . 28 102 1. Introduction 104 DomainKeys Identified Mail (DKIM) [I-D.allman-dkim-base] defines a 105 mechanism by which email messages can be cryptographically signed, 106 permitting a signing domain to claim responsibility for the use of a 107 given email address. Message recipients can verify the signature by 108 querying the signer's domain directly to retrieve the appropriate 109 public key, and thereby confirm that the message was attested to by a 110 party in possession of the private key for the signing domain. 112 Once the attesting party or parties have been established, the 113 recipient may evaluate the message in the context of additional 114 information such as locally-maintained whitelists, shared reputation 115 services, and/or third-party accreditation. The description of these 116 mechanisms is outside the scope of this effort. By applying a 117 signature, a good player enables a verifier to associate a positive 118 reputation with the message, in hopes that it will receive 119 preferential treatment by the recipient. 121 This effort is not intended to address threats associated with 122 message confidentiality nor does it intend to provide a long-term 123 archival signature. 125 1.1. Terminology and Model 127 Definitions of some terms used in this document may be found in 128 Appendix A. 
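As a concrete illustration of the key query step in the usage flow described in the Introduction (and diagrammed below), the following is a minimal, non-normative sketch of a verifier-side lookup.  The record location and type (a TXT record at <selector>._domainkey.<domain>, following the convention of contemporaneous DKIM drafts and the _domainkey subdomain mentioned in Section 4.1.17) are assumptions made for illustration, not requirements stated by this document; the selector and domain shown are hypothetical.

   # Sketch only: fetch a signer's public key record from DNS, assuming
   # the <selector>._domainkey.<domain> TXT convention used by
   # contemporaneous DKIM drafts.  Requires the "dnspython" package.
   import dns.resolver

   def fetch_dkim_key_record(selector, domain):
       name = "%s._domainkey.%s" % (selector, domain)
       for rdata in dns.resolver.resolve(name, "TXT"):
           # A TXT record may carry several character-strings; join them
           # into a single key record value.
           return b"".join(rdata.strings).decode("ascii")
       return None

   # Hypothetical usage:
   #   print(fetch_dkim_key_record("jan2006", "example.com"))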
130 The following diagram illustrates a typical usage flowchart for DKIM: 132 +---------------------------------+ 133 | SIGNATURE CREATION | 134 | (Originating or Relaying AU) | 135 | | 136 | Sign (Message, Domain, Key) | 137 | | 138 +---------------------------------+ 139 | - Message (Domain, Key) 140 | 141 [Internet] 142 | 143 V 144 +---------------------------------+ 145 +-----------+ | SIGNATURE VERIFICATION | 146 | | | (Relaying or Delivering AU) | 147 | KEY | | | 148 | QUERY +...>| Verify (Message, Domain, Key) | 149 | | | | 150 +-----------+ +----------------+----------------+ 151 | - Verified Domain 152 +-----------+ V - [Report] 153 | SENDER | +----------------+----------------+ 154 | SIGNING | | | 155 | PRACTICES +...>| SIGNER EVALUATION | 156 | QUERY | | | 157 | | +---------------------------------+ 158 +-----------+ 160 DKIM operates entirely on the content of the message, as defined in 161 RFC 2822 [RFC2822]. The transmission of messages via SMTP, defined 162 in RFC 2821 [RFC2821], and such elements as the envelope-from and 163 envelope-to addresses and the HELO domain are not relevant to DKIM 164 verification. This is an intentional decision made to allow 165 verification of messages via protocols other than SMTP, such as POP 166 [RFC1939] and IMAP [RFC3501] which an MUA acting as a verifier might 167 use. 169 The Sender Signing Practices Query referred to in the diagram above 170 is a means by which the verifier can query the alleged author's 171 domain to determine their practices for signing messages, which in 172 turn may influence their evaluation of the message. If, for example, 173 a message arrives without any valid signatures, and the alleged 174 author's domain advertises that they sign all messages, the verifier 175 might handle that message differently than if a signature was not 176 necessarily to be expected. 178 1.2. Document Structure 180 The remainder of this document begins by describing problems that 181 DKIM might be expected to address, and the extent that it is 182 successful in doing so. These are described in terms of who the bad 183 actors are, their capabilities and location in the network, and what 184 the bad acts are that they might wish to commit. 186 This is followed by a description of postulated attacks on DKIM 187 message signing and on the use of Sender Signing Practices to assist 188 in the treatment of unsigned messages. A list of derived 189 requirements is also presented which is intended to guide the DKIM 190 design and review process. 192 The sections dealing with attacks on DKIM each begin with a table 193 summarizing the postulated attacks in each category along with their 194 expected impact and likelihood. The following definitions were used 195 as rough criteria for scoring the attacks: 197 Impact: 199 High: Affects the verification of messages from an entire domain or 200 multiple domains 202 Medium: Affects the verification of messages from specific users, 203 MTAs, and/or bounded time periods 205 Low: Affects the verification of isolated individual messages only 207 Likelihood: 209 High: All email users should expect this attack on a frequent basis 211 Medium: Email users should expect this attack occasionally; 212 frequently for a few users 214 Low: Attack is expected to be rare and/or very infrequent 216 2. The Bad Actors 218 2.1. Characteristics 220 The problem space being addressed by DKIM is characterized by a wide 221 range of attackers in terms of motivation, sophistication, and 222 capabilities. 
At the low end of the spectrum are bad actors who may simply send email that the recipient does not want to receive, perhaps using one of many commercially available tools.  These tools typically allow one to falsify the origin address of messages, and may, in the future, be capable of generating message signatures as well.

At the next tier are what would be considered "professional" senders of unwanted email.  These attackers would deploy specific infrastructure, including Mail Transfer Agents (MTAs), registered domains, and networks of compromised computers ("zombies") to send messages, and in some cases to harvest addresses to which to send.  These senders often operate as commercial enterprises and send messages on behalf of third parties.

The most sophisticated and financially-motivated senders of messages are those who stand to receive substantial financial benefit, such as from an email-based fraud scheme.  These attackers can be expected to employ all of the above mechanisms and additionally may attack the Internet infrastructure itself, e.g., through DNS cache-poisoning attacks or IP routing attacks via compromised network routing elements.

2.2.  Capabilities

In general, the bad actors described above should be expected to have access to the following:

1.  An extensive corpus of messages from domains they might wish to impersonate

2.  Knowledge of the business aims and model for domains they might wish to impersonate

3.  Access to public keys and authorization records associated with the domain

and the ability to do at least some of the following:

1.  Submit messages to MTAs and MSAs at multiple locations in the Internet

2.  Construct arbitrary message header fields, including those claiming to be mailing lists, resenders, and other mail agents

3.  Sign messages on behalf of domains under their control

4.  Generate substantial numbers of either unsigned or apparently-signed messages which might be used to attempt a denial of service attack

5.  Resend messages which may have been previously signed by the domain

6.  Transmit messages using any envelope information desired

As noted above, certain classes of bad actors may have substantial financial motivation for their activities, and therefore should be expected to have more capabilities at their disposal.  These include:

1.  Manipulation of IP routing.  This could be used to submit messages from specific IP addresses or difficult-to-trace addresses, or to cause diversion of messages to a specific domain.

2.  Limited influence over portions of DNS using mechanisms such as cache poisoning.  This might be used to influence message routing, or to cause falsification of DNS-based key or policy advertisements.

3.  Access to significant computing resources, for example through the conscription of worm-infected "zombie" computers.  This could allow the bad actor to perform various types of brute-force attacks.

4.  Ability to "wiretap" some existing traffic, perhaps from a wireless network.

Either of the first two of these mechanisms could be used to allow the bad actor to function as a man-in-the-middle between sender and recipient, if that attack is useful.

2.3.  Location

Bad actors or their proxies can be located anywhere in the Internet.
Bad actors within the administrative unit of the claimed originator and/or recipient domain have capabilities beyond those located elsewhere, and certain attacks are possible primarily from within those administrative units, as described in the sections below.  Bad actors can also collude by acting from multiple locations (a "distributed bad actor").

2.3.1.  Externally-located Bad Actors

DKIM focuses primarily on bad actors located outside of the administrative units of the claimed originator and the recipient.  These administrative units frequently correspond to the protected portions of the network adjacent to the originator and recipient.  It is in this area that the trust relationships required for authenticated message submission do not exist and do not scale adequately to be practical.  Conversely, within these administrative units, there are other mechanisms such as authenticated message submission that are easier to deploy and more likely to be used than DKIM.

External bad actors are usually attempting to exploit the "any to any" nature of email, which motivates most recipient MTAs to accept messages from anywhere for delivery to their local domain.  They may generate messages without signatures, with incorrect signatures, or with correct signatures from domains with little traceability.  They may also pose as mailing lists, greeting cards, or other agents which legitimately send or re-send messages on behalf of others.

2.3.2.  Within Claimed Originator's Administrative Unit

Bad actors in the form of rogue or unauthorized users or malware-infected computers can exist within the administrative unit corresponding to a message's origin address.  Since the submission of messages in this area generally occurs prior to the application of a message signature, DKIM is not directly effective against these bad actors.  Defense against these bad actors is dependent upon other means, such as proper use of firewalls and mail submission agents that are configured to authenticate the sender.

In the special case where the administrative unit is non-contiguous (e.g., a company that communicates between branches over the external Internet), DKIM signatures can be used to distinguish between legitimate externally-originated messages and attempts to spoof addresses in the local domain.

2.3.3.  Within Recipient's Administrative Unit

Bad actors may also exist within the administrative unit of the message recipient.  These bad actors may attempt to exploit the trust relationships which exist within the unit.  Since messages will typically only have undergone DKIM verification at the administrative unit boundary, DKIM is not effective against messages submitted in this area.

For example, the bad actor may attempt to spoof a header field indicating the results of verification.  This header field would normally be added by the verifier, which would also detect spoofed header fields on messages it was attempting to verify.  This could be used to falsely indicate that the message was authenticated successfully.

As in the originator case, these bad actors can be dealt with by controlling the submission of messages within the administrative unit.
Since DKIM permits verification to occur anywhere within the recipient's administrative unit, these threats can also be minimized by moving verification closer to the recipient, such as at the mail delivery agent (MDA), or on the recipient's MUA itself.

3.  Representative Bad Acts

One of the most fundamental bad acts being attempted is the delivery of messages which the alleged originating domain never intended to send.  As described above, these messages might merely be unwanted by the recipient, or might be part of a confidence scheme or a delivery vector for malware.

3.1.  Use of Arbitrary Identities

This class of bad acts includes the sending of messages which aim to obscure the identity of the actual sender.  In some cases the actual sender might be the bad actor, or in other cases might be a third party under the control of the bad actor (e.g., a compromised computer).

Particularly when coupled with sender signing practices that indicate the domain owner signs all messages, DKIM can be effective in mitigating the abuse of addresses not controlled by bad actors.  DKIM is not effective against the use of addresses controlled by bad actors.  In other words, the presence of a valid DKIM signature does not guarantee that the signer is not a bad actor.  It also does not guarantee the accountability of the signer, since DKIM does not attempt to identify the signer individually, but rather identifies the domain which they control.  Accreditation and reputation systems and locally-maintained whitelists and blacklists can be used to enhance the accountability of DKIM-verified addresses and/or the likelihood that signed messages are desirable.

3.2.  Use of Specific Identities

A second major class of bad acts involves the assertion of specific identities in email.

Note that some bad acts involving specific identities can sometimes be accomplished, although perhaps less effectively, with similar-looking identities that mislead some recipients.  For example, if the bad actor is able to control the domain "examp1e.com" (note the "one" between the p and e), they might be able to convince some recipients that a message from admin@examp1e.com is really admin@example.com.  Similar types of attacks using internationalized domain names have been hypothesized where it could be very difficult to see character differences in popular typefaces.  Similarly, if example2.com was controlled by a bad actor, the bad actor could sign messages from bigbank.example2.com which might also mislead some recipients.  To the extent that these domains are controlled by bad actors, DKIM is not effective against these attacks, although it could support the ability of reputation and/or accreditation systems to aid the user in identifying them.

3.2.1.  Exploitation of Social Relationships

One reason for asserting a specific origin address is to encourage a recipient to read and act on particular email messages by appearing to be an acquaintance or previous correspondent that the recipient might trust.  This tactic has been used by email-propagated malware which mails itself to addresses in the infected host's address book.  In this case, however, the sender's address may not be falsified, so DKIM would not be effective in defending against this act.
It is also possible for address books to be harvested and used by an attacker to send messages from elsewhere.  DKIM could be effective in mitigating these acts by limiting the scope of origin addresses for which a valid signature can be obtained when sending the messages from other locations.

3.2.2.  Identity-Related Fraud

Bad acts related to email-based fraud often, but not always, involve the transmission of messages using specific origin addresses of other entities as part of the fraud scheme.  The use of a specific address of origin sometimes contributes to the success of the fraud by helping convince the recipient that the message was actually sent by the alleged sender.

To the extent that the success of the fraud depends on or is enhanced by the use of a specific origin address, the bad actor may have significant financial motivation and resources to circumvent any measures taken to protect specific addresses from unauthorized use.

When signatures are verified by or for the recipient, DKIM is effective in defending against the fraudulent use of origin addresses on signed messages.  When the published sender signing practices of the origin address indicate that all messages from that address should be signed, DKIM further mitigates the attempted fraudulent use of the origin address on unsigned messages.

3.2.3.  Reputation Attacks

Another motivation for using a specific origin address in a message is to harm the reputation of another, commonly referred to as a "joe-job".  For example, a commercial entity might wish to harm the reputation of a competitor, perhaps by sending unsolicited bulk email on behalf of that competitor.  It is for this reason that reputation systems must be based on an identity that is, in practice, fairly reliable.

3.2.4.  Reflection Attacks

A commonly-used tactic by some bad actors is the indirect transmission of messages by intentionally mis-addressing the message and causing it to be "bounced", or sent to the return address (RFC 2821 envelope-from address) on the message.  In this case, the specific identity asserted in the email is that of the actual target of the message, to whom the message is "returned".

DKIM does not, in general, attempt to validate the return address on messages, either directly (noting that the envelope-from address is an element of the SMTP protocol, and not the message content on which DKIM operates), or via the optional Return-Path header field.  Furthermore, as is noted in section 4.4 of RFC 2821 [RFC2821], it is common and useful practice for a message's return path not to correspond to the message sender.  For these reasons, DKIM is not effective against reflection attacks.

4.  Attacks on Message Signing

Bad actors can be expected to exploit all of the limitations of message authentication systems.  They are also likely to be motivated to degrade the usefulness of message authentication systems in order to hinder their deployment.  Both the signature mechanism itself and declarations made regarding use of message signatures (referred to here as Sender Signing Policy, Sender Signing Practices, or SSP, as described in [I-D.allman-dkim-ssp]) can be expected to be the target of attacks.

4.1.
Attacks Against Message Signatures 503 Summary of postulated attacks against DKIM signatures: 505 +---------------------------------------------+--------+------------+ 506 | Attack Name | Impact | Likelihood | 507 +---------------------------------------------+--------+------------+ 508 | Theft of private key for domain | High | Low | 509 | Theft of delegated private key | Medium | Medium | 510 | Private key recovery via side-channel | High | Low | 511 | attack | | | 512 | Chosen message replay | Low | M/H | 513 | Signed message replay | Low | High | 514 | Denial-of-service attack against verifier | High | Medium | 515 | Denial-of-service attack against key | High | Medium | 516 | service | | | 517 | Canonicalization abuse | Low | Medium | 518 | Body length limit abuse | Medium | Medium | 519 | Use of revoked key | Medium | Low | 520 | Compromise of key server | High | Low | 521 | Falsification of key service replies | Medium | Medium | 522 | Publication of malformed key records and/or | High | Low | 523 | signatures | | | 524 | Cryptographic weaknesses in signature | High | Low | 525 | generation | | | 526 | Display name abuse | Medium | High | 527 | Compromised system within originator's | High | Medium | 528 | network | | | 529 | Verification probe attack | Medium | Medium | 530 +---------------------------------------------+--------+------------+ 532 4.1.1. Theft of Private Key for Domain 534 Message signing technologies such as DKIM are vulnerable to theft of 535 the private keys used to sign messages. This includes "out-of-band" 536 means for this theft, such as burglary, bribery, extortion, and the 537 like, as well as electronic means for such theft, such as a 538 compromise of network and host security around the place where a 539 private key is stored. 541 Keys which are valid for all addresses in a domain typically reside 542 in MTAs which should be located in well-protected sites, such as data 543 centers. Various means should be employed for minimizing access to 544 private keys, such as non-existence of commands for displaying their 545 value, although ultimately memory dumps and the like will probably 546 contain the keys. Due to the unattended nature of MTAs, some 547 countermeasures, such as the use of a pass phrase to "unlock" a key, 548 are not practical to use. Other mechanisms, such as the use of 549 dedicated hardware devices which contain the private key and perform 550 the cryptographic signature operation, would be very effective in 551 denying access to the private key to those without physical access to 552 the device. Such devices would almost certainly make the theft of 553 the key visible, so that appropriate action (revocation of the 554 corresponding public key) can be taken should that happen. 556 4.1.2. Theft of Delegated Private Key 558 There are several circumstances where a domain owner will want to 559 delegate the ability to sign messages for the domain to an individual 560 user or a third-party associated with an outsourced activity such as 561 a corporate benefits administrator or a marketing campaign. Since 562 these keys may exist on less well-protected devices than the domain's 563 own MTAs, they will in many cases be more susceptible to compromise. 565 In order to mitigate this exposure, keys used to sign such messages 566 can be restricted by the domain owner to be valid for signing 567 messages only on behalf of specific addresses in the domain. This 568 maintains protection for the majority of addresses in the domain. 
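To make the delegated-key scenario concrete, the following is a minimal sketch of how a delegate might generate its own keypair and produce the public key record it hands to the domain owner for publication under a dedicated, independently revocable selector.  The record syntax (a base64-encoded public key in a DNS TXT record) and the selector name are assumptions drawn from contemporaneous DKIM drafts, not requirements of this document; any mechanism for restricting the key to specific addresses is likewise left to the signing specification.

   # Sketch only: per-delegate key generation.  The DNS record syntax and
   # the selector name "benefits2006" are illustrative assumptions.  The
   # delegate keeps the private key and sends only the public record
   # (signed, as recommended below) to the domain owner for publication.
   # Requires the "cryptography" package.
   import base64
   from cryptography.hazmat.primitives import serialization
   from cryptography.hazmat.primitives.asymmetric import rsa

   private_key = rsa.generate_private_key(public_exponent=65537,
                                          key_size=2048)
   public_der = private_key.public_key().public_bytes(
       encoding=serialization.Encoding.DER,
       format=serialization.PublicFormat.SubjectPublicKeyInfo,
   )
   txt_value = "k=rsa; p=" + base64.b64encode(public_der).decode("ascii")
   print('benefits2006._domainkey.example.com. IN TXT "%s"' % txt_value)

Publishing each delegate's key under its own selector lets the domain owner revoke that key (by removing or replacing the record) without disturbing keys used for the rest of the domain.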
A related threat is the exploitation of weaknesses in the delegation process itself.  Standard precautions need to be used when handling delegated keys to minimize their exposure to theft.  In particular, the delegate should generate the keypair to be used, and send the public key to the domain owner.  This transmission should be signed in order to minimize the possibility of an attacker substituting a different public key.

4.1.3.  Private Key Recovery via Side-Channel Attack

Side-channel attacks are techniques whereby the private key is recovered by observing characteristics of the signing process, such as the time required, power consumed, and other externally-observable factors.  Such an attack requires both the ability to submit messages for signing and the ability to measure the observable factor accurately.

An MTA probably has enough variables (system load, clock resolution, queuing delays, co-location with other equipment, etc.) to prevent observable factors from being measured accurately enough to be useful for a side-channel attack.  Furthermore, while some domains, e.g., consumer ISPs, would allow an attacker to submit messages for signature, with many other domains this is difficult.  Other mechanisms, such as mailing lists hosted by the domain, might be paths by which an attacker might submit messages for signature, and should also be considered as possible vectors for side-channel attacks.

4.1.4.  Chosen Message Replay

Chosen Message Replay (CMR) refers to the scenario where the attacker creates a message and obtains a signature for it by sending it through an MTA authorized by the originating domain to him/herself or an accomplice.  They then "replay" the signed message by sending it, using different envelope addresses, to a (typically large) number of other recipients.

Due to the requirement to get an attacker-generated message signed, Chosen Message Replay would most commonly be experienced by consumer ISPs or others offering email accounts to clients, particularly where there is little or no accountability to the account holder (the attacker in this case).  One approach to this problem is for the domain to only sign email for clients that have passed a vetting process to provide traceability to the message originator in the event of abuse.  At present, the price of email accounts (zero) is too low to make any vetting practical.  It remains to be seen whether this will be the model with signed mail as well, or whether a higher level of trust will be required to obtain an email signature.

Revocation of the signature or the associated key is a potential countermeasure.  However, the rapid pace at which the message might be replayed (especially with an army of "zombie" computers), compared with the time required to detect the attack and implement the revocation, is likely to be problematic.  A related problem is the likelihood that domains will use a small number of signing keys for a large number of customers, which is beneficial from a caching standpoint but is likely to result in a great deal of collateral damage (in the form of signature verification failures) should a key be revoked suddenly.

Signature revocation addresses the collateral damage problem at the expense of significant scaling requirements.
At the extreme, verifiers could be required to check for revocation of each signature verified, which would result in very significant transaction rates.  An alternative, "revocation identifiers", has been proposed that would permit revocation at an intermediate level of granularity, perhaps on a per-account basis.  Messages containing these identifiers would result in a query to a revocation database, which might be represented in DNS.

Further study is needed to determine if the benefits from revocation (given the potential speed of a replay attack) outweigh the transactional cost of querying a revocation database.

4.1.5.  Signed Message Replay

Signed Message Replay (SMR) refers to the retransmission of already-signed messages to additional recipients beyond those intended by the sender.  The attacker arranges to receive a message from the victim, and then retransmits it intact but with different envelope addresses.  This might be done, for example, to make it look like a legitimate sender of messages is sending a large amount of spam.  When reputation services are deployed, this could damage the originator's reputation.

A larger number of domains are potential victims of SMR than of CMR, because the former does not require that the attacker be able to send messages from the victim domain.  However, the capabilities of the attacker are lower.  Unless coupled with another attack such as body length limit abuse, it is not possible for the attacker to use this, for example, for advertising.

Many mailing lists, especially those which do not modify the content of the message and signed header fields and hence do not invalidate the signature, engage in a form of SMR.  The use of body length limits and other mechanisms to enhance the survivability of messages effectively enhances the ability to do so.  The only thing that distinguishes this case from undesirable forms of SMR is the intent of the replayer, which cannot be determined by the network.

4.1.6.  Denial-of-Service Attack Against Verifier

While it takes some compute resources to sign and verify a signature, it takes negligible compute resources to generate an invalid signature.  An attacker could therefore construct a "make work" attack against a verifier, by sending a large number of incorrectly-signed messages to a given verifier, perhaps with multiple signatures each.  The motivation might be to make it too expensive to verify messages.

While this attack is feasible, it can be greatly mitigated by the manner in which the verifier operates.  For example, it might decide to accept only a certain number of signatures per message, limit the maximum key size it will accept (to prevent outrageously large signatures from causing unneeded work), and verify signatures in a particular order.  The verifier could also maintain state representing the current signature verification failure rate and adopt a defensive posture when attacks may be underway.

4.1.7.  Denial-of-Service Attack Against Key Service

An attacker might also attempt to degrade the availability of an originator's key service, in order to cause that originator's messages to be unverifiable.  One way to do this might be to quickly send a large number of messages with signatures which reference a particular key, thereby creating a heavy load on the key server.
Other types of DoS attacks on the key server or the network infrastructure serving it are also possible.

The best defense against this attack is to provide redundant key servers, preferably on geographically-separate parts of the Internet.  Caching also helps a great deal, by decreasing the load on authoritative key servers when there are many simultaneous key requests.  The use of a key service protocol which minimizes the transactional cost of key lookups is also beneficial.  It is noted that the Domain Name System has all these characteristics.

4.1.8.  Canonicalization Abuse

Canonicalization algorithms represent a tradeoff between the survivability of a message signature and the desire not to allow the message to be altered inappropriately.  In the past, canonicalization algorithms have been proposed which would have permitted attackers, in some cases, to alter the meaning of a message.

Message signatures which support multiple canonicalization algorithms give the signer the ability to decide the relative importance of signature survivability and immutability of the signed content.  If an unexpected vulnerability appears in a canonicalization algorithm in general use, new algorithms can be deployed, although it will be a slow process because the signer can never be sure which algorithm(s) the verifier supports.  For this reason, canonicalization algorithms, like cryptographic algorithms, should undergo a wide and careful review process.

4.1.9.  Body Length Limit Abuse

A body length limit is an optional indication from the signer of how much content has been signed.  The verifier can either ignore the limit, verify the specified portion of the message, or truncate the message to the specified portion and verify it.  The motivation for this feature is the behavior of many mailing lists which add a trailer, perhaps identifying the list, at the end of messages.

When body length limits are used, there is the potential for an attacker to add content to the message.  It has been shown that this content, although at the end, can cover desirable content, especially in the case of HTML messages.

If the body length is not specified, or if the verifier decides to ignore the limit, body length limits are moot.  If the verifier or recipient truncates the message at the signed content, there is no opportunity for the attacker to add anything.

If the verifier observes body length limits when present, there is the potential that an attacker can make undesired content visible to the recipient.  The size of the appended content makes little difference, because it can simply be a URL reference pointing to the actual content.  Recipients need, at a minimum, a means to identify the unsigned content in the message.

4.1.10.  Use of Revoked Key

The benefit obtained by caching key records opens the possibility that keys which have been revoked may be used for some period of time after their revocation.  The best examples of this occur when a holder of a key delegated by the domain administrator must be unexpectedly deauthorized from sending mail on behalf of one or more addresses in the domain.

The caching of key records is normally short-lived, on the order of hours to days.
In many cases, this threat can be mitigated simply by setting a short time-to-live for keys not under the domain administrator's direct control (assuming, of course, that the time-to-live value can be specified for each record, as it can with DNS).  In some cases, such as the recovery following a stolen private key belonging to one of the domain's MTAs, the possibility of theft and the time required to revoke the key authorization must be considered when choosing a TTL.  The chosen TTL must be long enough to mitigate denial-of-service attacks and provide reasonable transaction efficiency, and no longer.

4.1.11.  Compromise of Key Server

Rather than attempting to obtain a private key, an attacker might instead focus efforts on the server used to publish public keys for a domain.  As in the key theft case, the motive might be to allow the attacker to sign messages on behalf of the domain.  This attack provides the attacker with the additional capability to remove legitimate keys from publication, thereby preventing the signatures on the domain's mail from verifying correctly.

The host which is the primary key server, such as a DNS master server for the domain, might be compromised.  Another approach might be to change the delegation of key servers at the next higher domain level.

This attack can be mitigated somewhat by independent monitoring to audit the key service.  Such auditing of the key service should occur by means of zone transfers rather than queries to the zone's primary server, so that the addition of records to the zone can be detected.

4.1.12.  Falsification of Key Service Replies

Replies from the key service may also be spoofed by a suitably positioned attacker.  For DNS, one way to do this is "cache poisoning", in which the attacker provides unsolicited (and incorrect) additional information in DNS replies, which is cached.

DNSSEC [RFC4033] is the preferred means of mitigating this threat, but the current uptake rate for DNSSEC is slow enough that one would not like to create a dependency on its deployment.  Fortunately, the vulnerabilities created by this attack are both localized and of limited duration, although records with relatively long TTL may be created with cache poisoning.

4.1.13.  Publication of Malformed Key Records and/or Signatures

In this attack, the attacker publishes suitably crafted key records or sends mail with intentionally malformed signatures, in an attempt to confuse the verifier and perhaps disable verification altogether.  This attack really exploits an implementation vulnerability (a buffer overflow or lack of bounds checking, for example) rather than a vulnerability of the signature mechanism itself.  This threat is best mitigated by careful implementation and creation of test suites that challenge the verification process.

4.1.14.  Cryptographic Weaknesses in Signature Generation

The cryptographic algorithms used to generate mail signatures, specifically the hash algorithm and the public-key encryption/decryption operations, may over time be subject to mathematical techniques that degrade their security.  At this writing, the SHA-1 hash algorithm is the subject of extensive mathematical analysis which has considerably lowered the time required to create two messages with the same hash value.
This trend can be expected to continue.

The message signature system must be designed to support multiple signature and hash algorithms, and the signing domain must be able to specify which algorithms it uses to sign messages.  The choice of algorithms must be published in key records, rather than in the signature itself, to ensure that an attacker is not able to create signatures using algorithms weaker than the domain wishes to permit.

Because the signer and verifier of email do not, in general, communicate directly, negotiation of the algorithms used for signing cannot occur.  In other words, a signer has no way of knowing which algorithm(s) a verifier supports, nor (due to mail forwarding) where the verifier is.  For this reason, it is expected that once message signing is widely deployed, algorithm change will occur slowly, and legacy algorithms will need to be supported for a considerable period.  Algorithms used for message signatures therefore need to be secure against expected cryptographic developments several years into the future.

4.1.15.  Display Name Abuse

Message signatures relate only to the address-specification portion of an email address, while some MUAs display only the display name portion of the address (and some recipients pay attention only to that portion).  This inconsistency leads to an attack where the attacker uses a From header field such as:

   From: "Dudley DoRight" <whiplash@example.org>

In this example, the attacker, whiplash@example.org, can sign the message and still convince some recipients that the message is from Dudley DoRight, who is presumably a trusted individual.  Coupled with the use of a throw-away domain or email address, it may be difficult to bring the attacker to account for the use of another's display name.

This is an attack which must be dealt with in the recipient's MUA.  One approach is to require that the signer's address specification (and not just the display name) be visible to the recipient.

4.1.16.  Compromised System Within Originator's Network

In many cases, MTAs may be configured to accept, and sign, messages which originate within the topological boundaries of the originator's network (i.e., within a firewall).  The increasing use of compromised systems to send email presents a problem for such policies, because the attacker, using a compromised system as a proxy, can generate signed mail at will.

Several approaches exist for mitigating this attack.  Authenticated submission, even within the network boundaries, can be used to limit the addresses for which the attacker may obtain a signature.  It may also make it possible to locate the compromised system that is the source of the messages more quickly.  Content analysis of outbound mail to identify undesirable and malicious content, as well as monitoring of the volume of messages being sent by users, may also prevent arbitrary messages from being signed and sent.

4.1.17.  Verification Probe Attack

As noted above, bad actors (attackers) can sign messages on behalf of domains they control.  Since they may also control the key service (e.g., the authoritative DNS name servers for the _domainkey subdomain), it is possible for them to observe public key lookups, and their source, when messages are verified.
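The paragraph that follows describes a concrete attack ("verification probe") built on this observation channel.  As a small illustration, the attacker-side correlation step might look like the sketch below, in which the per-recipient selector scheme and the name-server query log are purely hypothetical.

   # Sketch only: attacker-side correlation for a verification probe.
   # One hypothetical selector is assigned per probed address; selectors
   # later seen in the attacker's authoritative name-server query log
   # reveal which recipients' mail systems attempted DKIM verification.
   import hashlib

   def selector_for(address):
       return "s" + hashlib.sha256(address.encode("utf-8")).hexdigest()[:12]

   probed = ["alice@example.com", "bob@example.org"]
   by_selector = {selector_for(a): a for a in probed}

   # Selectors observed in the name-server query log (hypothetical):
   observed = [selector_for("alice@example.com")]
   verifiers = [by_selector[s] for s in observed if s in by_selector]
   print(verifiers)   # ['alice@example.com']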
One such attack, which we will refer to as a "verification probe", is to send a message with a DKIM signature to each of many addresses in a mailing list.  The messages need not contain valid signatures, and each instance of the message would typically use a different selector.  The attacker could then monitor key service requests and determine which selectors had been accessed, and correspondingly which addressees used DKIM verification.  This could be used to target future mailings at recipients who do not use DKIM verification, on the premise that these addressees are more likely to act on the message contents.

4.2.  Attacks Against Message Signing Policy

Summary of postulated attacks against signing policy:

   +---------------------------------------------+--------+------------+
   | Attack Name                                 | Impact | Likelihood |
   +---------------------------------------------+--------+------------+
   | Look-alike domain names                     | High   | High       |
   | Internationalized domain name abuse         | High   | Medium     |
   | Denial-of-service attack against signing    | Medium | Medium     |
   | policy                                      |        |            |
   | Use of multiple From addresses              | Low    | Medium     |
   | Abuse of third-party signatures             | Medium | High       |
   +---------------------------------------------+--------+------------+

4.2.1.  Look-Alike Domain Names

Attackers may attempt to circumvent the signing policy of a domain by using a domain name which is close to, but not the same as, the domain with a signing policy.  For instance, "example.com" might be replaced by "examp1e.com".  If the message is not to be signed, DKIM does not require that the domain used actually exist (although other mechanisms may make this a requirement).  Services exist to monitor domain registrations to identify potential domain name abuse, but naturally do not identify the use of unregistered domain names.

A related attack is possible when the MUA does not render the domain name in an easily recognizable format.  If, for example, a Chinese domain name is rendered in "punycode" as xn--cjsp26b3obxw7f.com, the unfamiliarity of that representation may enable other domains to more easily be mis-recognized as the expected domain.

Users that are unfamiliar with Internet naming conventions may also mis-recognize certain names.  For example, users may confuse online.example.com with online-example.com, the latter of which may have been registered by an attacker.

4.2.2.  Internationalized Domain Name Abuse

Internationalized domain names present a special case of the look-alike domain name attack described above.  Due to similarities in the appearance of many Unicode characters, domains (particularly those drawing characters from different groups) may be created which are visually indistinguishable from other, possibly high-value domains.  This is discussed in detail in Unicode TR 36 [UTR36].  Surveillance of domain registration records may point out some of these, but there are many such similarities.  As in the look-alike domain attack above, this technique may also be used to circumvent the sender signing policy of other domains.

4.2.3.  Denial-of-Service Attack Against Signing Policy

Just as the publication of public keys by a domain can be impacted by an attacker, so can the publication of Sender Signing Policy (SSP) by a domain.
In the case of SSP, the transmission of large amounts of unsigned mail purporting to come from the domain can result in a heavy transaction load requesting the SSP record.  More general DoS attacks against the servers providing the SSP records are possible as well.  This is of particular concern since the default signing policy is "we don't sign everything", which means that SSP, in effect, fails open.

As with defense against DoS attacks for key servers, the best defense against this attack is to provide redundant servers, preferably on geographically-separate parts of the Internet.  Caching again helps a great deal, and signing policy should rarely change, so TTL values can be relatively large.

4.2.4.  Use of Multiple From Addresses

Although this usage is never seen by most recipients, RFC 2822 [RFC2822] permits the From address to contain multiple address specifications.  The lookup of Sender Signing Policy is based on the From address, so if addresses from multiple domains are in the From address, the question arises as to which signing policy to use.  A rule (say, "use the first address") could be specified, but then an attacker could put a throwaway address prior to that of a high-value domain.  It is also possible for SSP to look at all addresses, and choose the most restrictive rule.  This is an area in need of further study.

4.2.5.  Abuse of Third-Party Signatures

In a number of situations, including mailing lists, event invitations, and "send this article to a friend" services, the DKIM signature on a message may not come from the originating address domain.  For this reason, "third-party" signatures, those attached by the mailing list, invitation service, or news service, frequently need to be regarded as having some validity.  Since this effectively makes it possible for any domain to sign any message, a sending domain may publish sender signing practices stating that it does not use such services, and accordingly that verifiers should view such signatures with suspicion.

However, the restrictions placed on a domain by publishing "no third-party" signing practices effectively disallow many existing uses of e-mail.  For the majority of domains that are unable to adopt these practices, an attacker may with some degree of success sign messages purporting to come from the domain.  For this reason, accreditation and reputation services, as well as locally-maintained whitelists and blacklists, will need to play a significant role in evaluating messages that have been signed by third parties.

5.  Derived Requirements

This section, as yet incomplete, is an attempt to capture a set of requirements for DKIM from the above discussion.  These requirements include:

   The store for key and SSP records must be capable of utilizing multiple geographically-dispersed servers.

   Key and SSP records must be cacheable, either by the verifier requesting them or by other infrastructure.

   The cache time-to-live for key records must be specifiable on a per-record basis.

   The signature algorithm(s) used by the signing domain must be specified independently of the message being verified, such as in the key record.

   The algorithm(s) used for message signatures need to be secure against expected cryptographic developments several years in the future.

6.
IANA Considerations 1033 This document defines no items requiring IANA assignment. 1035 7. Security Considerations 1037 This document describes the security threat environment in which 1038 DomainKeys Identified Mail (DKIM) is expected to provide some 1039 benefit, and presents a number of attacks relevant to its deployment. 1041 8. Informative References 1043 [I-D.allman-dkim-base] 1044 Allman, E., "DomainKeys Identified Mail (DKIM)", 1045 draft-allman-dkim-base-01 (work in progress), 1046 October 2005. 1048 [I-D.allman-dkim-ssp] 1049 Allman, E., "DKIM Sender Signing Policy", 1050 draft-allman-dkim-ssp-01 (work in progress), October 2005. 1052 [RFC1939] Myers, J. and M. Rose, "Post Office Protocol - Version 3", 1053 STD 53, RFC 1939, May 1996. 1055 [RFC2821] Klensin, J., "Simple Mail Transfer Protocol", RFC 2821, 1056 April 2001. 1058 [RFC2822] Resnick, P., "Internet Message Format", RFC 2822, 1059 April 2001. 1061 [RFC3501] Crispin, M., "INTERNET MESSAGE ACCESS PROTOCOL - VERSION 1062 4rev1", RFC 3501, March 2003. 1064 [RFC4033] Arends, R., Austein, R., Larson, M., Massey, D., and S. 1065 Rose, "DNS Security Introduction and Requirements", 1066 RFC 4033, March 2005. 1068 [UTR36] Davis, M. and M. Suignard, "Unicode Security 1069 Considerations", UTR 36, July 2005. 1071 Appendix A. Glossary 1073 Administrative Unit (AU) - The portion of the path of an email 1074 message that is under common administration. The originator and 1075 recipient typically develop trust relationships with the 1076 administrative units that send and receive their email, respectively, 1077 to perform the signing and verification of their messages. 1079 Origin address - The address on an email message, typically the RFC 1080 2822 From: address, which is associated with the alleged author of 1081 the message and is displayed by the recipient's MUA as the source of 1082 the message. 1084 More definitions to be added. 1086 Appendix B. Acknowledgements 1088 The author wishes to thank Phillip Hallam-Baker, Eliot Lear, Tony 1089 Finch, Dave Crocker, Barry Leiba, Arvel Hathcock, Eric Allman, Jon 1090 Callas, Stephen Farrell, Doug Otis, Frank Ellermann, and numerous 1091 others on the ietf-dkim mailing list for valuable suggestions and 1092 constructive criticism of earlier versions of this draft. 1094 Appendix C. Edit History 1096 Changes since draft-fenton-dkim-threats-00 draft: 1098 o Changed beginning of introduction to make it consistent with -base 1099 draft. 1101 o Clarified reasons for focus on externally-located bad actors. 1103 o Elaborated on reasons for effectiveness of address book attacks. 1105 o Described attack time windows with respect to replay attacks. 1107 o Added discussion of attacks using look-alike domains. 1109 o Added section on key management attacks. 1111 Changes since draft-fenton-dkim-threats-01 draft: 1113 o Reorganized description of bad actors. 1115 o Greatly expanded description of attacks against DKIM and SSP. 1117 o Added "derived requirements" section. 1119 Changes since draft-fenton-dkim-threats-02 draft: 1121 o Added description of reflection attack, verification probe attack, 1122 and abuse of third-party signatures. 1124 o Expanded description of key delegation attacks and look-alike 1125 domain names. 1127 o Numerous changes suggested by ietf-dkim mailing list participants. 1129 Author's Address 1131 Jim Fenton 1132 Cisco Systems, Inc. 1133 MS SJ-24/2 1134 170 W. 
Tasman Drive 1135 San Jose, CA 95134-1706 1136 USA 1138 Phone: +1 408 526 5914 1139 Email: fenton@cisco.com 1140 URI: 1142 Intellectual Property Statement 1144 The IETF takes no position regarding the validity or scope of any 1145 Intellectual Property Rights or other rights that might be claimed to 1146 pertain to the implementation or use of the technology described in 1147 this document or the extent to which any license under such rights 1148 might or might not be available; nor does it represent that it has 1149 made any independent effort to identify any such rights. Information 1150 on the procedures with respect to rights in RFC documents can be 1151 found in BCP 78 and BCP 79. 1153 Copies of IPR disclosures made to the IETF Secretariat and any 1154 assurances of licenses to be made available, or the result of an 1155 attempt made to obtain a general license or permission for the use of 1156 such proprietary rights by implementers or users of this 1157 specification can be obtained from the IETF on-line IPR repository at 1158 http://www.ietf.org/ipr. 1160 The IETF invites any interested party to bring to its attention any 1161 copyrights, patents or patent applications, or other proprietary 1162 rights that may cover technology that may be required to implement 1163 this standard. Please address the information to the IETF at 1164 ietf-ipr@ietf.org. 1166 Disclaimer of Validity 1168 This document and the information contained herein are provided on an 1169 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1170 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET 1171 ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, 1172 INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE 1173 INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1174 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1176 Copyright Statement 1178 Copyright (C) The Internet Society (2006). This document is subject 1179 to the rights, licenses and restrictions contained in BCP 78, and 1180 except as set forth therein, the authors retain all their rights. 1182 Acknowledgment 1184 Funding for the RFC Editor function is currently provided by the 1185 Internet Society.