1 E. Rescorla 2 RTFM, Inc. 3 B. Korver 4 INTERNET-DRAFT Network Alchemy 5 October 1999 (Expires April 2000) 7 Guidelines for Writing RFC Text on Security Considerations 9 Status of this Memo 11 This document is an Internet-Draft and is in full conformance with 12 all provisions of Section 10 of RFC2026. Internet-Drafts are working 13 documents of the Internet Engineering Task Force (IETF), its areas, 14 and its working groups. Note that other groups may also distribute 15 working documents as Internet-Drafts. 17 Internet-Drafts are draft documents valid for a maximum of six months 18 and may be updated, replaced, or obsoleted by other documents at any 19 time. It is inappropriate to use Internet-Drafts as reference mate- 20 rial or to cite them other than as ``work in progress.'' 22 The list of current Internet-Drafts can be accessed at 23 http://www.ietf.org/ietf/1id-abstracts.txt 25 The list of Internet-Draft Shadow Directories can be accessed at 26 http://www.ietf.org/shadow.html. 28 To learn the current status of any Internet-Draft, please check the 29 ``1id-abstracts.txt'' listing contained in the Internet-Drafts Shadow 30 Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe), 31 munnari.oz.au (Pacific Rim), ftp.ietf.org (US East Coast), or 32 ftp.isi.edu (US West Coast). 34 1. Introduction 36 All RFCs are required by [RFC1543] to contain a Security Considera- 37 tions section. The purpose of this is both to encourage document 38 authors to consider security in their designs and to inform the 39 reader of relevant security issues. This memo is intended to provide 40 guidance to RFC authors in service of both ends. 42 This document is structured in three parts. The first is a combina- 43 tion security tutorial and definition of common terms; the second is 44 a series of guidelines for writing Security Considerations; the third 45 is a series of examples. 47 2. The Goals of Security 49 Most people speak of security as if it were a single monolithic prop- 50 erty of a protocol or system, but upon reflection that's very clearly 51 not true. Rather, security is a series of related but somewhat inde- 52 pendent properties. Not all of these properties are required for 53 every application. 55 Internet-Draft Security Considerations Guidelines 57 We can loosely divide security goals into those related to protecting 58 communications (COMMUNICATIONS SECURITY) and those relating to pro- 59 tecting systems (SYSTEMS SECURITY). Since communications are carried 60 out by systems and access to systems is through communications chan- 61 nels, these goals obviously interlock, but they can also be indepen- 62 dently provided. 64 2.1.
Communications Security 66 Different authors partition the goals of communications security dif- 67 ferently. The partitioning we've found most useful is to divide them 68 into three major categories: CONFIDENTIALITY, MESSAGE INTEGRITY and 69 ENDPOINT AUTHENTICATION. 71 2.1.1. Confidentiality 73 When most people think of security, they think of CONFIDENTIALITY. 74 Confidentiality means that your data is kept secret from unintended 75 listeners. Usually, these listeners are simply eavesdroppers. When 76 the government taps your phone, that poses a risk to your confiden- 77 tiality. 79 Obviously, if you have secrets, you're concerned that no-one else 80 knows them and so at minimum you want confidentiality. When you see 81 spies in the movies go into the bathroom and turn on all the water to 82 foil bugging, the property they're looking for is confidentiality. 84 2.1.2. Message Integrity 86 The second primary goal is MESSAGE INTEGRITY. The basic idea here is 87 that we want to be sure that the message we receive is the one that 88 the sender sent. In paper-based systems, some message integrity comes 89 automatically. When you receive a letter written in pen you can be 90 fairly certain that no words have been removed by an attacker because 91 pen marks are difficult to remove from paper. However, an attacker 92 could have easily added some marks to the paper and completely 93 changed the meaning of the message. Similarly, it's easy to shorten 94 the page to truncate the message. 96 On the other hand, in the electronic world, since all bits look 97 alike, it's trivial to tamper with messages in transit. You simply 98 remove the message from the wire, copy out the parts you like, add 99 whatever data you want, and generate a new message of your choosing, 100 and the recipient is no wiser. This is the moral equivalent of the 101 attacker taking a letter you wrote, buying some new paper and recopy- 102 ing the message, changing it as he does it. It's just a lot easier to 103 do electronically since all bits look alike. 105 Internet-Draft Security Considerations Guidelines 107 2.1.3. Endpoint authentication 109 The third property we're concerned with is ENDPOINT AUTHENTICATION. 110 What we mean by this is that we know that one of the endpoints in the 111 communication is the one we intended. Without endpoint authentica- 112 tion, it's very difficult to provide either confidentiality or mes- 113 sage integrity. For instance, if we receive a message from Alice, 114 the property of message integrity doesn't do us much good unless we 115 know that it was in fact sent by Alice and not the attacker. Simi- 116 larly, if we want to send a confidential message to Bob, it's not of 117 much value to us if we're actually sending a confidential message to 118 the attacker. 120 Note that endpoint authentication can be provided asymmetrically. 121 When you call someone on the phone, you can be fairly certain that 122 you have the right person -- or at least that you got a person who's 123 actually at the phone number you called. On the other hand, if they 124 don't have caller ID, then the receiver of a phone call has no idea 125 who's calling them. Calling someone on the phone is an example of 126 recipient authentication, since you know who the recipient of the 127 call is, but they don't know anything about the sender. 129 On the other hand, cash is an example of sender authentication. A 130 dollar bill is like a message signed by the government. 
The govern- 131 ment has no idea who's got any given dollar bill but you can be 132 fairly certain that any bill was actually printed by the US Mint 133 because currency is difficult to forge. 135 2.2. Systems Security 137 In general, systems security is concerned with protecting one's 138 machines and data. The intent is that machines should be used only by 139 authorized users and for the purposes that the owners intend. Fur- 140 thermore, they should be available for those purposes. Attackers 141 should not be able to deprive legitimate users of resources. 143 2.2.1. Unauthorized Usage 145 Most systems are not intended to be completely accessible to the pub- 146 lic. Rather, they are intended to be used only by certain authorized 147 individuals. Although many Internet services are available to all 148 Internet users, even those servers generally offer a larger subset of 149 services to specific users. For instance, Web Servers often will 150 serve data to any user, but restrict the ability to modify pages to 151 specific users. Such modifications by the general public would be 152 UNAUTHORIZED USAGE. 154 Internet-Draft Security Considerations Guidelines 156 2.2.2. Inappropriate Usage 158 Being an authorized user does not mean that you have free run of the 159 system. As we said above, some activities are restricted to autho- 160 rized users, some to specific users, and some activities are gener- 161 ally forbidden to all but administrators. Moreover, even activities 162 which are in general permitted might be forbidden in some cases. For 163 instance, users may be permitted to send email but forbidden from 164 sending files above a certain size, or files which contain viruses. 165 These are examples of INAPPROPRIATE USAGE. 167 2.2.3. Denial of Service 169 Recall that our third goal was that the system should be available to 170 legitimate users. A broad variety of attacks are possible which 171 threaten such usage. Such attacks are collectively referred to as 172 DENIAL OF SERVICE attacks. Denial of service attacks are often very 173 easy to mount and difficult to stop. Many such attacks are designed 174 to consume machine resources, making it difficult or impossible to 175 serve legitimate users. Other attacks cause the target machine to 176 crash, completely denying service to users. 178 3. The Internet Threat Model 180 A THREAT MODEL describes the capabilities that an attacker is assumed 181 to be able to deploy against a resource. It should contain such 182 information as the resources available to an attacker in terms of 183 information, computing capability, and control of the system. The 184 purpose of a threat model is twofold. First, we wish to identify the 185 threats we are concerned with. Second, we wish to rule some threats 186 explicitly out of scope. Nearly every security system is vulnerable 187 to a sufficiently dedicated and resourceful attacker. 189 The Internet environment has a fairly well understood threat model. 190 In general, we assumed that the end-systems engaging in a protocol 191 exchange have not themselves been compromised. Protecting against an 192 attack when one of the end-systems has been compromised is extraordi- 193 narily difficult. It is, however, possible to design protocols which 194 minimize the extent of the damage done under these circumstances. 196 By contrast, we assume that the attacker has nearly complete control 197 of the communications channel over which the end-systems communicate. 
198 This means that the attacker can read any PDU (Protocol Data Unit) on 199 the network and undetectably remove, change, or inject forged packets 200 onto the wire. This includes being able to generate packets that 201 appear to be from a trusted machine. Thus, even if the end-system 202 with which you wish to communicate is itself secure, the Internet 204 Internet-Draft Security Considerations Guidelines 206 environment provides no assurance that packets which claim to be from 207 that system in fact are. 209 It's important to realize that the meaning of a PDU is different at 210 different levels. At the IP level, a PDU means an IP packet. At the 211 TCP level, it means a TCP segment. At the application layer, it means 212 some kind of application PDU. For instance, at the level of email, it 213 might either mean an RFC-822 message or a single SMTP command. At the 214 HTTP level, it might mean a request or response. 216 3.1. Limited Threat Models 218 As we've said, a resourceful and dedicated attacker can control the 219 entire communications channel. However, a large number of attacks can 220 be mounted by an attacker with fewer resources. A number of cur- 221 rently known attacks can be mounted by an attacker with limited con- 222 trol of the network. For instance, password sniffing attacks can be 223 mounted by an attacker who can only read arbitrary packets. This is 224 generally referred to as a PASSIVE ATTACK. 226 By contrast, Morris's sequence number guessing attack [REF] can be 227 mounted by an attacker who can write but not read arbitrary packets. 228 Any attack which requires the attacker to write to the network is 229 known as an ACTIVE ATTACK. 231 Thus, a useful way of organizing attacks is to divide them based on 232 the capabilities required to mount the attack. The rest of this sec- 233 tion describes these categories and provides some examples of each 234 category. 236 3.2. Passive Attacks 238 In a passive attack, the attacker reads packets off the network but 239 does not write them. The simplest way to mount such an attack is to 240 simply be on the same LAN as the victim. On most common LAN configu- 241 rations, including Ethernet, 802.3, and FDDI, any machine on the wire 242 can read all traffic destined for any other machine on the same LAN. 243 Note that switching hubs make this sort of sniffing substantially 244 more difficult, since traffic destined for a machine only goes to the 245 network segment which that machine is on. 247 Similarly, an attacker who has control of a host in the communica- 248 tions path between two victim machines is able to mount a passive 249 attack on their communications. It is also possible to compromise 250 the routing infrastructure to specifically arrange that traffic 251 passes through a compromised machine. This might involve an active 252 attack on the routing infrastructure to facilitate a passive attack 253 on a victim machine. 255 Internet-Draft Security Considerations Guidelines 257 Wireless communications channels deserve special consideration. 258 Since the data is simply broadcast on well-known radio frequencies, 259 an attacker simply needs to be able to receive those transmissions. 260 Such channels are especially vulnerable to passive attacks. 262 In general, the goal of a passive attack is to obtain information 263 which the sender and receiver would rather remain private. 
Examples 264 of such information include credentials useful in the electronic 265 world such as passwords or credentials useful in the outside world, 266 such as confidential business information. 268 3.2.1. Privacy Violations 270 The classic example of passive attack is sniffing some inherently 271 private data off of the wire. For instance, despite the wide avail- 272 ability of SSL, many credit card transactions still traverse the 273 Internet in the clear. An attacker could sniff such a message and 274 recover the credit card number, which can then be used to make fraud- 275 ulent transactions. Moreover, confidential business information is 276 routinely transmitted over the network in the clear in email. 278 3.2.2. Password Sniffing 280 Another example of a passive attack is PASSWORD SNIFFING. Password 281 sniffing is directed towards obtaining unauthorized use of resources. 282 Many protocols, including TELNET [REF], POP [REF], and NNTP [REF], 283 use a shared password to authenticate the client to the server. Fre- 284 quently, this password is transmitted from the client to the server 285 in the clear over the communications channel. An attacker who can 286 read this traffic can therefore capture the password and REPLAY it. 287 That is to say that he can initiate a connection to the server and 288 pose as the client and login using the captured password. 290 Note that although the login phase of the attack is active, the 291 actual password capture phase is passive. Moreover, unless the server 292 checks the originating address of connections, the login phase does 293 not require any special control of the network. 295 3.2.3. Offline Cryptographic Attacks 297 Many cryptographic protocols are subject to OFFLINE ATTACKS. In such 298 a protocol, the attacker recovers data which has been processed using 299 the victim's secret key and then mounts a cryptanalytic attack on 300 that key. Passwords make a particularly vulnerable target because 301 they are typically low entropy. A number of popular password-based 302 challenge response protocols are vulnerable to DICTIONARY SEARCH. The 303 attacker captures a challenge-response pair and then proceeds to try 304 dictionary words until he finds a password that produces the right 306 Internet-Draft Security Considerations Guidelines 308 response. 310 A similar such attack can be mounted on a local network when NIS is 311 used. The Unix password is crypted using a one-way function, but 312 tools exist to break such crypted passwords [REF: Crack]. When NIS 313 is used, the crypted password is transmitted over the local network 314 and an attacker can thus sniff the password and attack it. 316 Historically, it has also been possible to exploit small operating 317 system security holes to recover the password file using an active 318 attack. These holes can then be bootstrapped into an actual account 319 by using the aforementioned offline password recovery techniques. 320 Thus we combine a low-level active attack with an offline passive 321 attack. 323 3.3. Active Attacks 325 When an attack involves writing data to the network, we refer to this 326 as an ACTIVE ATTACK. When IP is used without IPSEC, there is no 327 authentication for the sender address. As a consequence, it's 328 straightforward for an attacker to create a packet with a source 329 address of his choosing. We'll refer to this as a SPOOFING ATTACK. 331 Under certain circumstances, such a packet may be screened out by the 332 network. 
For instance, many packet filtering firewalls screen out all 333 packets with source addresses on the INTERNAL network that arrive on 334 the EXTERNAL interface. Note, however, that this provides no protec- 335 tion against an attacker who is inside the firewall. In general, 336 designers should assume that attackers can forge packets. 338 However, the ability to forge packets does not go hand in hand with 339 the ability to receive arbitrary packets. In fact, there are active 340 attacks that involve being able to send forged packets but not 341 receive the responses. We'll refer to these as BLIND ATTACKS. 343 Note that not all active attacks require forging addresses. For 344 instance, the TCP SYN denial of service attack [REF] can be mounted 345 successfully without disguising the sender's address. However, it is 346 common practice to disguise one's address in order to conceal one's 347 identity if an attack is discovered. 349 Each protocol is susceptible to specific active attacks, but experi- 350 ence shows that a number of common patterns of attack can be adapted 351 to any given protocol. The next sections describe a number of these 352 patterns and give specific examples of them as applied to known pro- 353 tocols. 355 Internet-Draft Security Considerations Guidelines 357 3.3.1. Replay Attacks 359 In a REPLAY ATTACK, the attacker records a sequence of messages off 360 of the wire and plays them back to the party which originally 361 received them. Note that the attacker does not need to be able to 362 understand the messages. He merely needs to capture and retransmit 363 them. 365 For example, consider the case where an S/MIME message is being used 366 to request some service, such as a credit card purchase or a stock 367 trade. An attacker might wish to have the service executed twice, if 368 only to inconvenience the victim. He could capture the message and 369 replay it, even though he can't read it, causing the transaction to 370 be executed twice. 372 3.3.2. Message Insertion 374 In a MESSAGE INSERTION attack, the attacker forges a message with 375 some chosen set of properties and injects it into the network. Often 376 this message will have a forged source address in order to disguise 377 the identity of the attacker. 379 For example, a denial-of-service attack can be mounted by inserting a 380 series of spurious TCP SYN packets [REF] directed towards the target 381 host. The target host responds with its own SYN and allocates kernel 382 data structures for the new connection. The attacker never completes 383 the 3-way handshake, so the allocated connection endpoints just sit 384 there taking up kernel memory. Typical TCP stack implementations only 385 allow some limited number of connections in this "half-open" state 386 and when this limit is reached, no more connections can be initiated, 387 even from legitimate hosts. Note that this attack is a blind attack, 388 since the attacker does not need to process the victim's SYNs. 390 3.3.3. Message Deletion 392 In a MESSAGE DELETION attack, the attacker removes a message from the 393 wire. Morris's sequence number guessing attack [REF] often requires 394 a message deletion attack to be performed successfully. In this blind 395 attack, the host whose address is being forged will receive a spuri- 396 ous TCP SYN packet from the host being attacked. Receipt of this SYN 397 packet generates a RST, which would tear the illegitimate connection 398 down. 
In order to prevent this host from sending a RST so that the 399 attack can be carried out successfully, Morris describes flooding 400 this host to create queue overflows such that the SYN packet is lost 401 and thus never responded to. 403 Internet-Draft Security Considerations Guidelines 405 3.3.4. Message Modification 407 In a MESSAGE MODIFICATION attack, the attacker removes a message from 408 the wire, modifies it, and reinjects it into the network. This sort 409 of attack is particularly useful if the attacker wants to send some 410 of the data in the message but also wants to change some of it. 412 Consider the case where the attacker wants to attack an order for 413 goods placed over the Internet. He doesn't have the victim's credit 414 card number so he waits for the victim to place the order and then 415 replaces the delivery address (and possibly the goods description) 416 with his own. Note that this particular attack is known as a CUT-AND- 417 PASTE attack since the attacker cuts the credit card number out of 418 the original message and pastes it into the new message. 420 3.3.5. Man-In-The-Middle 422 A MAN-IN-THE-MIDDLE attack combines the above techniques in a special 423 form: The attacker subverts the communication stream in order to pose 424 as the sender to the receiver and the receiver to the sender: 426 What Alice and Bob think: 427 Alice <----------------------------------------------> Bob 429 What's happening: 430 Alice <----------------> Attacker <----------------> Bob 432 This differs fundamentally from the above forms of attack because it 433 attacks the identity of the communicating parties, rather than the 434 data stream itself. Consequently, many techniques which provide 435 integrity of the communications stream are insufficient to protect 436 against man-in-the-middle attacks. 438 Man-in-the-middle attacks are possible whenever a client/server pro- 439 tocol lacks MUTUAL ENDPOINT AUTHENTICATION. For instance, if an 440 attacker can hijack the client TCP connection during the TCP hand- 441 shake (perhaps by responding to the client's SYN before the server 442 does), then the attacker can open another connection to the server 443 and begin a man-in-the-middle attack. 445 4. Common Issues 447 Although each system's security requirements are unique, certain com- 448 mon requirements appear in a number of protocols. Often, when naive 449 protocol designers are faced with these requirements, they choose an 450 obvious but insecure solution even though better solutions are available. 451 This section describes a number of issues seen in many protocols and 453 Internet-Draft Security Considerations Guidelines 455 the common pieces of security technology that may be useful in 456 addressing them. 458 4.1. User Authentication 460 Essentially every system which wants to control access to its 461 resources needs some way to authenticate users. A nearly uncountable 462 number of such mechanisms have been designed for this purpose. The 463 next several sections describe some of these techniques. 465 4.1.1. Username/Password 467 The most common access control mechanism is simple USERNAME/PASSWORD. 468 The user provides a username and a reusable password to the host 469 which he wishes to use. This system is vulnerable to a simple passive 470 attack where the attacker sniffs the password off the wire and then 471 initiates a new session, presenting the password. This threat can be 472 mitigated by hosting the protocol across an encrypted connection such 473 as TLS or IPSEC. Unprotected (plaintext) username/password systems 474 are not acceptable in IETF standards. 476 4.1.2. Challenge Response and One Time Passwords 478 Systems which desire greater security than USERNAME/PASSWORD often 479 employ either a ONE TIME PASSWORD [OTP] scheme or a CHALLENGE- 480 RESPONSE. In a one time password scheme, the user is provided with a 481 list of passwords, which must be used in sequence, one time each. 482 (Often these passwords are generated from some secret key so the user 483 can simply compute the next password in the sequence.) Secure-ID is a 484 variant of this scheme. In a challenge-response scheme, the host and 485 the user share some secret (which often is represented as a pass- 486 word). In order to authenticate the user, the host presents the user 487 with a (randomly generated) challenge. The user computes some func- 488 tion based on the challenge and the secret and provides that to the 489 host, which verifies it. Often this computation is performed in a 490 handheld device, such as a DES Gold card.
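   The following is a minimal, purely illustrative sketch of the
   challenge-response exchange just described, with an HMAC standing in
   for the response function. The function names are invented for this
   example; real systems differ in the response function used and in how
   the shared secret is provisioned and stored.

       import hmac
       import hashlib
       import os

       def make_challenge():
           # Host side: generate a fresh random challenge for this login.
           return os.urandom(16)

       def compute_response(shared_secret, challenge):
           # User side (often a handheld token): combine the challenge
           # with the shared secret.
           return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

       def verify_response(shared_secret, challenge, response):
           # Host side: recompute the expected response and compare it in
           # constant time.
           expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
           return hmac.compare_digest(expected, response)

       secret = b"correct horse"       # low-entropy shared secret (a password)
       challenge = make_challenge()
       response = compute_response(secret, challenge)
       assert verify_response(secret, challenge, response)

   Note that an eavesdropper who records the (challenge, response) pair
   has exactly the material needed for the offline keysearch attack
   discussed next.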
492 Both types of scheme provide protection against replay attack, but 493 are often still vulnerable to an OFFLINE KEYSEARCH ATTACK (a form of pas- 494 sive attack): As previously mentioned, often the one-time password or 495 response is computed from a shared secret. If the attacker knows the 496 function being used, he can simply try all possible shared secrets 497 until he finds one that produces the right output. This is made eas- 498 ier if the shared secret is a password, in which case he can mount a 499 DICTIONARY ATTACK--meaning that he tries a list of common words 500 rather than just random strings. 502 Internet-Draft Security Considerations Guidelines 504 These systems are also often vulnerable to an active attack. Unless 505 communications security is provided for the entire session, the 506 attacker can simply wait until authentication has been performed and 507 hijack the connection. 509 4.1.3. Certificates 511 A simple approach is to have all users have certificates [PKIX] which 512 they then use to authenticate in some protocol-specific way, as in 513 [TLS] or [S/MIME]. The primary obstacle to this approach in client- 514 server type systems is that it requires clients to have certificates, 515 which can be a deployment problem. 517 4.1.4. Some Uncommon Systems 519 There are ways to do a better job than the schemes mentioned above, 520 but they typically don't add much security unless communications 521 security (at least message integrity) will be employed to secure the 522 connection, because otherwise the attacker can merely hijack the con- 523 nection after authentication has been performed. A number of proto- 524 cols ([EKE], [SPEKE], [SRP2]) allow one to securely bootstrap a 525 user's password into a shared key which can be used as input to a 526 cryptographic protocol. Similarly, the user can authenticate using 527 public key certificates (e.g. S-HTTP client authentication). Typi- 528 cally these methods are used as part of a more complete security pro- 529 tocol. 531 4.1.5. Host Authentication 533 Host authentication presents a special problem. Quite commonly, the 534 addresses of services are presented using a DNS hostname, for 535 instance as a URL [REF]. When requesting such a service, one must 536 ensure that the entity that one is talking to not only has a certifi- 537 cate but that that certificate corresponds to the expected identity 538 of the server. The important thing to have is a secure binding 539 between the certificate and the expected hostname. 541 For instance, it is usually not acceptable for the certificate to 542 contain an identity in the form of an IP address if the request was 543 for a given hostname. This does not provide end-to-end security 544 because the hostname-IP mapping is not secure unless secure DNS [REF] 545 is being used. This is a particular problem when the hostname is pre- 546 sented at the application layer but the authentication is performed 547 at some lower layer.
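   As an illustration of the binding described above, a client using a
   modern TLS library typically supplies the hostname it intended to
   reach and lets the library verify both the certificate chain and the
   hostname match. A minimal sketch (Python's standard ssl module is
   used purely as an example; this section does not assume any
   particular library):

       import socket
       import ssl

       def connect_checking_hostname(expected_hostname, port=443):
           # The hostname the application intended to reach -- not the IP
           # address the connection happens to use -- is what the server's
           # certificate is checked against.
           context = ssl.create_default_context()
           raw = socket.create_connection((expected_hostname, port))
           return context.wrap_socket(raw, server_hostname=expected_hostname)

   If this check is skipped, or is performed against a name the attacker
   controls, the certificate proves only that the peer has some
   certificate, not that it is the server the application intended to
   reach.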
549 Internet-Draft Security Considerations Guidelines 551 4.2. Authorization vs. Authentication 553 AUTHORIZATION is the process by which one determines whether an 554 authenticated party has permission to access a particular resource or 555 service. Although the two are tightly bound, it is important to realize that 556 authentication and authorization are two separate mechanisms. Per- 557 haps because of this tight coupling, authentication is sometimes mis- 558 takenly thought to imply authorization. Authentication simply iden- 559 tifies a party; authorization defines whether they can perform a cer- 560 tain action. 562 Authorization necessarily relies on authentication, but authentica- 563 tion alone does not imply authorization. Rather, before granting 564 permission to perform an action, the authorization mechanism must be 565 consulted to determine whether that action is permitted. 567 4.2.1. Access Control Lists 569 One common form of authorization mechanism is an access control list 570 (ACL) that lists users that are permitted access to a resource. Since 571 assigning individual authorization permissions to each resource is 572 tedious, often resources are hierarchically arranged such that the 573 parent resource's ACL is inherited by child resources. This allows 574 administrators to set top level policies and override them when nec- 575 essary.
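   A sketch of the inheritance rule just described (the resource names
   and table layout are invented for illustration):

       # The most specific ACL entry for a resource wins; resources with no
       # entry of their own inherit from their nearest ancestor.
       ACLS = {
           "/":        {"alice", "bob", "admin"},   # top level policy
           "/payroll": {"admin"},                   # override for one subtree
       }

       def effective_acl(resource):
           # Walk from the resource toward the root until an ACL is found.
           while resource not in ACLS and resource != "/":
               resource = resource.rsplit("/", 1)[0] or "/"
           return ACLS.get(resource, set())

       def permitted(user, resource):
           return user in effective_acl(resource)

       assert permitted("bob", "/marketing/plans")       # inherited from "/"
       assert not permitted("bob", "/payroll/salaries")  # overridden at "/payroll"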
577 4.2.2. Certificate Based Systems 579 While the distinction between authentication and authorization is 580 intuitive when using simple authentication mechanisms such as user- 581 name and password (i.e., everyone understands the difference between 582 the administrator account and a user account), with more complex 583 authentication mechanisms the distinction is sometimes lost. 585 With certificates, for instance, presenting a valid signature does 586 not imply authorization. The signature must be backed by a certifi- 587 cate chain that contains a trusted root, and that root must be 588 trusted in the given context. For instance, users who possess cer- 589 tificates issued by the Acme MIS CA may have different web access 590 privileges than users who possess certificates issued by the Acme 591 Accounting CA, even though both of these CAs are "trusted" by the 592 Acme web server. 594 Mechanisms for enforcing these more complicated properties have not 595 yet been completely explored. One approach is simply to attach poli- 596 cies to ACLs describing what sorts of certificates are trusted. 597 Another approach is to carry that information with the certificate, 598 either as a certificate extension/attribute [PKIX, SPKI] or as a 600 Internet-Draft Security Considerations Guidelines 602 separate "Attribute Certificate". 604 4.3. Providing Traffic Security 606 Securely designed protocols should provide some mechanism for secur- 607 ing (meaning integrity protecting, authenticating, and possibly 608 encrypting) all sensitive traffic. One approach is to secure the pro- 609 tocol itself, as in Secure DNS [REF], S/MIME [REF] or S-HTTP [REF]. 610 Although this provides security which is most fitted to the protocol, 611 it also requires considerable effort to get right. 613 Many protocols can be adequately secured using one of the available 614 channel security systems. We'll discuss the two most common, IPSEC 615 [REF] and SSL/TLS [REF]. 617 4.3.1. IPSEC 619 When used, IPSEC can provide security for all traffic between two 620 hosts. When working, IPSEC is transparent to the programmer who can 621 issue ordinary networking calls as usual. The primary problem with 622 IPSEC is deployment. In general, it must be implemented directly in 623 the operating system protocol stack and many operating systems don't 624 have it. Note the following tradeoff: because IPSec happens at the IP 625 layer, important security information (such as identity) is often not 626 available to the protocol. 628 4.3.2. SSL/TLS 630 The currently most common approach is to use SSL or its successor 631 TLS. They provide channel security for a TCP connection at the appli- 632 cation level. That is, they run over TCP. SSL implementations typi- 633 cally provide a Berkeley Sockets-like interface for easy programming. 634 The primary issue when designing a protocol solution around TLS is to 635 differentiate between connections protected using TLS and those which 636 are not. 638 The two primary approaches used are to have a separate well-known 639 port for TLS connections (e.g. the HTTP over TLS port is 443) [REF] 640 or to have a mechanism for negotiating upward from the base protocol 641 to TLS [REF: SMTP/TLS, HTTP Upgrade]. When an upward negotiation 642 strategy is used, care must be taken to ensure that an attacker can 643 not force a clear connection when both parties wish to use TLS.
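   The two deployment styles can be sketched as follows (Python and its
   ssl module are used purely for illustration; request_upgrade and
   upgrade_accepted are placeholders for whatever protocol-specific
   negotiation commands a given protocol defines):

       import socket
       import ssl

       def connect_separate_port(host, tls_port):
           # Style 1: a dedicated well-known port on which TLS is spoken
           # from the first byte.
           context = ssl.create_default_context()
           raw = socket.create_connection((host, tls_port))
           return context.wrap_socket(raw, server_hostname=host)

       def connect_with_upgrade(host, port, request_upgrade, upgrade_accepted):
           # Style 2: start in the clear, then negotiate upward to TLS.
           # The client must fail closed: if it wants TLS and the peer
           # appears to refuse, it must not silently fall back to cleartext,
           # or an attacker who can modify traffic can force a downgrade.
           context = ssl.create_default_context()
           raw = socket.create_connection((host, port))
           request_upgrade(raw)
           if not upgrade_accepted(raw):
               raw.close()
               raise RuntimeError("peer refused TLS; not continuing in the clear")
           return context.wrap_socket(raw, server_hostname=host)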
645 4.4. Object vs. Channel Security 647 It's useful to make the conceptual distinction between object secu- 648 rity and channel security. Object security refers to security mea- 649 sures which apply to entire data objects. Channel security measures 651 Internet-Draft Security Considerations Guidelines 653 provide a secure channel over which objects may be carried transpar- 654 ently but the channel has no special knowledge about object bound- 655 aries. 657 Consider the case of an email message. When it's carried over an 658 IPSEC or TLS secured connection, the message is protected during 659 transmission. However, it is unprotected in the receiver's mailbox, 660 and in intermediate spool files along the way. 662 By contrast, when an email message is protected with S/MIME or 663 OpenPGP, the entire message is encrypted and integrity protected 664 until it is examined and decrypted by the recipient. It also pro- 665 vides strong authentication of the actual sender, as opposed to the 666 machine the message came from. This is object security. Moreover, 667 the receiver can prove the signed message's authenticity to a third 668 party. 670 Note that the difference between object and channel security is a 671 matter of perspective. Object security at one layer of the protocol 672 stack often looks like channel security at the next layer up. So, 673 from the perspective of the IP layer, each packet looks like an indi- 674 vidually secured object. But from the perspective of a web client, 675 IPSEC just provides a secure channel. 677 The distinction isn't always clear-cut. For example, S-HTTP provides 678 object level security for a single HTTP transaction, but a web page 679 typically consists of multiple HTTP transactions (the base page and 680 numerous inline images). Thus, from the perspective of the total web 681 page, this looks rather more like channel security. Object security 682 for a web page would consist of security for the transitive closure 683 of the page and all its embedded content as a single unit.
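   A toy sketch of the distinction: with channel security the protection
   lives in the transport and disappears once the object is written to
   disk, while with object security the protection travels with the
   object itself. A keyed MAC with a shared key stands in here for the
   public-key signatures that S/MIME or OpenPGP would actually use:

       import hmac
       import hashlib

       def protect_object(key, message):
           # Object security: attach an integrity check to the message so
           # that it stays protected across relays and spool files.
           tag = hmac.new(key, message, hashlib.sha256).hexdigest().encode()
           return message + b"\n--integrity: " + tag

       def verify_object(key, protected):
           message, _, tag = protected.rpartition(b"\n--integrity: ")
           expected = hmac.new(key, message, hashlib.sha256).hexdigest().encode()
           return message if hmac.compare_digest(expected, tag) else None

       key = b"shared key"
       stored = protect_object(key, b"Subject: hello\r\n\r\nPlease ship the goods.")
       # ... relayed through any number of untrusted hops and spool files ...
       assert verify_object(key, stored) is not None

   By contrast, channel security (IPSEC or TLS, as sketched earlier)
   protects whatever bytes cross a single connection and knows nothing
   about the message once it has been delivered.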
685 5. Writing Security Considerations Sections 687 While it is not a requirement that any given protocol or system be 688 immune to all forms of attack, it is still necessary for authors to 689 consider them. Part of the purpose of the Security Considerations 690 section is to explain what attacks are out of scope and what counter- 691 measures can be applied to defend against them. 693 There should be a clear description of the kinds of threats on the 694 described protocol or technology. This should be approached as an 695 effort to perform "due diligence" in describing all known or foresee- 696 able risks and threats to potential implementers and users. 698 Authors MUST describe 700 Internet-Draft Security Considerations Guidelines 702 1. which attacks are out of scope (and why!) 703 2. which attacks are in-scope 704 2.1 and the protocol is susceptible to 705 2.2 and the protocol protects against 707 At least the following forms of attack MUST be considered: eavesdrop- 708 ping, replay, message insertion, deletion, modification, and man-in- 709 the-middle. If the protocol incorporates cryptographic protection 710 mechanisms, it should be clearly indicated which portions of the data 711 are protected and what the protections are (i.e. integrity only, con- 712 fidentiality, and/or endpoint authentication, etc.). Some indication 713 should also be given of what sorts of attacks the cryptographic pro- 714 tection is susceptible to. Data which should be held secret (keying 715 material, random seeds, etc.) should be clearly labeled. 717 If the technology involves authentication, particularly user-host 718 authentication, the security of the authentication method MUST be 719 clearly specified. That is, authors MUST document the assumptions 720 that the security of this authentication method is predicated upon. 721 For instance, in the case of the UNIX username/password login method, 722 a statement to the effect of: 724 Authentication in the system is secure only to the extent that it 725 is difficult to guess or obtain an ASCII password that is a maximum 726 of 8 characters long. These passwords can be obtained by sniffing 727 telnet sessions or by running the 'crack' program using the con- 728 tents of the /etc/passwd file. Attempts to protect against on-line 729 password guessing by (1) disconnecting after several unsuccessful 730 login attempts and (2) waiting between successive password prompts 731 are effective only to the extent that attackers are impatient. 733 Because the /etc/passwd file maps usernames to user ids, groups, 734 etc., it must be world readable. In order to permit this usage but 735 make running crack more difficult, the file is often split into 736 /etc/passwd and a 'shadow' password file. The shadow file is not 737 world readable and contains the encrypted password. The regular 738 /etc/passwd file contains a dummy password in its place. 740 It is insufficient to simply state that one's protocol should be run 741 over some lower layer security protocol. If a system relies upon 742 lower layer security services for security, the protections those 743 services are expected to provide MUST be clearly specified. In addi- 744 tion, the resultant properties of the combined system need to be 745 specified. 747 The threat environment addressed by the Security Considerations sec- 748 tion MUST at a minimum include deployment across the global Internet 750 Internet-Draft Security Considerations Guidelines 752 across multiple administrative boundaries without assuming that fire- 753 walls are in place, even if only to provide justification for why 754 such consideration is out of scope for the protocol. It is not 755 acceptable to only discuss threats applicable to LANs and ignore the 756 broader threat environment. All IETF standards-track protocols are 757 considered likely to have deployment in the global Internet. In some 758 cases, there might be an Applicability Statement discouraging use of 759 a technology or protocol in a particular environment. Nonetheless, 760 the security issues of broader deployment should be discussed in the 761 document. 763 There should be a clear description of the residual risk to the user 764 or operator of that protocol after threat mitigation has been 765 deployed. Such risks might arise from compromise in a related proto- 766 col (e.g. IPSEC is useless if key management has been compromised), 767 from incorrect implementation, compromise of the security technology 768 used for risk reduction (e.g. 40-bit DES), or there might be risks 769 that are not addressed by the protocol specification (e.g. denial of 770 service attacks on an underlying link protocol). 772 There should also be some discussion of potential security risks 773 arising from potential misapplications of the protocol or technology 774 described in the RFC. This might be coupled with an Applicability 775 Statement for that RFC. 777 6. Examples 779 This section consists of some example security considerations sec- 780 tions, intended to give the reader a flavor of what's intended by 781 this document. 783 The first example is a 'retrospective' example, applying the criteria 784 of this document to a historical document, RFC-821. The second exam- 785 ple is a good security considerations section clipped from a current 786 protocol. 788 6.1. SMTP 790 When RFC-821 was written, Security Considerations sections were not 791 required in RFCs, and none is contained in that document. Had that 792 document been written today, the Security Considerations section 793 might look something like this: 795 6.1.1. SMTP Security Considerations 797 SMTP as-is provides no security precautions of any kind. Protection 798 against all the attacks we are about to describe must be provided by a 799 different protocol layer. 801 Internet-Draft Security Considerations Guidelines 803 A passive attack is sufficient to recover message text. No endpoint 804 authentication is provided by the protocol. Sender spoofing is triv- 805 ial, and therefore forging email messages is trivial. Some implemen- 806 tations do add header lines with hostnames derived through reverse 807 name resolution (which is only secure to the extent that it is diffi- 808 cult to spoof DNS -- not very), although these header lines are nor- 809 mally not displayed to users. Receiver spoofing is also fairly 810 straight-forward, either using TCP connection hijacking or DNS spoof- 811 ing.
Moreover, since email messages often pass through SMTP gateways, 812 all intermediate gateways must be trusted, a condition nearly impos- 813 sible on the global Internet. 815 Several approaches are available for alleviating these threats. In 816 order of increasingly high level in the protocol stack, we have: 818 SMTP over IPSEC 819 SMTP/TLS 820 S/MIME and PGP/MIME 822 6.1.1.1. SMTP over IPSEC 824 An SMTP connection run over IPSEC can provide confidentiality for the 825 message between the sender and the first hop SMTP gateway, or between 826 any pair of connected SMTP gateways. That is to say, it provides 827 channel security for the SMTP connections. In a situation where the 828 message goes directly from the client to the receiver's gateway, this 829 may provide substantial security (though the receiver must still 830 trust the gateway). Protection is provided against replay attacks, 831 since the data itself is protected and the packets cannot be 832 replayed. 834 Endpoint identification is a problem, however, unless the receiver's 835 address can be directly cryptographically authenticated. No sender 836 identification is available, since the sender's machine is authenti- 837 cated, not the sender himself. Furthermore, the identity of the 838 sender simply appears in the From header of the message, so it is 839 easily spoofable by the sender. Finally, unless the security policy 840 is set extremely strictly, there is also an active downgrade to 841 cleartext attack. 843 6.1.1.2. SMTP/TLS 845 SMTP can be combined with TLS as described in [SMTPTLS]. This pro- 846 vides similar protection to that provided when using IPSEC. Since 847 TLS certificates typically contain the server's host name, recipient 848 authentication may be slightly more obvious, but is still susceptible 849 to DNS spoofing attacks. Notably, common implementations of TLS 851 Internet-Draft Security Considerations Guidelines 853 contain a US exportable (and hence low security) mode. Applications 854 desiring high security should ensure that this mode is disabled. 855 Protection is provided against replay attacks, since the data itself 856 is protected and the packets cannot be replayed. [note: The Security 857 Considerations section of the SMTP over TLS draft is quite good and 858 bears reading as an example of how to do things.] 860 6.1.1.3. S/MIME and PGP/MIME 862 S/MIME and PGP/MIME are both message oriented security protocols. 863 They provide object security for individual messages. With various 864 settings, sender and recipient authentication and confidentiality may 865 be provided. More importantly, the identification is not of the send- 866 ing and receiving machines, but rather of the sender and recipient 867 themselves. (Or, at least, of cryptographic keys corresponding to the 868 sender and recipient.) Consequently, end-to-end security may be 869 obtained. Note, however, that no protection is provided against 870 replay attacks. 872 6.1.1.4. Denial of Service 874 None of these security measures provides any real protection against 875 denial of service. SMTP connections can easily be used to tie up sys- 876 tem resources in a number of ways, including excessive port consump- 877 tion, excessive disk usage (email is typically delivered to disk 878 files), and excessive memory consumption (sendmail, for instance, is 879 fairly large, and typically forks a new process to deal with each 880 message.) 882 6.1.1.5. 
Inappropriate Usage 884 In particular, there is no protection provided against unsolicited 885 mass email (aka SPAM). 887 SMTP also includes several commands which may be used by attackers to 888 explore the machine on which the SMTP server runs. The VRFY command 889 permits an attacker to convert user-names to mailbox names and often 890 real names. This is often useful in mounting a password guessing 891 attack, as many users use their name as their password. EXPN permits 892 an attacker to expand an email list to the names of the subscribers. 893 This may be used in order to generate a list of legitimate users in 894 order to attack their accounts, as well as to build mailing lists for 895 future SPAM. Administrators may choose to disable these commands. 897 6.2. VRRP 899 The second example is from VRRP, the Virtual Router Redundancy Proto- 900 col ([RFC2338]). We reproduce here the Security Considerations 902 Internet-Draft Security Considerations Guidelines 904 section from that document (with new section numbers). Our comments 905 are indented and prefaced with 'NOTE:'. 907 6.2.1. Security Considerations 909 VRRP is designed for a range of internetworking environments that may 910 employ different security policies. The protocol includes several 911 authentication methods ranging from no authentication, simple clear 912 text passwords, and strong authentication using IP Authentication 913 with MD5 HMAC. The details on each approach including possible 914 attacks and recommended environments follows. 916 Independent of any authentication type VRRP includes a mechanism 917 (setting TTL=255, checking on receipt) that protects against VRRP 918 packets being injected from another remote network. This limits most 919 vulnerabilities to local attacks. 921 NOTE: The security measures discussed in the following sections 922 only provide various kinds of authentication. No confidentiality 923 is provided at all. This should be explicitly described as outside 924 the scope. 926 6.2.1.1. No Authentication 928 The use of this authentication type means that VRRP protocol 929 exchanges are not authenticated. This type of authentication SHOULD 930 only be used in environments where there is minimal security risk and 931 little chance for configuration errors (e.g., two VRRP routers on a 932 LAN). 934 6.2.1.2. Simple Text Password 936 The use of this authentication type means that VRRP protocol 937 exchanges are authenticated by a simple clear text password. 939 This type of authentication is useful to protect against accidental 940 misconfiguration of routers on a LAN. It protects against routers 941 inadvertently backing up another router. A new router must first be 942 configured with the correct password before it can run VRRP with 943 another router. This type of authentication does not protect against 944 hostile attacks where the password can be learned by a node snooping 945 VRRP packets on the LAN. The Simple Text Authentication combined 946 with the TTL check makes it difficult for a VRRP packet to be sent 947 from another LAN to disrupt VRRP operation. 949 This type of authentication is RECOMMENDED when there is minimal risk 950 of nodes on a LAN actively disrupting VRRP operation. If this type 952 Internet-Draft Security Considerations Guidelines 954 of authentication is used the user should be aware that this clear 955 text password is sent frequently, and therefore should not be the 956 same as any security significant password. 958 6.2.1.3. IP Authentication Header 960 The use of this authentication type means the VRRP protocol exchanges 961 are authenticated using the mechanisms defined by the IP Authentica- 962 tion Header [AUTH] using "The Use of HMAC-MD5-96 within ESP and AH", 963 [HMAC]. This provides strong protection against configuration 964 errors, replay attacks, and packet corruption/modification. 966 This type of authentication is RECOMMENDED when there is limited con- 967 trol over the administration of nodes on a LAN. While this type of 968 authentication does protect the operation of VRRP, there are other 969 types of attacks that may be employed on shared media links (e.g., 970 generation of bogus ARP replies) which are independent from VRRP and 971 are not protected. 973 NOTE: Specifically, although securing VRRP prevents unauthorized 974 machines 975 from taking part in the election protocol, it does not protect 976 hosts on the network from being deceived. For example, a gratuitous 977 ARP reply from what purports to be the virtual router's IP address 978 can redirect traffic to an unauthorized machine. Similarly, 979 individual connections can be diverted by means of forged ICMP 980 Redirect messages.
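   NOTE: As an illustration only (this text is not part of [RFC2338]),
   the integrity check that [HMAC] applies to each packet amounts,
   roughly, to computing an HMAC-MD5 over the protected fields with a
   configured key and truncating the result to 96 bits. A minimal
   sketch, glossing over exactly which fields are covered:

       import hmac
       import hashlib

       def hmac_md5_96(key, covered_bytes):
           # HMAC-MD5 truncated to its leftmost 96 bits (12 octets).
           return hmac.new(key, covered_bytes, hashlib.md5).digest()[:12]

       def accept(key, covered_bytes, received_icv):
           # Constant-time comparison against the received integrity value.
           return hmac.compare_digest(hmac_md5_96(key, covered_bytes),
                                      received_icv)

   An attacker on the LAN who does not know the key cannot produce a
   valid integrity check value, but, as the text notes, this does not
   stop attacks (such as forged ARP replies) that operate outside VRRP
   itself.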
982 Acknowledgments 984 This document is heavily based on a note written by Ran Atkinson in 985 1997. That note was written after the IAB Security Workshop held in 986 early 1997, based on input from everyone at that workshop. Some of 987 the specific text above was taken from Ran's original document, and 988 some of that text was taken from an email message written by Fred 989 Baker. 991 The other primary source for this document is specific comments 992 received from Steve Bellovin. Early review of this document was done 993 by Lisa Dusseault and Mark Schertler. 995 References 996 [CA-96.21] "TCP SYN Flooding and IP Spoofing", CERT Advisory CA-96.21, 997 ftp://info.cert.org/pub/cert_advisories/CA-96.21.tcp_syn_flooding 999 The rest are TBD 1001 Internet-Draft Security Considerations Guidelines 1003 Security Considerations 1005 This entire document is about security considerations. 1007 Author's Address 1008 Eric Rescorla 1009 RTFM, Inc. 1010 30 Newell Road #16 1011 East Palo Alto, CA 94303 1012 Phone: (650) 328-8631 1014 Brian Korver 1015 Network Alchemy 1016 1538 Pacific Avenue 1017 Santa Cruz, CA 95060 1018 Phone: (831) 460-3800 1020 Internet-Draft Security Considerations Guidelines 1022 Table of Contents
. . . . . . 6 1039 3.2.3. Offline Cryptographic Attacks . . . . . . . . . . . . . . . . 6 1040 3.3. Active Attacks . . . . . . . . . . . . . . . . . . . . . . . . 7 1041 3.3.1. Replay Attacks . . . . . . . . . . . . . . . . . . . . . . . 8 1042 3.3.2. Message Insertion . . . . . . . . . . . . . . . . . . . . . . 8 1043 3.3.3. Message Deletion . . . . . . . . . . . . . . . . . . . . . . 8 1044 3.3.4. Message Modification . . . . . . . . . . . . . . . . . . . . 9 1045 3.3.5. Man-In-The-Middle . . . . . . . . . . . . . . . . . . . . . . 9 1046 4. Common Issues . . . . . . . . . . . . . . . . . . . . . . . . . . 9 1047 4.1. User Authentication . . . . . . . . . . . . . . . . . . . . . . 10 1048 4.1.1. Username/Password . . . . . . . . . . . . . . . . . . . . . . 10 1049 4.1.2. Challenge Response and One Time Passwords . . . . . . . . . . 10 1050 4.1.3. Certificates . . . . . . . . . . . . . . . . . . . . . . . . 11 1051 4.1.4. Some Uncommon Systems . . . . . . . . . . . . . . . . . . . . 11 1052 4.1.5. Host Authentication . . . . . . . . . . . . . . . . . . . . . 11 1053 4.2. Authorization vs. Authentication . . . . . . . . . . . . . . . 12 1054 4.2.1. Access Control Lists . . . . . . . . . . . . . . . . . . . . 12 1055 4.2.2. Certificate Based Systems . . . . . . . . . . . . . . . . . . 12 1056 4.3. Providing Traffic Security . . . . . . . . . . . . . . . . . . 13 1057 4.3.1. IPSEC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 1058 4.3.2. SSL/TLS . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 1059 4.4. Object vs. Channel Security . . . . . . . . . . . . . . . . . . 13 1060 5. Writing Security Considerations Sections . . . . . . . . . . . . 14 1061 6. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 1062 6. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 1063 6.1. SMTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 1064 6.1.1. SMTP Security Considerations . . . . . . . . . . . . . . . . 16 1065 6.1.1.1. SMTP over IPSEC . . . . . . . . . . . . . . . . . . . . . . 17 1066 6.1.1.2. SMTP/TLS . . . . . . . . . . . . . . . . . . . . . . . . . 17 1067 6.1.1.3. S/MIME and PGP/MIME . . . . . . . . . . . . . . . . . . . . 18 1068 6.1.1.4. Denial of Service . . . . . . . . . . . . . . . . . . . . . 18 1070 Internet-Draft Security Considerations Guidelines 1072 6.1.1.5. Inappropriate Usage . . . . . . . . . . . . . . . . . . . . 18 1073 6.2. VRRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 1074 6.2.1. Security Considerations . . . . . . . . . . . . . . . . . . . 19 1075 6.2.1.1. No Authentication . . . . . . . . . . . . . . . . . . . . . 19 1076 6.2.1.2. Simple Text Password . . . . . . . . . . . . . . . . . . . 19 1077 6.2.1.3. IP Authentication Header . . . . . . . . . . . . . . . . . 20 1078 6.2.1.3. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . 20 1079 6.2.1.3. References . . . . . . . . . . . . . . . . . . . . . . . . 20 1080 Security Considerations . . . . . . . . . . . . . . . . . . . . . . 21 1081 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . . 21