INTERNET-DRAFT                                               E. Rescorla
                                                              RTFM, Inc.
                                                               B. Korver
                                                         Xythos Software
                                     February 2002 (Expires August 2002)

      Guidelines for Writing RFC Text on Security Considerations

Status of this Memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC 2026. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas, and
its working groups. Note that other groups may also distribute working
documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as ``work in progress.''

To learn the current status of any Internet-Draft, please check the
``1id-abstracts.txt'' listing contained in the Internet-Drafts Shadow
Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
munnari.oz.au (Pacific Rim), ftp.ietf.org (US East Coast), or
ftp.isi.edu (US West Coast).

1. Introduction

All RFCs are required by [RFC 2223] to contain a Security
Considerations section. The purpose of this requirement is both to
encourage document authors to consider security in their designs and
to inform the reader of relevant security issues. This memo is
intended to provide guidance to RFC authors in service of both ends.

This document is structured in three parts. The first is a combination
security tutorial and definition of common terms; the second is a
series of guidelines for writing Security Considerations; the third is
a series of examples.

2. The Goals of Security

Most people speak of security as if it were a single monolithic
property of a protocol or system, but upon reflection that's very
clearly not true. Rather, security is a series of related but somewhat
independent properties. Not all of these properties are required for
every application.

We can loosely divide security goals into those related to protecting
communications (COMMUNICATION SECURITY, also known as COMSEC) and
those relating to protecting systems (ADMINISTRATIVE SECURITY or
SYSTEM SECURITY). Since communications are carried out by systems and
access to systems is through communications channels, these goals
obviously interlock, but they can also be independently provided.

2.1. Communication Security

Different authors partition the goals of communication security
differently. The partitioning we've found most useful is to divide
them into three major categories: CONFIDENTIALITY, DATA INTEGRITY, and
PEER ENTITY AUTHENTICATION.

2.1.1. Confidentiality

When most people think of security, they think of CONFIDENTIALITY.
Confidentiality means that your data is kept secret from unintended
listeners. Usually, these listeners are simply eavesdroppers. When the
government taps your phone, that poses a risk to your confidentiality.
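In protocol terms, confidentiality is usually provided by encrypting the traffic under a key known only to the endpoints. The sketch below is a toy illustration of the idea, not a real cipher: the hash-counter keystream construction and every name in it are invented for this example, and a deployed protocol would use a vetted algorithm such as AES in an authenticated mode.

```python
import hashlib

def keystream(shared_secret: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing a counter with the
    shared secret (a toy construction, NOT a real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            shared_secret + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(shared_secret: bytes, message: bytes) -> bytes:
    """XOR the message with the keystream; applying it twice decrypts."""
    ks = keystream(shared_secret, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

secret = b"only Alice and Bob know this"
plaintext = b"attack at dawn"
ciphertext = xor_encrypt(secret, plaintext)

# An eavesdropper on the channel sees only the ciphertext...
assert ciphertext != plaintext
# ...while the intended receiver, holding the secret, inverts the XOR.
assert xor_encrypt(secret, ciphertext) == plaintext
```

Note that this toy reuses the same keystream for every message, which would be fatal against a real eavesdropper; actual stream ciphers mix a per-message nonce into the keystream for exactly this reason.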
Obviously, if you have secrets, you're concerned that no one else
knows them, and so at minimum you want confidentiality. When you see
spies in the movies go into the bathroom and turn on all the water to
foil bugging, the property they're looking for is confidentiality.

2.1.2. Data Integrity

The second primary goal is DATA INTEGRITY. The basic idea here is that
we want to be sure that the data we receive is the same data that the
sender sent. In paper-based systems, some data integrity comes
automatically. When you receive a letter written in pen, you can be
fairly certain that no words have been removed by an attacker because
pen marks are difficult to remove from paper. However, an attacker
could easily have added some marks to the paper and completely changed
the meaning of the message. Similarly, it's easy to shorten the page
to truncate the message.

In the electronic world, by contrast, all bits look alike, so it's
trivial to tamper with messages in transit. You simply remove the
message from the wire, copy out the parts you like, add whatever data
you want, and generate a new message of your choosing, and the
recipient is none the wiser. This is the moral equivalent of the
attacker taking a letter you wrote, buying some new paper, and
recopying the message, changing it as he goes. It's just a lot easier
to do electronically.

2.1.3. Peer Entity Authentication

The third property we're concerned with is PEER ENTITY AUTHENTICATION.
What we mean by this is that we know that one of the endpoints in the
communication is the one we intended. Without peer entity
authentication, it's very difficult to provide either confidentiality
or data integrity. For instance, if we receive a message from Alice,
the property of data integrity doesn't do us much good unless we know
that it was in fact sent by Alice and not the attacker.
Similarly, if we want to send a confidential message to Bob, it's not
of much value to us if we're actually sending a confidential message
to the attacker.

Note that peer entity authentication can be provided asymmetrically.
When you call someone on the phone, you can be fairly certain that you
have the right person -- or at least that you got a person who's
actually at the phone number you called. On the other hand, if they
don't have caller ID, the receiver of a phone call has no idea who's
calling them. Calling someone on the phone is thus an example of
recipient authentication: you know who the recipient of the call is,
but they don't know anything about the sender.

In messaging situations, you often wish to use peer entity
authentication to establish the identity of the sender of a certain
message. In such contexts, this property is called DATA ORIGIN
AUTHENTICATION.

2.2. Non-Repudiation

A system that provides endpoint authentication allows one party to be
certain of the identity of someone with whom he is communicating. When
the system also provides data integrity, a receiver can be sure both
of the sender's identity and that he is receiving the data that that
sender meant to send. However, he cannot necessarily demonstrate this
fact to a third party. The ability to make this demonstration is
called NON-REPUDIATION.

There are many situations in which non-repudiation is desirable.
Consider the situation in which two parties have signed a contract
which one party wishes to unilaterally abrogate. He might simply claim
that he had never signed it in the first place. Non-repudiation
prevents him from doing so, thus protecting the counterparty.
Unfortunately, non-repudiation can be very difficult to achieve in
practice, and naive approaches are generally inadequate.
Section XXX describes some of the difficulties, which generally stem
from the fact that the interests of the two parties are not
aligned -- one party wishes to prove something that the other party
wishes to deny.

2.3. Systems Security

In general, systems security is concerned with protecting one's
machines and data. The intent is that machines should be used only by
authorized users and for the purposes that the owners intend.
Furthermore, they should be available for those purposes. Attackers
should not be able to deprive legitimate users of resources.

2.3.1. Unauthorized Usage

Most systems are not intended to be completely accessible to the
public. Rather, they are intended to be used only by certain
authorized individuals. Although many Internet services are available
to all Internet users, even those servers generally offer a larger set
of services to specific users. For instance, Web servers often will
serve data to any user, but restrict the ability to modify pages to
specific users. Such modifications by the general public would be
UNAUTHORIZED USAGE.

2.3.2. Inappropriate Usage

Being an authorized user does not mean that you have free run of the
system. As we said above, some activities are restricted to authorized
users, some to specific users, and some activities are generally
forbidden to all but administrators. Moreover, even activities which
are in general permitted might be forbidden in some cases. For
instance, users may be permitted to send email but forbidden from
sending files above a certain size, or files which contain viruses.
These are examples of INAPPROPRIATE USAGE.

2.3.3. Denial of Service

Recall that our third goal was that the system should be available to
legitimate users. A broad variety of attacks are possible which
threaten such usage.
Such attacks are collectively referred to as DENIAL OF SERVICE
attacks. Denial of service attacks are often very easy to mount and
difficult to stop. Many such attacks are designed to consume machine
resources, making it difficult or impossible to serve legitimate
users. Other attacks cause the target machine to crash, completely
denying service to users.

3. The Internet Threat Model

A THREAT MODEL describes the capabilities that an attacker is assumed
to be able to deploy against a resource. It should contain such
information as the resources available to an attacker in terms of
information, computing capability, and control of the system. The
purpose of a threat model is twofold. First, we wish to identify the
threats we are concerned with. Second, we wish to rule some threats
explicitly out of scope. Nearly every security system is vulnerable to
a sufficiently dedicated and resourceful attacker.

The Internet environment has a fairly well understood threat model. In
general, we assume that the end-systems engaging in a protocol
exchange have not themselves been compromised. Protecting against an
attack when one of the end-systems has been compromised is
extraordinarily difficult. It is, however, possible to design
protocols which minimize the extent of the damage done under these
circumstances.

By contrast, we assume that the attacker has nearly complete control
of the communications channel over which the end-systems communicate.
This means that the attacker can read any PDU (Protocol Data Unit) on
the network and undetectably remove, change, or inject forged packets
onto the wire. This includes being able to generate packets that
appear to be from a trusted machine.
Thus, even if the end-system with which you wish to communicate is
itself secure, the Internet environment provides no assurance that
packets which claim to be from that system in fact are.

It's important to realize that the meaning of a PDU is different at
different levels. At the IP level, a PDU means an IP packet. At the
TCP level, it means a TCP segment. At the application layer, it means
some kind of application PDU. For instance, at the level of email, it
might mean either an RFC 822 message or a single SMTP command. At the
HTTP level, it might mean a request or response.

3.1. Limited Threat Models

As we've said, a resourceful and dedicated attacker can control the
entire communications channel. However, many currently known attacks
can be mounted by an attacker with far fewer resources and only
limited control of the network. For instance, password sniffing
attacks can be mounted by an attacker who can only read arbitrary
packets. An attack which requires only reading from the network is
generally referred to as a PASSIVE ATTACK.

By contrast, Morris's sequence number guessing attack [SEQNUM] can be
mounted by an attacker who can write but not read arbitrary packets.
Any attack which requires the attacker to write to the network is
known as an ACTIVE ATTACK.

Thus, a useful way of organizing attacks is to divide them based on
the capabilities required to mount the attack. The rest of this
section describes these categories and provides some examples of each
category.

3.2. Passive Attacks

In a passive attack, the attacker reads packets off the network but
does not write them. The simplest way to mount such an attack is to
simply be on the same LAN as the victim. On most common LAN
configurations, including Ethernet, 802.3, and FDDI, any machine on
the wire can read all traffic destined for any other machine on the
same LAN.
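The shared-medium property can be sketched as a toy simulation: every attached station receives every frame, and a station that disables the usual destination filtering (promiscuous mode) silently keeps copies of traffic addressed to others. The class names below are invented for illustration; a real attack would use an actual NIC in promiscuous mode with a capture tool.

```python
class BroadcastLan:
    """Toy model of a shared-medium LAN: the medium delivers every
    frame to every attached station, regardless of destination."""
    def __init__(self):
        self.stations = []

    def attach(self, station):
        self.stations.append(station)

    def send(self, frame):
        # The medium broadcasts; filtering by destination happens
        # (politely) at each receiving interface, not on the wire.
        for station in self.stations:
            station.receive(frame)

class Station:
    def __init__(self, name, promiscuous=False):
        self.name = name
        self.promiscuous = promiscuous  # a sniffer disables filtering
        self.inbox = []

    def receive(self, frame):
        if self.promiscuous or frame["dst"] == self.name:
            self.inbox.append(frame)

lan = BroadcastLan()
alice, bob = Station("alice"), Station("bob")
mallory = Station("mallory", promiscuous=True)  # passive attacker
for s in (alice, bob, mallory):
    lan.attach(s)

lan.send({"src": "alice", "dst": "bob", "data": "password=hunter2"})

assert bob.inbox[0]["data"] == "password=hunter2"      # intended receiver
assert mallory.inbox[0]["data"] == "password=hunter2"  # sniffed copy
assert alice.inbox == []  # alice's own interface filtered the frame
```

Nothing in the exchange tells Alice or Bob that a third station recorded the frame, which is what makes purely passive attacks so hard to detect.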
Note that switching hubs make this sort of sniffing substantially more
difficult, since traffic destined for a machine goes only to the
network segment which that machine is on.

Similarly, an attacker who has control of a host in the communications
path between two victim machines is able to mount a passive attack on
their communications. It is also possible to compromise the routing
infrastructure to specifically arrange that traffic passes through a
compromised machine. This might involve an active attack on the
routing infrastructure to facilitate a passive attack on a victim
machine.

Wireless communications channels deserve special consideration,
especially with the recent and growing popularity of wireless-based
LANs, such as those using 802.11. Since the data is simply broadcast
on well-known radio frequencies, an attacker merely needs to be able
to receive those transmissions. Such channels are especially
vulnerable to passive attacks. Although many such channels include
cryptographic protection, it is often of such poor quality as to be
nearly useless [WEP].

In general, the goal of a passive attack is to obtain information
which the sender and receiver would prefer to remain private. Examples
include credentials useful in the electronic world, such as passwords,
as well as information valuable in the outside world, such as
confidential business information.

3.2.1. Confidentiality Violations

The classic example of a passive attack is sniffing some inherently
private data off of the wire. For instance, despite the wide
availability of SSL, many credit card transactions still traverse the
Internet in the clear. An attacker could sniff such a message and
recover the credit card number, which can then be used to make
fraudulent transactions.
Moreover, confidential business information is routinely transmitted
over the network in the clear in email.

3.2.2. Password Sniffing

Another example of a passive attack is PASSWORD SNIFFING. Password
sniffing is directed towards obtaining unauthorized use of resources.

Many protocols, including [TELNET], [POP], and [NNTP], use a shared
password to authenticate the client to the server. Frequently, this
password is transmitted from the client to the server in the clear
over the communications channel. An attacker who can read this traffic
can therefore capture the password and REPLAY it: that is, he can
initiate his own connection to the server, pose as the client, and log
in using the captured password.

Note that although the login phase of the attack is active, the actual
password capture phase is passive. Moreover, unless the server checks
the originating address of connections, the login phase does not
require any special control of the network.

3.2.3. Offline Cryptographic Attacks

Many cryptographic protocols are subject to OFFLINE ATTACKS. In such
an attack, the attacker recovers data which has been processed using
the victim's secret key and then mounts a cryptanalytic attack on that
key. Passwords make a particularly vulnerable target because they
typically have low entropy. A number of popular password-based
challenge-response protocols are vulnerable to DICTIONARY ATTACK: the
attacker captures a challenge-response pair and then tries entries
from a list of common words (such as a dictionary file) until he finds
a password that produces the right response.

A similar attack can be mounted on a local network when NIS is used.
The Unix password is crypted using a one-way function, but tools exist
to break such crypted passwords [KLEIN].
When NIS is used, the crypted password is transmitted over the local
network, so an attacker can sniff the password and attack it offline.

Historically, it has also been possible to exploit small operating
system security holes to recover the password file using an active
attack. Access gained this way can then be bootstrapped into an actual
account by applying the offline password recovery techniques just
described. Thus a low-level active attack is combined with an offline
passive attack.

3.3. Active Attacks

When an attack involves writing data to the network, we refer to this
as an ACTIVE ATTACK. When IP is used without IPsec, there is no
authentication for the sender address. As a consequence, it's
straightforward for an attacker to create a packet with a source
address of his choosing. We'll refer to this as a SPOOFING ATTACK.

Under certain circumstances, such a packet may be screened out by the
network. For instance, many packet filtering firewalls screen out all
packets with source addresses on the INTERNAL network that arrive on
the EXTERNAL interface. Note, however, that this provides no
protection against an attacker who is inside the firewall. In general,
designers should assume that attackers can forge packets.

However, the ability to forge packets does not go hand in hand with
the ability to receive arbitrary packets. In fact, there are active
attacks that involve being able to send forged packets but not receive
the responses. We'll refer to these as BLIND ATTACKS.

Note that not all active attacks require forging addresses. For
instance, the TCP SYN denial of service attack [TCPSYN] can be mounted
successfully without disguising the sender's address. However, it is
common practice to disguise one's address in order to conceal one's
identity if an attack is discovered.
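To make the spoofing observation concrete: the only per-header check in plain IPv4 is an arithmetic checksum that anyone can compute, so a header carrying a forged source address verifies exactly as well as an honest one. The sketch below builds minimal IPv4 headers (addresses drawn from the documentation ranges; the helper names are our own) and checks them with the standard one's-complement sum from RFC 791.

```python
import struct

def ones_complement_sum(header: bytes) -> int:
    """One's-complement sum of the header's 16-bit words, folded."""
    total = 0
    for (word,) in struct.iter_unpack("!H", header):
        total += word
    while total >> 16:                      # fold end-around carries
        total = (total & 0xFFFF) + (total >> 16)
    return total

def make_header(src: str, dst: str) -> bytes:
    """Minimal 20-byte IPv4 header with a valid checksum."""
    def addr(a):  # dotted quad -> 4 bytes
        return bytes(int(x) for x in a.split("."))
    # version/IHL, TOS, total length, ID, flags/frag, TTL, proto=TCP,
    # checksum placeholder of zero:
    head = struct.pack("!BBHHHBBH", 0x45, 0, 40, 1, 0, 64, 6, 0)
    head += addr(src) + addr(dst)
    csum = ~ones_complement_sum(head) & 0xFFFF
    return head[:10] + struct.pack("!H", csum) + head[12:]

def checksum_ok(header: bytes) -> bool:
    # Summing all words, including the checksum field, must give 0xFFFF.
    return ones_complement_sum(header) == 0xFFFF

genuine = make_header("192.0.2.1", "198.51.100.7")
spoofed = make_header("203.0.113.9", "198.51.100.7")  # forged source

# Both headers pass the checksum: nothing in IP itself distinguishes
# the forged source address from the real one.
assert checksum_ok(genuine) and checksum_ok(spoofed)
```

The checksum protects only against accidental corruption; because it involves no secret, it offers the receiver no evidence whatsoever about who actually sent the packet.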
Each protocol is susceptible to specific active attacks, but
experience shows that a number of common patterns of attack can be
adapted to any given protocol. The next sections describe a number of
these patterns and give specific examples of them as applied to known
protocols.

3.3.1. Replay Attacks

In a REPLAY ATTACK, the attacker records a sequence of messages off of
the wire and plays them back to the party which originally received
them. Note that the attacker does not need to be able to understand
the messages. He merely needs to capture and retransmit them.

For example, consider the case where an S/MIME message is being used
to request some service, such as a credit card purchase or a stock
trade. An attacker might wish to have the service executed twice, if
only to inconvenience the victim. He could capture the message and
replay it, even though he can't read it, causing the transaction to be
executed twice.

3.3.2. Message Insertion

In a MESSAGE INSERTION attack, the attacker forges a message with some
chosen set of properties and injects it into the network. Often this
message will have a forged source address in order to disguise the
identity of the attacker.

For example, a denial-of-service attack can be mounted by inserting a
series of spurious TCP SYN packets directed towards the target host.
The target host responds with its own SYN (as part of a SYN/ACK) and
allocates kernel data structures for the new connection. The attacker
never completes the 3-way handshake, so the allocated connection
endpoints just sit there taking up kernel memory. Typical TCP stack
implementations allow only some limited number of connections in this
"half-open" state, and when this limit is reached, no more connections
can be initiated, even from legitimate hosts. Note that this attack is
a blind attack, since the attacker does not need to process the
victim's SYN/ACKs.
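The resource-exhaustion logic of this attack can be sketched as a toy simulation. The class, the table size, and the string results are invented for illustration; real TCP stacks add timeouts, backlog tuning, and defenses such as SYN cookies.

```python
class ToyTcpListener:
    """Toy model of a listener with a bounded half-open table."""
    MAX_HALF_OPEN = 128

    def __init__(self):
        self.half_open = set()  # (src, port) pairs awaiting the final ACK

    def on_syn(self, src, port):
        if len(self.half_open) >= self.MAX_HALF_OPEN:
            return "dropped"    # table full: SYN is ignored
        self.half_open.add((src, port))
        return "syn-ack"        # would send SYN/ACK and hold state

    def on_ack(self, src, port):
        # The final ACK completes the 3-way handshake and frees the slot.
        if (src, port) not in self.half_open:
            return "no-connection"
        self.half_open.remove((src, port))
        return "established"

server = ToyTcpListener()

# Blind attacker: SYNs from spoofed sources that will never ACK back.
for i in range(ToyTcpListener.MAX_HALF_OPEN):
    assert server.on_syn(f"10.0.0.{i}", 1234) == "syn-ack"

# A legitimate client now cannot even get a SYN/ACK.
assert server.on_syn("192.0.2.55", 40000) == "dropped"
```

Because the attacker never calls on_ack, the table never drains; in a real stack the half-open entries would eventually time out, which is why attackers keep the flood of SYNs coming.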
3.3.3. Message Deletion

In a MESSAGE DELETION attack, the attacker removes a message from the
wire. Morris's sequence number guessing attack [SEQNUM] often requires
a message deletion attack to be performed successfully. In this blind
attack, the host whose address is being forged will receive a spurious
TCP SYN packet from the host being attacked. Receipt of this SYN
packet generates a RST, which would tear the illegitimate connection
down. In order to prevent this host from sending a RST so that the
attack can be carried out successfully, Morris describes flooding this
host to create queue overflows such that the SYN packet is lost and
thus never responded to.

3.3.4. Message Modification

In a MESSAGE MODIFICATION attack, the attacker removes a message from
the wire, modifies it, and reinjects it into the network. This sort of
attack is particularly useful if the attacker wants to send some of
the data in the message but also wants to change some of it.

Consider the case where the attacker wants to attack an order for
goods placed over the Internet. He doesn't have the victim's credit
card number, so he waits for the victim to place the order and then
replaces the delivery address (and possibly the goods description)
with his own. Note that this particular attack is known as a
CUT-AND-PASTE attack, since the attacker cuts the credit card number
out of the original message and pastes it into the new message.

Another interesting example of a cut-and-paste attack is provided by
[IPSPPROB]. If IPsec ESP is used without any MAC, then it is possible
for the attacker to read traffic encrypted for a victim on the same
machine. The attacker attaches an IP header corresponding to a port he
controls onto the encrypted IP packet. When the packet is received by
the host, it will automatically be decrypted and forwarded to the
attacker's port.
Similar techniques can be used to mount a session hijacking attack.
Both of these attacks can be avoided by always using message
authentication when encryption is used. Note that if the receiving
machine is single-user, then this attack is infeasible.

3.3.5. Man-In-The-Middle

A MAN-IN-THE-MIDDLE attack combines the above techniques in a special
form: the attacker subverts the communication stream in order to pose
as the sender to the receiver and as the receiver to the sender:

What Alice and Bob think:
Alice <----------------------------------------------> Bob

What's happening:
Alice <----------------> Attacker <----------------> Bob

This differs fundamentally from the above forms of attack because it
attacks the identity of the communicating parties, rather than the
data stream itself. Consequently, many techniques which provide
integrity of the communications stream are insufficient to protect
against man-in-the-middle attacks.

Man-in-the-middle attacks are possible whenever a protocol lacks PEER
ENTITY AUTHENTICATION. For instance, if an attacker can hijack the
client TCP connection during the TCP handshake (perhaps by responding
to the client's SYN before the server does), then the attacker can
open another connection to the server and begin a man-in-the-middle
attack. It is also trivial to mount man-in-the-middle attacks on local
networks via ARP spoofing -- the attacker forges an ARP reply with the
victim's IP address and his own MAC address. Tools to mount this sort
of attack are readily available.

Note that it is only necessary to authenticate one side of the
transaction in order to prevent man-in-the-middle attacks. In such a
situation the peers can establish an association in which only one
peer is authenticated.
In such a system, an attacker can initiate an association posing as
the unauthenticated peer but cannot transmit or access data being sent
on a legitimate connection. This is an acceptable situation in
contexts such as Web e-commerce, where only the server needs to be
authenticated (or the client is independently authenticated via some
non-cryptographic mechanism such as a credit card number).

4. Common Issues

Although each system's security requirements are unique, certain
common requirements appear in a number of protocols. Often, when naive
protocol designers are faced with these requirements, they choose an
obvious but insecure solution even though better solutions are
available. This section describes a number of issues seen in many
protocols and the common pieces of security technology that may be
useful in addressing them.

4.1. User Authentication

Essentially every system which wants to control access to its
resources needs some way to authenticate users, and a nearly
uncountable number of mechanisms have been designed for this purpose.
The next several sections describe some of these techniques.

4.1.1. Username/Password

The most common access control mechanism is simple USERNAME/PASSWORD:
the user provides a username and a reusable password to the host which
he wishes to use. This system is vulnerable to a simple passive attack
in which the attacker sniffs the password off the wire and then
initiates a new session, presenting the password. This threat can be
mitigated by running the protocol over an encrypted connection such as
TLS or IPsec. Unprotected (plaintext) username/password systems are
not acceptable in IETF standards.

4.1.2. Challenge Response and One Time Passwords

Systems which desire greater security than USERNAME/PASSWORD often
employ either a ONE TIME PASSWORD [OTP] scheme or a CHALLENGE-RESPONSE
scheme.
In a one time password scheme, the user is provided with a list of passwords, which must be used in sequence, one time each. (Often these passwords are generated from some secret key, so the user can simply compute the next password in the sequence.) SecurID and DES Gold are variants of this scheme. In a challenge-response scheme, the host and the user share some secret (which often is represented as a password). In order to authenticate the user, the host presents the user with a (randomly generated) challenge. The user computes some function based on the challenge and the secret and provides that to the host, which verifies it. Often this computation is performed in a handheld device, such as a DES Gold card.

Both types of scheme provide protection against replay attack, but both are often still vulnerable to an OFFLINE KEYSEARCH ATTACK (a form of passive attack): as previously mentioned, the one-time password or response is often computed from a shared secret. If the attacker knows the function being used, he can simply try all possible shared secrets until he finds one that produces the right output. This is made easier if the shared secret is a password, in which case he can mount a DICTIONARY ATTACK--meaning that he tries a list of common words (or strings) rather than just random strings.

These systems are also often vulnerable to an active attack. Unless communication security is provided for the entire session, the attacker can simply wait until authentication has been performed and hijack the connection.

4.1.3. Certificates

A simple approach is to have all users have certificates [PKIX] which they then use to authenticate in some protocol-specific way, as in [TLS] or [S/MIME]. The primary obstacle to this approach in client-server type systems is that it requires clients to have certificates, which can be a deployment problem.

4.1.4.
Some Uncommon Systems

There are ways to do a better job than the schemes mentioned above, but they typically don't add much security unless communications security (at least message integrity) is employed to secure the connection, because otherwise the attacker can merely hijack the connection after authentication has been performed. A number of protocols ([EKE], [SPEKE], [SRP]) allow one to securely bootstrap a user's password into a shared key which can be used as input to a cryptographic protocol. One major obstacle to the deployment of these protocols has been that their Intellectual Property status is extremely unclear. Similarly, the user can authenticate using public key certificates (e.g. S-HTTP client authentication). Typically these methods are used as part of a more complete security protocol.

4.1.5. Host Authentication

Host authentication presents a special problem. Quite commonly, the addresses of services are presented using a DNS hostname, for instance as a URL [URL]. When requesting such a service, one has to ensure not only that the entity one is talking to has a certificate, but also that the certificate corresponds to the expected identity of the server. The important thing to have is a secure binding between the certificate and the expected hostname.

For instance, it is usually not acceptable for the certificate to contain an identity in the form of an IP address if the request was for a given hostname. This does not provide end-to-end security because the hostname-to-IP mapping is not secure unless secure name resolution [DNSSEC] is being used. This is a particular problem when the hostname is presented at the application layer but the authentication is performed at some lower layer.

4.2. Generic Security Frameworks

Providing security functionality in a protocol can be difficult.
In addition to the problem of choosing authentication and key establishment mechanisms, one needs to integrate them into a protocol. One response to this problem (embodied in IPsec and TLS) is to create a lower-level security protocol and then insist that new protocols be run over that protocol.

Another approach that has recently become popular is to design generic application layer security frameworks. The idea is that you design a protocol that allows you to negotiate various security mechanisms in a pluggable fashion. Application protocol designers then arrange to carry the security protocol PDUs in their application protocol. Examples of such frameworks include GSS-API [REF] and SASL [REF].

The generic framework approach has a number of problems. First, it is highly susceptible to DOWNGRADE ATTACKS. In a downgrade attack, an active attacker tampers with the negotiation in order to force the parties to negotiate weaker protection than they otherwise would. It's possible to include an integrity check after the negotiation and key establishment have both completed, but the strength of this integrity check is necessarily limited to that of the weakest common algorithm. This problem exists with any negotiation approach, but generic frameworks exacerbate it by encouraging the application protocol author to just specify the framework rather than think hard about the appropriate underlying mechanisms, particularly since the mechanisms can vary widely in the degree of security offered.

Another problem is that it's not always obvious how the various security features in the framework interact with the application layer protocol. For instance, SASL can be used merely as an authentication framework--in which case the SASL exchange occurs but the rest of the connection is unprotected--but can also negotiate TLS as a mechanism.
Knowing under which circumstances TLS is optional and under which it is required requires thinking about the threat model.

In general, authentication frameworks are most useful in situations where users have a wide variety of credentials that must all be accommodated by some service. When the security requirements of a system can be clearly identified and only a few forms of authentication are used, choosing a single security mechanism leads to greater simplicity and predictability. In situations where a framework is to be used, designers SHOULD carefully examine the framework's options and specify only the mechanisms that are appropriate for their particular threat model. If a framework is necessary, designers SHOULD choose one of the established ones instead of designing their own.

4.3. Non-repudiation

The naive approach to non-repudiation is simply to use public-key digital signatures over the content. The party who wishes to be bound (the SIGNING PARTY) digitally signs the message in question. The counterparty (the RELYING PARTY) can later point to the digital signature as proof that the signing party at one point agreed to the disputed message. Unfortunately, this approach is insufficient.

The easiest way for the signing party to repudiate the message is by claiming that his private key has been compromised and that some attacker (though not necessarily the relying party) signed the disputed message. In order to defend against this attack the relying party needs to demonstrate that the signing party's key had not been compromised at the time of the signature. This requires substantial infrastructure, including archival storage of certificate revocation information and timestamp servers to establish the time that the message was signed.
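The relying party's burden can be sketched as a checklist over the evidence. The predicate below is a minimal illustration, not a definitive implementation: the inputs (signature validity, chain trust, a trusted timestamp, and archived compromise data) stand in for real PKI machinery, and all names are hypothetical.

```python
from datetime import datetime
from typing import Optional

def evidence_holds(signature_valid: bool,
                   chain_trusted: bool,
                   signed_at: datetime,
                   timestamp_trusted: bool,
                   key_compromised_at: Optional[datetime]) -> bool:
    """Sketch of the relying party's non-repudiation check.

    A signature is useful as evidence only if, in addition to verifying
    cryptographically against a trusted chain, the relying party can show
    (via archived revocation data and a trusted timestamp) that the key
    was not compromised before the signing time.
    """
    if not (signature_valid and chain_trusted and timestamp_trusted):
        return False
    # Archived revocation data: a compromise predating the signature
    # lets the signing party repudiate the message.
    if key_compromised_at is not None and key_compromised_at <= signed_at:
        return False
    return True

# A valid signature with no reported compromise is good evidence...
assert evidence_holds(True, True, datetime(2001, 5, 1), True, None)
# ...but a claimed compromise before the signing time defeats it.
assert not evidence_holds(True, True, datetime(2001, 5, 1), True,
                          datetime(2001, 4, 1))
```

Note that the last two conjuncts are exactly what the timestamp servers and archival revocation storage described above exist to establish.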
Additionally, the relying party might attempt to trick the signing party into signing one message while thinking he's signing another. This problem is particularly severe when the relying party controls the infrastructure that the signing party uses for signing, such as in kiosk situations. In many such situations the signing party's key is kept on a smartcard but the message to be signed is displayed by the relying party.

All of these complications make non-repudiation a difficult service to deploy in practice.

4.4. Authorization vs. Authentication

AUTHORIZATION is the process by which one determines whether an authenticated party has permission to access a particular resource or service. Although the two are tightly bound, it is important to realize that authentication and authorization are two separate mechanisms. Perhaps because of this tight coupling, authentication is sometimes mistakenly thought to imply authorization. Authentication simply identifies a party; authorization defines whether they can perform a certain action.

Authorization necessarily relies on authentication, but authentication alone does not imply authorization. Rather, before granting permission to perform an action, the authorization mechanism must be consulted to determine whether that action is permitted.

4.4.1. Access Control Lists

One common form of authorization mechanism is an access control list (ACL) that lists users that are permitted access to a resource. Since assigning individual authorization permissions to each resource is tedious, resources are often arranged hierarchically so that the parent resource's ACL is inherited by child resources. This allows administrators to set top-level policies and override them when necessary.

4.4.2.
Certificate Based Systems

While the distinction between authentication and authorization is intuitive when using simple authentication mechanisms such as username and password (i.e., everyone understands the difference between the administrator account and a user account), with more complex authentication mechanisms the distinction is sometimes lost.

With certificates, for instance, presenting a valid signature does not imply authorization. The signature must be backed by a certificate chain that contains a trusted root, and that root must be trusted in the given context. For instance, users who possess certificates issued by the Acme MIS CA may have different web access privileges than users who possess certificates issued by the Acme Accounting CA, even though both of these CAs are "trusted" by the Acme web server.

Mechanisms for enforcing these more complicated properties have not yet been completely explored. One approach is simply to attach policies to ACLs describing what sorts of certificates are trusted. Another approach is to carry that information with the certificate, either as a certificate extension/attribute [PKIX, SPKI] or as a separate "Attribute Certificate".

4.5. Providing Traffic Security

Securely designed protocols should provide some mechanism for securing (meaning integrity protecting, authenticating, and possibly encrypting) all sensitive traffic. One approach is to secure the protocol itself, as in [DNSSEC], [S/MIME], or [S-HTTP]. Although this provides security which is best fitted to the protocol, it also requires considerable effort to get right.

Many protocols can be adequately secured using one of the available channel security systems. We'll discuss the two most common, IPsec [AH, ESP] and [TLS].

4.5.1.
IPsec

The IPsec protocols (specifically, AH and ESP) can provide transmission security for all traffic between two hosts. The IPsec protocols support varying granularities of user identification, including for example "IP Subnet", "IP Address", "Fully Qualified Domain Name", and individual user ("Mailbox name"). These varying levels of identification are employed as inputs to access control facilities that are an intrinsic part of IPsec. However, a given IPsec implementation might not support all identity types. In particular, security gateways may not provide user-to-user authentication or have mechanisms to provide that authentication information to applications.

When AH or ESP is used, the application programmer might not need to do anything (if AH or ESP has been enabled system-wide) or might need to make specific software changes (e.g. adding specific setsockopt() calls), depending on the AH or ESP implementation being used. Unfortunately, APIs for controlling IPsec implementations are not yet standardized.

The primary obstacle to using IPsec to secure other protocols is deployment. The major use of IPsec at present is for VPN applications, especially for remote network access. Without extremely tight coordination between security administrators and application developers, VPN usage is not well suited to providing security services for individual applications, since it is difficult for such applications to determine what security services have in fact been provided.

IPsec deployment in host-to-host environments has been slow. Unlike application security systems such as TLS, adding IPsec to a non-IPsec system generally involves changing the operating system, either by tampering with the kernel or by installing new drivers. This is a substantially greater undertaking than simply installing a new application.
However, recent versions of a number of commodity operating systems include IPsec stacks, so deployment is becoming easier.

In environments where IPsec is sure to be available, it represents a viable option for protecting application communications traffic. If the traffic to be protected is UDP, IPsec and application-specific object security are the only options. However, designers MUST NOT assume that IPsec will be available. A security policy for a generic application layer protocol SHOULD NOT simply state that IPsec must be used, unless there is some reason to believe that IPsec will be available in the intended deployment environment. In environments where IPsec may not be available and the traffic is solely TCP, TLS is the method of choice, since the application developer can easily ensure its presence by including a TLS implementation in his package.

4.5.2. SSL/TLS

The currently most common approach is to use SSL or its successor TLS. They provide channel security for a TCP connection at the application level; that is, they run over TCP. SSL implementations typically provide a Berkeley Sockets-like interface for easy programming. The primary issue when designing a protocol solution around TLS is to differentiate between connections protected using TLS and those which are not.

The two primary approaches used are to have a separate well-known port for TLS connections (e.g. the HTTP over TLS port is 443) [HTTPTLS] or to have a mechanism for negotiating upward from the base protocol to TLS, as in [UPGRADE] or [STARTTLS]. When an upward negotiation strategy is used, care must be taken to ensure that an attacker cannot force a clear connection when both parties wish to use TLS.

Note that TLS depends upon a reliable protocol such as TCP or SCTP. This produces two notable difficulties.
First, it cannot be used to secure datagram protocols that use UDP. Second, TLS is susceptible to IP layer attacks that IPsec is not. Typically, these attacks take some form of denial of service or connection assassination. For instance, an attacker might forge a TCP RST to shut down SSL connections. TLS has mechanisms to detect truncation attacks, but these merely allow the victim to know he is being attacked and do not provide connection survivability in the face of such attacks. By contrast, if IPsec were being used, such a forged RST could be rejected without affecting the TCP connection.

4.5.3. Remote Login

In some special cases it may be worth providing channel-level security directly in the application rather than using IPsec or SSL/TLS. One such case is remote terminal security. Characters are typically delivered from client to server one character at a time. Since SSL/TLS and AH/ESP MAC and encrypt every packet, this can mean a data expansion of 20-fold. The telnet encryption option [ENCOPT] prevents this expansion by foregoing message integrity.

When using remote terminal service, it's often desirable to securely perform other sorts of communications services. In addition to providing remote login, SSH [SSH] also provides secure port forwarding for arbitrary TCP ports, thus allowing users to run arbitrary TCP-based applications over the SSH channel. Note that this capability also represents a security vulnerability in that it circumvents firewalls and may potentially expose insecure applications to the outside world.

4.6. Denial of Service Attacks and Countermeasures

Denial of service attacks are all too frequently viewed as a fact of life.
One problem is that an attacker can often choose from one of many denial of service attacks to inflict upon a victim, and because most of these attacks cannot be thwarted, common wisdom frequently holds that there is no point in protecting against one kind of denial of service attack when there are many other possible denial of service attacks that cannot be prevented.

However, not all denial of service attacks are equal and, more importantly, it is possible to design protocols such that denial of service attacks are made more difficult, if not impractical. Recent SYN flood attacks [TCPSYN] demonstrate both of these properties: SYN flood attacks are so easy, anonymous, and effective that they are more attractive to attackers than other attacks, and it is the design of TCP that enables this attack.

Authors of Internet standards MUST describe which denial of service attacks their protocol is susceptible to. This description MUST include the reasons it was either unreasonable or out of scope to attempt to avoid these denial of service attacks.

4.6.1. Blind Denial of Service

BLIND denial of service attacks are particularly pernicious. With a blind attack the attacker has a significant advantage. If the attacker must be able to receive traffic from the victim, then he must either subvert the routing fabric or use his own IP address. Either provides an opportunity for the victim to track the attacker and/or filter out his traffic. With a blind attack the attacker can use forged IP addresses, making it extremely difficult for the victim to filter out his packets. The TCP SYN flood attack is an example of a blind attack. Designers should make every attempt possible to prevent blind denial of service attacks.

4.6.2.
Distributed Denial of Service

Even more dangerous are DISTRIBUTED denial of service attacks (DDoS) [DDOS]. In a DDoS attack, the attacker arranges for a number of machines to attack the target machine simultaneously. Usually this is accomplished by infecting a large number of machines with a program that allows remote initiation of attacks. The machines actually performing the attack are called ZOMBIEs and are likely owned by unsuspecting third parties in an entirely different location from the true attacker. DDoS attacks can be very hard to counter because the zombies often appear to be making legitimate protocol requests and simply crowd out the real users. DDoS attacks can be difficult to thwart, but protocol designers are expected to be cognizant of these forms of attack while designing protocols.

4.6.3. Avoiding Denial of Service

There are two common approaches to making denial of service attacks more difficult:

4.6.3.1. Make your attacker do more work than you do

If an attacker consumes more of his resources than yours when launching an attack, attackers with fewer resources than you will be unable to launch effective attacks. One common technique is to require that the attacker perform a time-intensive operation, such as a cryptographic operation. Note that an attacker can still mount a denial of service attack if he can muster sufficient CPU power. For instance, this technique would not stop the distributed attacks described in [TCPSYN].

4.6.3.2. Make your attacker prove they can receive data from you

A blind attack can be subverted by forcing the attacker to prove that he can receive data from the victim. A common technique is to require that the attacker reply using information that was gained earlier in the message exchange.
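One way to implement such an echo-back check without keeping per-peer state is a keyed cookie, in the style of the Photuris anti-clogging mechanism discussed later in this section. The sketch below is illustrative only; the key rotation, time bucketing, and function names are assumptions, not a prescribed construction.

```python
import hashlib
import hmac
import os
import time

# Server-side time-variant secret. A real system would rotate this
# periodically (so stale cookies expire) and accept cookies from an
# overlap window -- this sketch omits that.
SECRET = os.urandom(32)

def make_cookie(peer_addr: str, epoch: int) -> bytes:
    """Derive a cookie from the peer's claimed address and a
    time-variant secret. No per-peer state is stored: the server
    simply recomputes the cookie when the peer echoes it back."""
    msg = "{}|{}".format(peer_addr, epoch).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).digest()

def check_cookie(peer_addr: str, epoch: int, cookie: bytes) -> bool:
    return hmac.compare_digest(make_cookie(peer_addr, epoch), cookie)

epoch = int(time.time()) // 60            # coarse time bucket
c = make_cookie("192.0.2.1", epoch)
assert check_cookie("192.0.2.1", epoch, c)        # genuine echo passes
assert not check_cookie("203.0.113.9", epoch, c)  # forged source fails
```

Because only a peer that actually receives the server's reply can echo the cookie, a blind attacker using forged source addresses cannot advance the exchange.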
If this countermeasure is used, the attacker must either use his own address (making him easy to track) or forge an address which will be routed back along a path that traverses the host from which the attack is being launched.

Hosts on small subnets are thus useless to the attacker (at least in the context of a spoofing attack) because the attack can be traced back to a subnet (which should be sufficient for locating the attacker) so that anti-attack measures can be put into place (for instance, a boundary router can be configured to drop all traffic from that subnet).

4.6.4. Example: TCP SYN Floods

TCP/IP is vulnerable to SYN flood attacks (which are described in section 3.3.2) because of the design of the 3-way handshake. First, an attacker can force a victim to consume significant resources (in this case, memory) by sending a single packet. Second, because the attacker can perform this action without ever having received data from the victim, the attack can be performed anonymously (and therefore using a large number of forged source addresses).

4.6.5. Example: Photuris

[PHOTURIS] implements an anti-clogging mechanism that prevents attacks on Photuris that resemble the SYN flood attack. Photuris employs a time-variant secret to generate a "cookie" which is returned to the attacker. This cookie must be returned in subsequent messages for the exchange to progress. The interesting feature is that this cookie can be re-generated by the victim later in the exchange, and thus no state need be retained by the victim until after the attacker has proven that he can receive packets from the victim.

4.7. Object vs.
Channel Security

It's useful to make the conceptual distinction between object security and channel security. Object security refers to security measures which apply to entire data objects. Channel security measures provide a secure channel over which objects may be carried transparently, but the channel has no special knowledge about object boundaries.

Consider the case of an email message. When it's carried over an IPsec- or TLS-secured connection, the message is protected during transmission. However, it is unprotected in the receiver's mailbox, and in intermediate spool files along the way. Moreover, since mail servers generally run as a daemon, not a user, authentication of messages generally means authentication of the daemon, not the user. Finally, since mail transport is hop-by-hop, even if the user authenticates to the first-hop relay, the authentication can't be safely verified by the receiver.

By contrast, when an email message is protected with S/MIME or OpenPGP, the entire message is encrypted and integrity protected until it is examined and decrypted by the recipient. This also provides strong authentication of the actual sender, as opposed to the machine the message came from. This is object security. Moreover, the receiver can prove the signed message's authenticity to a third party.

Note that the difference between object and channel security is a matter of perspective. Object security at one layer of the protocol stack often looks like channel security at the next layer up. So, from the perspective of the IP layer, each packet looks like an individually secured object. But from the perspective of a web client, IPsec just provides a secure channel.

The distinction isn't always clear-cut.
For example, S-HTTP provides object-level security for a single HTTP transaction, but a web page typically consists of multiple HTTP transactions (the base page and numerous inline images). Thus, from the perspective of the total web page, this looks rather more like channel security. Object security for a web page would consist of security for the transitive closure of the page and all its embedded content as a single unit.

5. Writing Security Considerations Sections

While it is not a requirement that any given protocol or system be immune to all forms of attack, it is still necessary for authors to consider them. Part of the purpose of the Security Considerations section is to explain what attacks are out of scope and what countermeasures can be applied to defend against them.

There should be a clear description of the kinds of threats on the described protocol or technology. This should be approached as an effort to perform "due diligence" in describing all known or foreseeable risks and threats to potential implementers and users.

Authors MUST describe

1. which attacks are out of scope (and why!)
2. which attacks are in-scope
2.1 and the protocol is susceptible to
2.2 and the protocol protects against

At least the following forms of attack MUST be considered: eavesdropping, replay, message insertion, deletion, modification, and man-in-the-middle. Potential denial of service attacks MUST be identified as well. If the protocol incorporates cryptographic protection mechanisms, it should be clearly indicated which portions of the data are protected and what the protections are (i.e., integrity only, confidentiality, and/or endpoint authentication, etc.). Some indication should also be given of what sorts of attacks the cryptographic protection is susceptible to. Data which should be held secret (keying material, random seeds, etc.)
should be clearly labeled.

If the technology involves authentication, particularly user-host authentication, the security of the authentication method MUST be clearly specified. That is, authors MUST document the assumptions that the security of this authentication method is predicated upon. For instance, in the case of the UNIX username/password login method, a statement to the effect of:

   Authentication in the system is secure only to the extent that it
   is difficult to guess or obtain an ASCII password that is a maximum
   of 8 characters long. These passwords can be obtained by sniffing
   telnet sessions or by running the 'crack' program using the
   contents of the /etc/passwd file. Attempts to protect against
   on-line password guessing by (1) disconnecting after several
   unsuccessful login attempts and (2) waiting between successive
   password prompts are effective only to the extent that attackers
   are impatient.

   Because the /etc/passwd file maps usernames to user ids, groups,
   etc., it must be world readable. In order to permit this usage but
   make running crack more difficult, the file is often split into
   /etc/passwd and a 'shadow' password file. The shadow file is not
   world readable and contains the encrypted password. The regular
   /etc/passwd file contains a dummy password in its place.

It is insufficient to simply state that one's protocol should be run over some lower layer security protocol. If a system relies upon lower layer security services for security, the protections those services are expected to provide MUST be clearly specified. In addition, the resultant properties of the combined system need to be specified.

Note: In general, the IESG will not approve standards track protocols which do not provide for strong authentication, either internal to the protocol or through tight binding to a lower layer security protocol.
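The offline guessing attack in the login example above can be made concrete. The sketch below is a minimal illustration of a dictionary attack against stolen password hashes; it assumes an unsalted hash for simplicity (the historical systems in the example used crypt(3), and modern ones use salted, iterated key derivation), and the word list and function names are illustrative.

```python
import hashlib
from typing import List, Optional

def hash_password(pw: str) -> str:
    # Stand-in for the system's password hashing function; unsalted
    # SHA-256 is used here only to keep the sketch self-contained.
    return hashlib.sha256(pw.encode()).hexdigest()

def dictionary_attack(stolen_hash: str,
                      wordlist: List[str]) -> Optional[str]:
    """Hash each candidate word and compare against the stolen hash.
    A match recovers the password -- which is why a world-readable
    file of password hashes is dangerous."""
    for word in wordlist:
        if hash_password(word) == stolen_hash:
            return word
    return None

stolen = hash_password("letmein")
assert dictionary_attack(stolen, ["password", "letmein", "qwerty"]) == "letmein"
assert dictionary_attack(stolen, ["password", "qwerty"]) is None
```

Note that the attack runs entirely offline, so the rate-limiting measures in the example (disconnecting after failed attempts, delays between prompts) do nothing against it; only keeping the hashes secret, as the shadow file does, raises the attacker's cost.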
The threat environment addressed by the Security Considerations section MUST at a minimum include deployment across the global Internet across multiple administrative boundaries without assuming that firewalls are in place, even if only to provide justification for why such consideration is out of scope for the protocol. It is not acceptable to only discuss threats applicable to LANs and ignore the broader threat environment. All IETF standards-track protocols are considered likely to have deployment in the global Internet. In some cases, there might be an Applicability Statement discouraging use of a technology or protocol in a particular environment. Nonetheless, the security issues of broader deployment should be discussed in the document.

There should be a clear description of the residual risk to the user or operator of that protocol after threat mitigation has been deployed. Such risks might arise from compromise in a related protocol (e.g. IPsec is useless if key management has been compromised), from incorrect implementation, from compromise of the security technology used for risk reduction (e.g. export-grade 40-bit ciphers), or from risks that are not addressed by the protocol specification (e.g. denial of service attacks on an underlying link protocol).

There should also be some discussion of potential security risks arising from potential misapplications of the protocol or technology described in the RFC. This might be coupled with an Applicability Statement for that RFC.

6. Examples

This section consists of some example security considerations sections, intended to give the reader a flavor of what's intended by this document.

The first example is a 'retrospective' example, applying the criteria of this document to a historical document, RFC-821.
The second example is a good security considerations section clipped from a current protocol.

6.1. SMTP

When RFC-821 was written, Security Considerations sections were not required in RFCs, and none is contained in that document. Had that document been written today, the Security Considerations section might look something like this:

6.1.1. SMTP Security Considerations

SMTP as-is provides no security precautions of any kind. Protection against all of the attacks we are about to describe must be provided by a different protocol layer.

A passive attack is sufficient to recover message text. No endpoint authentication is provided by the protocol. Sender spoofing is trivial, and therefore forging email messages is trivial. Some implementations do add header lines with hostnames derived through reverse name resolution (which is only secure to the extent that it is difficult to spoof DNS -- not very), although these header lines are normally not displayed to users. Receiver spoofing is also fairly straightforward, either using TCP connection hijacking or DNS spoofing. Moreover, since email messages often pass through SMTP gateways, all intermediate gateways must be trusted, a condition nearly impossible to meet on the global Internet.

Several approaches are available for alleviating these threats. In order of increasing level in the protocol stack, we have:

SMTP over IPsec
SMTP/TLS
S/MIME and PGP/MIME

6.1.1.1. SMTP over IPsec

An SMTP connection run over IPsec can provide confidentiality for the message between the sender and the first-hop SMTP gateway, or between any pair of connected SMTP gateways. That is to say, it provides channel security for the SMTP connections.
In a situation where the message goes directly from the client to
the receiver's gateway, this may provide substantial security
(though the receiver must still trust the gateway). Protection is
provided against replay attacks, since the data itself is protected
and the packets cannot be replayed.

Endpoint identification is a problem, however, unless the receiver's
address can be directly cryptographically authenticated. No sender
identification is available, since the sender's machine is
authenticated, not the sender himself. Furthermore, the identity of
the sender simply appears in the From header of the message, so it
is easily spoofable by the sender. Finally, unless the security
policy is set extremely strictly, there is also an active downgrade
to cleartext attack.

6.1.1.2. SMTP/TLS

SMTP can be combined with TLS as described in [STARTTLS]. This
provides similar protection to that provided when using IPSEC. Since
TLS certificates typically contain the server's host name, recipient
authentication may be slightly more obvious, but is still
susceptible to DNS spoofing attacks. Notably, common implementations
of TLS contain a US exportable (and hence low security) mode.
Applications desiring high security should ensure that this mode is
disabled. Protection is provided against replay attacks, since the
data itself is protected and the packets cannot be replayed. [note:
The Security Considerations section of the SMTP over TLS draft is
quite good and bears reading as an example of how to do things.]

6.1.1.3. S/MIME and PGP/MIME

S/MIME and PGP/MIME are both message oriented security protocols.
They provide object security for individual messages. With various
settings, sender and recipient authentication and confidentiality
may be provided.
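The object security these protocols provide can be made concrete
with a toy sketch. This is not S/MIME or PGP/MIME (which use
public-key signatures and MIME encodings); it is a minimal
illustration, using a hypothetical shared key and an HMAC, of how
object security binds protection to the message itself, so that the
protection survives transit through untrusted SMTP gateways:

```python
import hashlib
import hmac

def sign_message(key: bytes, body: bytes) -> bytes:
    # Object security: the tag is computed over the message itself,
    # not over any particular connection it travels across.
    return hmac.new(key, body, hashlib.sha256).digest()

def verify_message(key: bytes, body: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_message(key, body), tag)

key = b"shared-secret"            # hypothetical sender/recipient key
body = b"From: alice\r\n\r\nHello"
tag = sign_message(key, body)

# The (body, tag) pair can be relayed through any number of untrusted
# SMTP gateways and still verifies at the final recipient...
assert verify_message(key, body, tag)
# ...while any in-transit modification is detected.
assert not verify_message(key, body + b"!", tag)
```

By contrast, channel security (IPSEC, TLS) terminates at each hop,
so every intermediate gateway sees, and could alter, the unprotected
message.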
More importantly, the identification is not of the sending and
receiving machines, but rather of the sender and recipient
themselves. (Or, at least, of cryptographic keys corresponding to
the sender and recipient.) Consequently, end-to-end security may be
obtained. Note, however, that no protection is provided against
replay attacks.

6.1.1.4. Denial of Service

None of these security measures provides any real protection against
denial of service. SMTP connections can easily be used to tie up
system resources in a number of ways, including excessive port
consumption, excessive disk usage (email is typically delivered to
disk files), and excessive memory consumption (sendmail, for
instance, is fairly large, and typically forks a new process to deal
with each message.)

6.1.1.5. Inappropriate Usage

In particular, there is no protection provided against unsolicited
mass email (aka SPAM).

SMTP also includes several commands which may be used by attackers
to explore the machine on which the SMTP server runs. The VRFY
command permits an attacker to convert user-names to mailbox names
and often real names. This is often useful in mounting a password
guessing attack, as many users use their name as their password.
EXPN permits an attacker to expand an email list to the names of the
subscribers. This may be used in order to generate a list of
legitimate users in order to attack their accounts, as well as to
build mailing lists for future SPAM. Administrators may choose to
disable these commands.

6.2. VRRP

The second example is from VRRP, the Virtual Router Redundancy
Protocol [VRRP]. We reproduce here the Security Considerations
section from that document (with new section numbers). Our comments
are indented and prefaced with 'NOTE:'.

6.2.1. Security Considerations
VRRP is designed for a range of internetworking environments that
may employ different security policies. The protocol includes
several authentication methods ranging from no authentication,
simple clear text passwords, and strong authentication using IP
Authentication with MD5 HMAC. The details on each approach,
including possible attacks and recommended environments, follow.

Independent of any authentication type, VRRP includes a mechanism
(setting TTL=255, checking on receipt) that protects against VRRP
packets being injected from another remote network. This limits most
vulnerabilities to local attacks.

   NOTE: The security measures discussed in the following sections
   only provide various kinds of authentication. No confidentiality
   is provided at all. This should be explicitly described as
   outside the scope.

6.2.1.1. No Authentication

The use of this authentication type means that VRRP protocol
exchanges are not authenticated. This type of authentication SHOULD
only be used in environments where there is minimal security risk
and little chance for configuration errors (e.g., two VRRP routers
on a LAN).

6.2.1.2. Simple Text Password

The use of this authentication type means that VRRP protocol
exchanges are authenticated by a simple clear text password.

This type of authentication is useful to protect against accidental
misconfiguration of routers on a LAN. It protects against routers
inadvertently backing up another router. A new router must first be
configured with the correct password before it can run VRRP with
another router. This type of authentication does not protect against
hostile attacks where the password can be learned by a node snooping
VRRP packets on the LAN.
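How little the clear text password buys against a snooping node is
easy to demonstrate. In the [VRRP] packet layout, with Auth Type 1
the password sits unencrypted in the 8-octet Authentication Data
field at the end of the VRRP message, so a passive listener recovers
it with a one-line slice. A minimal sketch (the payload bytes below
are fabricated for illustration; a real sniffer would capture frames
with libpcap or similar):

```python
# Illustrative only: recover the cleartext password from a captured
# VRRPv1 ([VRRP]) message with Auth Type = 1.  The Authentication
# Data field occupies the final 8 octets of the VRRP payload.
def sniffed_password(vrrp_payload: bytes) -> bytes:
    auth_data = vrrp_payload[-8:]      # final 8 octets of the message
    return auth_data.rstrip(b"\x00")   # password, NUL-padded on the wire

# Fabricated payload: 8 octets of header fields, one 4-octet virtual
# IP address, then the password "vrrp-pw" padded to 8 octets.
fake_payload = bytes(8) + bytes(4) + b"vrrp-pw\x00"
print(sniffed_password(fake_payload))  # b'vrrp-pw'
```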
The Simple Text Authentication combined with the TTL check makes it
difficult for a VRRP packet to be sent from another LAN to disrupt
VRRP operation.

This type of authentication is RECOMMENDED when there is minimal
risk of nodes on a LAN actively disrupting VRRP operation. If this
type of authentication is used, the user should be aware that this
clear text password is sent frequently, and therefore should not be
the same as any security significant password.

6.2.1.3. IP Authentication Header

The use of this authentication type means the VRRP protocol
exchanges are authenticated using the mechanisms defined by the IP
Authentication Header [AH] using [HMAC]. This provides strong
protection against configuration errors, replay attacks, and packet
corruption/modification.

This type of authentication is RECOMMENDED when there is limited
control over the administration of nodes on a LAN. While this type
of authentication does protect the operation of VRRP, there are
other types of attacks that may be employed on shared media links
(e.g., generation of bogus ARP replies) which are independent from
VRRP and are not protected.

   NOTE: Specifically, although securing VRRP prevents unauthorized
   machines from taking part in the election protocol, it does not
   protect hosts on the network from being deceived. For example, a
   gratuitous ARP reply from what purports to be the virtual
   router's IP address can redirect traffic to an unauthorized
   machine. Similarly, individual connections can be diverted by
   means of forged ICMP Redirect messages.

Acknowledgments

This document is heavily based on a note written by Ran Atkinson in
1997. That note was written after the IAB Security Workshop held in
early 1997, based on input from everyone at that workshop.
Some of the specific text above was taken from Ran's original
document, and some of that text was taken from an email message
written by Fred Baker. The other primary source for this document is
specific comments received from Steve Bellovin. Early review of this
document was done by Lisa Dusseault and Mark Schertler.

References

[AH]       Kent, S., and Atkinson, R., "IP Authentication Header",
           RFC 2402, November 1998.

[DDOS]     CERT, "Denial-Of-Service Tools", CERT Advisory
           CA-1999-17, 28 December 1999,
           http://www.cert.org/advisories/CA-1999-17.html.

[DNSSEC]   Eastlake, D., "Domain Name System Security Extensions",
           RFC 2535, March 1999.

[EKE]      Bellovin, S., and Merritt, M., "Encrypted Key Exchange:
           Password-based protocols secure against dictionary
           attacks", Proceedings of the IEEE Symposium on Research
           in Security and Privacy, May 1992.

[ENCOPT]   Ts'o, T., "Telnet Data Encryption Option", RFC 2946,
           September 2000.

[ESP]      Kent, S., and Atkinson, R., "IP Encapsulating Security
           Payload (ESP)", RFC 2406, November 1998.

[HTTPTLS]  Rescorla, E., "HTTP over TLS", RFC 2818, May 2000.

[HMAC]     Krawczyk, H., Bellare, M., and Canetti, R., "HMAC:
           Keyed-Hashing for Message Authentication", RFC 2104,
           February 1997.

[IPSPPROB] Bellovin, S. M., "Problem Areas for the IP Security
           Protocols", Proceedings of the Sixth Usenix UNIX
           Security Symposium, July 1996.

[KLEIN]    Klein, D.V., "Foiling the Cracker: A Survey of and
           Improvements to Password Security", 1990.

[NNTP]     Kantor, B., and Lapsley, P., "Network News Transfer
           Protocol", RFC 977, February 1986.

[OTP]      Haller, N., Metz, C., Nesser, P., and Straw, M., "A
           One-Time Password System", RFC 2289, February 1998.

[PHOTURIS] Karn, P., and Simpson, W., "Photuris: Session-Key
           Management Protocol", RFC 2522, March 1999.
[PKIX]     Housley, R., Ford, W., Polk, W., and Solo, D., "Internet
           X.509 Public Key Infrastructure Certificate and CRL
           Profile", RFC 2459, January 1999.

[POP]      Myers, J., and Rose, M., "Post Office Protocol - Version
           3", RFC 1939, May 1996.

[RFC-2223] Postel, J., and Reynolds, J., "Instructions to RFC
           Authors", RFC 2223, October 1997.

[SEQNUM]   Morris, R.T., "A Weakness in the 4.2BSD UNIX TCP/IP
           Software", AT&T Bell Laboratories, CSTR 117, 1985.

[SPKI]     Ellison, C., Frantz, B., Lampson, B., Rivest, R.,
           Thomas, B., and Ylonen, T., "SPKI Certificate Theory",
           RFC 2693, September 1999.

[SPEKE]    Jablon, D., "Strong Password-Only Authenticated Key
           Exchange", Computer Communication Review, ACM SIGCOMM,
           vol. 26, no. 5, pp. 5-26, October 1996.

[SRP]      Wu, T., "The Secure Remote Password Protocol", ISOC NDSS
           Symposium, 1998.

[SSH]      Ylonen, T., "SSH - Secure Login Connections Over the
           Internet", 6th USENIX Security Symposium, pp. 37-42,
           July 1996.

[STARTTLS] Hoffman, P., "SMTP Service Extension for Secure SMTP
           over TLS", RFC 2487, January 1999.

[S-HTTP]   Rescorla, E., and Schiffman, A., "The Secure HyperText
           Transfer Protocol", RFC 2660, August 1999.

[S/MIME]   Ramsdell, B., Ed., "S/MIME Version 3 Message
           Specification", RFC 2633, June 1999.

[TELNET]   Postel, J., and Reynolds, J., "Telnet Protocol
           Specification", RFC 854, May 1983.

[TLS]      Dierks, T., and Allen, C., "The TLS Protocol Version
           1.0", RFC 2246, January 1999.

[TCPSYN]   CERT, "TCP SYN Flooding and IP Spoofing Attacks", CERT
           Advisory CA-1996-21, 19 September 1996,
           http://www.cert.org/advisories/CA-1996-21.html.

[UPGRADE]  Khare, R., and Lawrence, S., "Upgrading to TLS Within
           HTTP/1.1", RFC 2817, May 2000.

[URL]      Berners-Lee, T., Masinter, L., and McCahill, M.,
           "Uniform Resource Locators (URL)", RFC 1738, December
           1994.
[VRRP]     Knight, S., Weaver, D., Whipple, D., Hinden, R., Mitzel,
           D., Hunt, P., Higginson, P., Shand, M., and Lindem, A.,
           "Virtual Router Redundancy Protocol", RFC 2338, April
           1998.

[WEP]      Borisov, N., Goldberg, I., and Wagner, D., "Intercepting
           Mobile Communications: The Insecurity of 802.11",
           http://www.isaac.cs.berkeley.edu/isaac/wep-draft.pdf.

Security Considerations

This entire document is about security considerations.

Authors' Addresses

Eric Rescorla
RTFM, Inc.
2439 Alvin Drive
Mountain View, CA 94043
Phone: (650)-320-8549

Brian Korver
Xythos Software
77 Maiden Lane, Suite 200
San Francisco, CA, USA
Phone: (415)-248-3800

Table of Contents

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . ii
2. The Goals of Security . . . . . . . . . . . . . . . . . . . . . . ii
2.1. Communication Security . . . . . . . . . . . . . . . . . . . . iii
2.1.1. Confidentiality . . . . . . . . . . . . . . . . . . . . . . . iii
2.1.2. Data Integrity . . . . . . . . . . . . . . . . . . . . . . . iii
2.1.3. Peer Entity Authentication . . . . . . . . . . . . . . . . . iv
2.2. Non-Repudiation . . . . . . . . . . . . . . . . . . . . . . . . iv
2.3. Systems Security . . . . . . . . . . . . . . . . . . . . . . . v
2.3.1. Unauthorized Usage . . . . . . . . . . . . . . . . . . . . . v
2.3.2. Inappropriate Usage . . . . . . . . . . . . . . . . . . . . . v
2.3.3. Denial of Service . . . . . . . . . . . . . . . . . . . . . . v
3. The Internet Threat Model . . . . . . . . . . . . . . . . . . . . v
3.1. Limited Threat Models . . . . . . . . . . . . . . . . . . . . . vi
3.2. Passive Attacks . . . . . . . . . . . . . . . . . . . . . . . . vii
3.2.1. Confidentiality Violations . . . . . . . . . . . . . . . . . vii
3.2.2. Password Sniffing . . . . . . . . . . . . . . . . . . . . . . vii
3.2.3. Offline Cryptographic Attacks . . . . . . . . . . . . . . . . viii
3.3. Active Attacks . . . . . . . . . . . . . . . . . . . . . . . . viii
3.3.1. Replay Attacks . . . . . . . . . . . . . . . . . . . . . . . ix
3.3.2. Message Insertion . . . . . . . . . . . . . . . . . . . . . . ix
3.3.3. Message Deletion . . . . . . . . . . . . . . . . . . . . . . x
3.3.4. Message Modification . . . . . . . . . . . . . . . . . . . . x
3.3.5. Man-In-The-Middle . . . . . . . . . . . . . . . . . . . . . . xi
4. Common Issues . . . . . . . . . . . . . . . . . . . . . . . . . . xi
4.1. User Authentication . . . . . . . . . . . . . . . . . . . . . . xii
4.1.1. Username/Password . . . . . . . . . . . . . . . . . . . . . . xii
4.1.2. Challenge Response and One Time Passwords . . . . . . . . . . xii
4.1.3. Certificates . . . . . . . . . . . . . . . . . . . . . . . . xiii
4.1.4. Some Uncommon Systems . . . . . . . . . . . . . . . . . . . . xiii
4.1.5. Host Authentication . . . . . . . . . . . . . . . . . . . . . xiii
4.2. Generic Security Frameworks . . . . . . . . . . . . . . . . . . xiii
4.3. Non-repudiation . . . . . . . . . . . . . . . . . . . . . . . . xiv
4.4. Authorization vs. Authentication . . . . . . . . . . . . . . . xv
4.4.1. Access Control Lists . . . . . . . . . . . . . . . . . . . . xv
4.4.2. Certificate Based Systems . . . . . . . . . . . . . . . . . . xvi
4.5. Providing Traffic Security . . . . . . . . . . . . . . . . . . xvi
4.5.1. IPsec . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
4.5.2. SSL/TLS . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
4.5.3. Remote Login . . . . . . . . . . . . . . . . . . . . . . . . xviii
4.6. Denial of Service Attacks and Countermeasures . . . . . . . . . xviii
4.6.1. Blind Denial of Service . . . . . . . . . . . . . . . . . . . xix
4.6.2. Distributed Denial of Service . . . . . . . . . . . . . . . . xix
4.6.3. Avoiding Denial of Service . . . . . . . . . . . . . . . . . xix
4.6.3.1. Make your attacker do more work than you do . . . . . . . . xx
4.6.3.2. Make your attacker prove they can receive data from you . . xx
4.6.4. Example: TCP SYN Floods . . . . . . . . . . . . . . . . . . . xx
4.6.5. Example: Photuris . . . . . . . . . . . . . . . . . . . . . . xx
4.7. Object vs. Channel Security . . . . . . . . . . . . . . . . . . xxi
5. Writing Security Considerations Sections . . . . . . . . . . . . xxii
6. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
6.1. SMTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
6.1.1. SMTP Security Considerations . . . . . . . . . . . . . . . . xxiv
6.1.1.1. SMTP over IPSEC . . . . . . . . . . . . . . . . . . . . . . xxiv
6.1.1.2. SMTP/TLS . . . . . . . . . . . . . . . . . . . . . . . . . xxv
6.1.1.3. S/MIME and PGP/MIME . . . . . . . . . . . . . . . . . . . . xxv
6.1.1.4. Denial of Service . . . . . . . . . . . . . . . . . . . . . xxv
6.1.1.5. Inappropriate Usage . . . . . . . . . . . . . . . . . . . . xxvi
6.2. VRRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi
6.2.1. Security Considerations . . . . . . . . . . . . . . . . . . . xxvi
6.2.1.1. No Authentication . . . . . . . . . . . . . . . . . . . . . xxvii
6.2.1.2. Simple Text Password . . . . . . . . . . . . . . . . . . . xxvii
6.2.1.3. IP Authentication Header . . . . . . . . . . . . . . . . . xxvii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . xxviii
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxviii
Security Considerations . . . . . . . . . . . . . . . . . . . . . . xxx
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . xxx