                                                             E. Rescorla
                                                              RTFM, Inc.
                                                               B. Korver
INTERNET-DRAFT                                           Xythos Software
April 2002 (Expires October 2002)

       Guidelines for Writing RFC Text on Security Considerations

Status of this Memo

This document is an Internet-Draft and is in full conformance with
all provisions of Section 10 of RFC 2026. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as ``work in progress.''

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

1. Introduction

All RFCs are required by [RFC 2223] to contain a Security
Considerations section. The purpose of this is both to encourage
document authors to consider security in their designs and to inform
the reader of relevant security issues. This memo is intended to
provide guidance to RFC authors in service of both ends.

This document is structured in three parts. The first is a
combination security tutorial and definition of common terms; the
second is a series of guidelines for writing Security Considerations;
the third is a series of examples.

2. The Goals of Security

Most people speak of security as if it were a single monolithic
property of a protocol or system, but upon reflection that's very
clearly not true. Rather, security is a series of related but
somewhat independent properties. Not all of these properties are
required for every application.

We can loosely divide security goals into those related to protecting
communications (COMMUNICATION SECURITY, also known as COMSEC) and
those relating to protecting systems (ADMINISTRATIVE SECURITY or
SYSTEM SECURITY). Since communications are carried out by systems and
access to systems is through communications channels, these goals
obviously interlock, but they can also be independently provided.

2.1. Communication Security

Different authors partition the goals of communication security
differently. The partitioning we've found most useful is to divide
them into three major categories: CONFIDENTIALITY, DATA INTEGRITY and
PEER ENTITY AUTHENTICATION.

2.1.1. Confidentiality

When most people think of security, they think of CONFIDENTIALITY.
Confidentiality means that your data is kept secret from unintended
listeners. Usually, these listeners are simply eavesdroppers. When an
adversary taps your phone, that poses a risk to your confidentiality.

Obviously, if you have secrets, you're concerned that no-one else
knows them and so at minimum you want confidentiality.
When you see spies in the movies go into the bathroom and turn on all
the water to foil bugging, the property they're looking for is
confidentiality.

2.1.2. Data Integrity

The second primary goal is DATA INTEGRITY. The basic idea here is
that we want to be sure that the data we receive is the same data
that the sender sent. In paper-based systems, some data integrity
comes automatically. When you receive a letter written in pen you can
be fairly certain that no words have been removed by an attacker
because pen marks are difficult to remove from paper. However, an
attacker could have easily added some marks to the paper and
completely changed the meaning of the message. Similarly, it's easy
to shorten the page to truncate the message.

On the other hand, in the electronic world, since all bits look
alike, it's trivial to tamper with messages in transit. You simply
remove the message from the wire, copy out the parts you like, add
whatever data you want, and generate a new message of your choosing,
and the recipient is no wiser. This is the moral equivalent of the
attacker taking a letter you wrote, buying some new paper and
recopying the message, changing it as he does it. It's just a lot
easier to do electronically since all bits look alike.

2.1.3. Peer Entity Authentication

The third property we're concerned with is PEER ENTITY
AUTHENTICATION. What we mean by this is that we know that one of the
endpoints in the communication is the one we intended. Without peer
entity authentication, it's very difficult to provide either
confidentiality or data integrity. For instance, if we receive a
message from Alice, the property of data integrity doesn't do us much
good unless we know that it was in fact sent by Alice and not the
attacker.
Similarly, if we want to send a confidential message to Bob, it's not
of much value to us if we're actually sending a confidential message
to the attacker.

Note that peer entity authentication can be provided asymmetrically.
When you call someone on the phone, you can be fairly certain that
you have the right person -- or at least that you got a person who's
actually at the phone number you called. On the other hand, if they
don't have caller ID, then the receiver of a phone call has no idea
who's calling them. Calling someone on the phone is an example of
recipient authentication, since you know who the recipient of the
call is, but they don't know anything about the sender.

In messaging situations, you often wish to use peer entity
authentication to establish the identity of the sender of a certain
message. In such contexts, this property is called DATA ORIGIN
AUTHENTICATION.

2.2. Non-Repudiation

A system that provides endpoint authentication allows one party to be
certain of the identity of someone with whom he is communicating.
When the system provides data integrity a receiver can be sure of
both the sender's identity and that he is receiving the data that
that sender meant to send. However, he cannot necessarily demonstrate
this fact to a third party. The ability to make this demonstration is
called NON-REPUDIATION.

There are many situations in which non-repudiation is desirable.
Consider the situation in which two parties have signed a contract
which one party wishes to unilaterally abrogate. He might simply
claim that he had never signed it in the first place. Non-repudiation
prevents him from doing so, thus protecting the counterparty.
Unfortunately, non-repudiation can be very difficult to achieve in
practice and naive approaches are generally inadequate.
Section 4.3 describes some of the difficulties, which generally stem
from the fact that the interests of the two parties are not
aligned--one party wishes to prove something that the other party
wishes to deny.

2.3. Systems Security

In general, systems security is concerned with protecting one's
machines and data. The intent is that machines should be used only by
authorized users and for the purposes that the owners intend.
Furthermore, they should be available for those purposes. Attackers
should not be able to deprive legitimate users of resources.

2.3.1. Unauthorized Usage

Most systems are not intended to be completely accessible to the
public. Rather, they are intended to be used only by certain
authorized individuals. Although many Internet services are available
to all Internet users, even those servers generally offer a larger
set of services to specific users. For instance, Web servers often
will serve data to any user, but restrict the ability to modify pages
to specific users. Such modifications by the general public would be
UNAUTHORIZED USAGE.

2.3.2. Inappropriate Usage

Being an authorized user does not mean that you have free run of the
system. As we said above, some activities are restricted to
authorized users, some to specific users, and some activities are
generally forbidden to all but administrators. Moreover, even
activities which are in general permitted might be forbidden in some
cases. For instance, users may be permitted to send email but
forbidden from sending files above a certain size, or files which
contain viruses. These are examples of INAPPROPRIATE USAGE.

2.3.3. Denial of Service

Recall that our third goal was that the system should be available to
legitimate users. A broad variety of attacks are possible which
threaten such usage.
Such attacks are collectively referred to as DENIAL OF SERVICE
attacks. Denial of service attacks are often very easy to mount and
difficult to stop. Many such attacks are designed to consume machine
resources, making it difficult or impossible to serve legitimate
users. Other attacks cause the target machine to crash, completely
denying service to users.

3. The Internet Threat Model

A THREAT MODEL describes the capabilities that an attacker is assumed
to be able to deploy against a resource. It should contain such
information as the resources available to an attacker in terms of
information, computing capability, and control of the system. The
purpose of a threat model is twofold. First, we wish to identify the
threats we are concerned with. Second, we wish to rule some threats
explicitly out of scope. Nearly every security system is vulnerable
to a sufficiently dedicated and resourceful attacker.

The Internet environment has a fairly well understood threat model.
In general, we assume that the end-systems engaging in a protocol
exchange have not themselves been compromised. Protecting against an
attack when one of the end-systems has been compromised is
extraordinarily difficult. It is, however, possible to design
protocols which minimize the extent of the damage done under these
circumstances.

By contrast, we assume that the attacker has nearly complete control
of the communications channel over which the end-systems communicate.
This means that the attacker can read any PDU (Protocol Data Unit) on
the network and undetectably remove, change, or inject forged packets
onto the wire. This includes being able to generate packets that
appear to be from a trusted machine.
Thus, even if the end-system with which you wish to communicate is
itself secure, the Internet environment provides no assurance that
packets which claim to be from that system in fact are.

It's important to realize that the meaning of a PDU is different at
different levels. At the IP level, a PDU means an IP packet. At the
TCP level, it means a TCP segment. At the application layer, it means
some kind of application PDU. For instance, at the level of email, it
might either mean an RFC-822 message or a single SMTP command. At the
HTTP level, it might mean a request or response.

3.1. Limited Threat Models

As we've said, a resourceful and dedicated attacker can control the
entire communications channel. However, a large number of attacks can
be mounted by an attacker with fewer resources. A number of currently
known attacks can be mounted by an attacker with limited control of
the network. For instance, password sniffing attacks can be mounted
by an attacker who can only read arbitrary packets. This is generally
referred to as a PASSIVE ATTACK [INTAUTH].

By contrast, Morris's sequence number guessing attack [SEQNUM] can be
mounted by an attacker who can write but not read arbitrary packets.
Any attack which requires the attacker to write to the network is
known as an ACTIVE ATTACK.

Thus, a useful way of organizing attacks is to divide them based on
the capabilities required to mount the attack. The rest of this
section describes these categories and provides some examples of each
category.

3.2. Passive Attacks

In a passive attack, the attacker reads packets off the network but
does not write them. The simplest way to mount such an attack is to
simply be on the same LAN as the victim.
On most common LAN configurations, including Ethernet, 802.3, and
FDDI, any machine on the wire can read all traffic destined for any
other machine on the same LAN. Note that switching hubs make this
sort of sniffing substantially more difficult, since traffic destined
for a machine only goes to the network segment which that machine is
on.

Similarly, an attacker who has control of a host in the
communications path between two victim machines is able to mount a
passive attack on their communications. It is also possible to
compromise the routing infrastructure to specifically arrange that
traffic passes through a compromised machine. This might involve an
active attack on the routing infrastructure to facilitate a passive
attack on a victim machine.

Wireless communications channels deserve special consideration,
especially with the recent and growing popularity of wireless-based
LANs, such as those using 802.11. Since the data is simply broadcast
on well-known radio frequencies, an attacker simply needs to be able
to receive those transmissions. Such channels are especially
vulnerable to passive attacks. Although many such channels include
cryptographic protection, it is often of such poor quality as to be
nearly useless [WEP].

In general, the goal of a passive attack is to obtain information
which the sender and receiver would rather remain private. Examples
of such information include credentials useful in the electronic
world such as passwords or credentials useful in the outside world,
such as confidential business information.

3.2.1. Confidentiality Violations

The classic example of passive attack is sniffing some inherently
private data off of the wire. For instance, despite the wide
availability of SSL, many credit card transactions still traverse the
Internet in the clear.
An attacker could sniff such a message and recover the credit card
number, which can then be used to make fraudulent transactions.
Moreover, confidential business information is routinely transmitted
over the network in the clear in email.

3.2.2. Password Sniffing

Another example of a passive attack is PASSWORD SNIFFING. Password
sniffing is directed towards obtaining unauthorized use of resources.

Many protocols, including [TELNET], [POP], and [NNTP], use a shared
password to authenticate the client to the server. Frequently, this
password is transmitted from the client to the server in the clear
over the communications channel. An attacker who can read this
traffic can therefore capture the password and REPLAY it. That is to
say that he can initiate a connection to the server, pose as the
client, and log in using the captured password.

Note that although the login phase of the attack is active, the
actual password capture phase is passive. Moreover, unless the server
checks the originating address of connections, the login phase does
not require any special control of the network.

3.2.3. Offline Cryptographic Attacks

Many cryptographic protocols are subject to OFFLINE ATTACKS. In such
a protocol, the attacker recovers data which has been processed using
the victim's secret key and then mounts a cryptanalytic attack on
that key. Passwords make a particularly vulnerable target because
they are typically low entropy. A number of popular password-based
challenge-response protocols are vulnerable to DICTIONARY ATTACK. The
attacker captures a challenge-response pair and then proceeds to try
entries from a list of common words (such as a dictionary file) until
he finds a password that produces the right response.

A similar such attack can be mounted on a local network when NIS is
used.
The Unix password is crypted using a one-way function, but tools
exist to break such crypted passwords [KLEIN]. When NIS is used, the
crypted password is transmitted over the local network and an
attacker can thus sniff the password and attack it.

Historically, it has also been possible to exploit small operating
system security holes to recover the password file using an active
attack. These holes can then be bootstrapped into an actual account
by using the aforementioned offline password recovery techniques.
Thus we combine a low-level active attack with an offline passive
attack.

3.3. Active Attacks

When an attack involves writing data to the network, we refer to this
as an ACTIVE ATTACK. When IP is used without IPsec, there is no
authentication for the sender address. As a consequence, it's
straightforward for an attacker to create a packet with a source
address of his choosing. We'll refer to this as a SPOOFING ATTACK.

Under certain circumstances, such a packet may be screened out by the
network. For instance, many packet filtering firewalls screen out all
packets with source addresses on the INTERNAL network that arrive on
the EXTERNAL interface. Note, however, that this provides no
protection against an attacker who is inside the firewall. In
general, designers should assume that attackers can forge packets.

However, the ability to forge packets does not go hand in hand with
the ability to receive arbitrary packets. In fact, there are active
attacks that involve being able to send forged packets but not
receive the responses. We'll refer to these as BLIND ATTACKS.

Note that not all active attacks require forging addresses. For
instance, the TCP SYN denial of service attack [TCPSYN] can be
mounted successfully without disguising the sender's address.
However, it is common practice to disguise one's address in order to
conceal one's identity if an attack is discovered.

Each protocol is susceptible to specific active attacks, but
experience shows that a number of common patterns of attack can be
adapted to any given protocol. The next sections describe a number of
these patterns and give specific examples of them as applied to known
protocols.

3.3.1. Replay Attacks

In a REPLAY ATTACK, the attacker records a sequence of messages off
of the wire and plays them back to the party which originally
received them. Note that the attacker does not need to be able to
understand the messages. He merely needs to capture and retransmit
them.

For example, consider the case where an S/MIME message is being used
to request some service, such as a credit card purchase or a stock
trade. An attacker might wish to have the service executed twice, if
only to inconvenience the victim. He could capture the message and
replay it, even though he can't read it, causing the transaction to
be executed twice.

3.3.2. Message Insertion

In a MESSAGE INSERTION attack, the attacker forges a message with
some chosen set of properties and injects it into the network. Often
this message will have a forged source address in order to disguise
the identity of the attacker.

For example, a denial-of-service attack can be mounted by inserting a
series of spurious TCP SYN packets directed towards the target host.
The target host responds with its own SYN and allocates kernel data
structures for the new connection. The attacker never completes the
3-way handshake, so the allocated connection endpoints just sit there
taking up kernel memory.
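
As a toy illustration of this resource exhaustion, the sketch below
models a half-open connection table filling up. The backlog size, the
addresses, and the function name are assumptions made for the
example, not values taken from any real TCP implementation:

```python
# Toy model of SYN flood resource exhaustion. BACKLOG and the
# addresses are illustrative assumptions, not real stack parameters.
BACKLOG = 128
half_open = set()   # connections that got a SYN but never the final ACK

def on_syn(src_addr, src_port):
    """Sketch of server-side handling of an incoming SYN."""
    if len(half_open) >= BACKLOG:
        return "dropped"       # table full: new connections are refused
    half_open.add((src_addr, src_port))
    return "syn-ack sent"

# A blind attacker sends SYNs from forged sources and never completes
# the handshake, so the entries are never cleared.
for i in range(200):
    on_syn("10.0.0.%d" % (i % 250), 1024 + i)

print(on_syn("192.0.2.1", 5000))   # prints "dropped": a legitimate
                                   # client is now turned away
```
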
Typical TCP stack implementations only allow some limited number of
connections in this "half-open" state and when this limit is reached,
no more connections can be initiated, even from legitimate hosts.
Note that this attack is a blind attack, since the attacker does not
need to process the victim's SYNs.

3.3.3. Message Deletion

In a MESSAGE DELETION attack, the attacker removes a message from the
wire. Morris's sequence number guessing attack [SEQNUM] often
requires a message deletion attack to be performed successfully. In
this blind attack, the host whose address is being forged will
receive a spurious TCP SYN packet from the host being attacked.
Receipt of this SYN packet generates a RST, which would tear the
illegitimate connection down. In order to prevent this host from
sending a RST so that the attack can be carried out successfully,
Morris describes flooding this host to create queue overflows such
that the SYN packet is lost and thus never responded to.

3.3.4. Message Modification

In a MESSAGE MODIFICATION attack, the attacker removes a message from
the wire, modifies it, and reinjects it into the network. This sort
of attack is particularly useful if the attacker wants to send some
of the data in the message but also wants to change some of it.

Consider the case where the attacker wants to attack an order for
goods placed over the Internet. He doesn't have the victim's credit
card number so he waits for the victim to place the order and then
replaces the delivery address (and possibly the goods description)
with his own. Note that this particular attack is known as a
CUT-AND-PASTE attack since the attacker cuts the credit card number
out of the original message and pastes it into the new message.

Another interesting example of a cut-and-paste attack is provided by
[IPSPPROB].
If IPsec ESP is used without any MAC then it is possible for the
attacker to read traffic encrypted for a victim on the same machine.
The attacker attaches an IP header corresponding to a port he
controls onto the encrypted IP packet. When the packet is received by
the host it will automatically be decrypted and forwarded to the
attacker's port. Similar techniques can be used to mount a session
hijacking attack. Both of these attacks can be avoided by always
using message authentication when you use encryption. Note that this
attack only works if (1) no MAC check is being used, since this
attack generates damaged packets, and (2) a host-to-host SA is being
used, since a user-to-user SA will result in an inconsistency between
the port associated with the SA and the target port. If the receiving
machine is single-user then this attack is infeasible.

3.3.5. Man-In-The-Middle

A MAN-IN-THE-MIDDLE attack combines the above techniques in a special
form: the attacker subverts the communication stream in order to pose
as the sender to the receiver and as the receiver to the sender:

     What Alice and Bob think:
     Alice <----------------------------------------------> Bob

     What's happening:
     Alice <----------------> Attacker <----------------> Bob

This differs fundamentally from the above forms of attack because it
attacks the identity of the communicating parties, rather than the
data stream itself. Consequently, many techniques which provide
integrity of the communications stream are insufficient to protect
against man-in-the-middle attacks.

Man-in-the-middle attacks are possible whenever a protocol lacks PEER
ENTITY AUTHENTICATION.
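
The classic setting is an unauthenticated key exchange. In the toy
sketch below (the group parameters and secret values are chosen
purely for illustration; real deployments use large standardized
groups), an attacker who substitutes his own public value for each
party's ends up sharing a key with each victim:

```python
# Toy unauthenticated Diffie-Hellman exchange with a man in the
# middle. Parameters are illustrative only; the failure mode is the
# same with real group sizes.
P, G = 2147483647, 5                    # toy public group parameters

a_priv, a_pub = 1234, pow(G, 1234, P)   # Alice
b_priv, b_pub = 5678, pow(G, 5678, P)   # Bob
m_priv, m_pub = 9999, pow(G, 9999, P)   # the attacker in the middle

# The attacker intercepts each public value in transit and forwards
# his own instead, so each victim unknowingly keys with the attacker.
alice_key = pow(m_pub, a_priv, P)  # Alice thinks she shares this with Bob
bob_key = pow(m_pub, b_priv, P)    # Bob thinks he shares this with Alice

assert alice_key == pow(a_pub, m_priv, P)   # attacker's key with Alice
assert bob_key == pow(b_pub, m_priv, P)     # attacker's key with Bob
# The attacker can now read, modify, and re-encrypt all traffic while
# relaying it between the two victims.
```

Authenticating at least one of the exchanged public values is exactly
what defeats this substitution.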
For instance, if an attacker can hijack the 455 client TCP connection during the TCP handshake (perhaps by responding 456 to the client's SYN before the server does), then the attacker can 457 open another connection to the server and begin a man-in-the-middle 458 attack. It is also trivial to mount man-in-the-middle attacks on 459 local networks via ARP spoofing--the attacker forges an ARP with the 460 victim's IP address and his own MAC address. Tools to mount this sort 461 of attack are readily available. 462 Note that it is only necessary to authenticate one side of the 463 transaction in order to prevent man-in-the-middle attacks. In such a 464 situation the the peers can establish an association in which only 465 one peer is authenticated. In such a system, an attacker can initiate 466 an association posing as the unauthenticated peer but cannot transmit 467 or access data being sent on a legitimate connection. This is an 468 acceptable situation in contexts such as Web e-commerce where only 469 the server needs to be authenticated (or the client is independently 470 authenticated via some non-cryptographic mechanism such as a credit 471 card number). 473 4. Common Issues 475 Although each system's security requirements are unique, certain com- 476 mon requirements appear in a number of protocols. Often, when naive 477 protocol designers are faced with these requirements, they choose an 478 obvious but insecure solution even though better solutions are avail- 479 able. This section describes a number of issues seen in many 480 protocols and the common pieces of security technology that may be 481 useful in addressing them. 483 4.1. User Authentication 485 Essentially every system which wants to control access to its 486 resources needs some way to authenticate users. A nearly uncountable 487 number of such mechanisms have been designed for this purpose. The 488 next several sections describe some of these techniques. 490 4.1.1. 
Username/Password

The most common access control mechanism is simple USERNAME/PASSWORD.
The user provides a username and a reusable password to the host
which he wishes to use. This system is vulnerable to a simple passive
attack where the attacker sniffs the password off the wire and then
initiates a new session, presenting the password. This threat can be
mitigated by hosting the protocol over an encrypted connection such
as TLS or IPsec. Unprotected (plaintext) username/password systems
are not acceptable in IETF standards.

4.1.2. Challenge Response and One Time Passwords

Systems which desire greater security than USERNAME/PASSWORD often
employ either a ONE TIME PASSWORD [OTP] scheme or a CHALLENGE-
RESPONSE scheme. In a one time password scheme, the user is provided
with a list of passwords, which must be used in sequence, one time
each. (Often these passwords are generated from some secret key so
the user can simply compute the next password in the sequence.)
SecureID and DES Gold are variants of this scheme. In a challenge-
response scheme, the host and the user share some secret (which often
is represented as a password). In order to authenticate the user, the
host presents the user with a (randomly generated) challenge. The
user computes some function based on the challenge and the secret and
provides that to the host, which verifies it. Often this computation
is performed in a handheld device, such as a DES Gold card.

Both types of scheme provide protection against replay attack, but
are often still vulnerable to an OFFLINE KEYSEARCH ATTACK (a form of
passive attack): As previously mentioned, often the one-time password
or response is computed from a shared secret. If the attacker knows
the function being used, he can simply try all possible shared
secrets until he finds one that produces the right output.
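As a concrete illustration, the following minimal Python sketch shows
such a challenge-response exchange and the offline keysearch against
it. The actual function is protocol-specific; HMAC-SHA-256, the
secret, and the candidate list below are illustrative assumptions
only.

```python
import hashlib
import hmac
import os

def response(secret: bytes, challenge: bytes) -> bytes:
    # The user's computation over the challenge and shared secret.
    # HMAC-SHA-256 is an assumed stand-in for the unspecified function.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# Host side: issue a fresh random challenge, then verify the answer.
secret = b"s3cret"            # shared secret; here a guessable password
challenge = os.urandom(16)
answer = response(secret, challenge)

# Passive attacker: having sniffed (challenge, answer), try candidate
# secrets offline until one reproduces the observed answer.
candidates = [b"password", b"letmein", b"s3cret", b"hunter2"]
recovered = next(
    (c for c in candidates
     if hmac.compare_digest(response(c, challenge), answer)),
    None,
)
print(recovered)  # b's3cret' -- the shared secret falls to keysearch
```

Note that the replay protection comes entirely from the freshness of
the challenge; the keysearch succeeds because the function and the
sniffed transcript are all the attacker needs.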
This is made easier if the shared secret is a password, in which case
he can mount a DICTIONARY ATTACK--meaning that he tries a list of
common words (or strings) rather than just random strings.

These systems are also often vulnerable to an active attack. Unless
communication security is provided for the entire session, the
attacker can simply wait until authentication has been performed and
hijack the connection.

4.1.3. Certificates

A simple approach is for all users to have CERTIFICATES [PKIX] which
they then use to authenticate in some protocol-specific way, as in
[TLS] or [S/MIME]. A certificate is a signed credential binding an
entity's identity to its public key. The signer of a certificate is a
CERTIFICATE AUTHORITY (CA), whose certificate may itself be signed by
some superior CA. In order for this system to work, trust in one or
more CAs must be established in an out-of-band fashion. Such CAs are
referred to as TRUSTED ROOTS or ROOT CAs. The primary obstacle to
this approach in client-server type systems is that it requires
clients to have certificates, which can be a deployment problem.

4.1.4. Some Uncommon Systems

There are ways to do a better job than the schemes mentioned above,
but they typically don't add much security unless communications
security (at least message integrity) will be employed to secure the
connection, because otherwise the attacker can merely hijack the
connection after authentication has been performed. A number of
protocols ([EKE], [SPEKE], [SRP]) allow one to securely bootstrap a
user's password into a shared key which can be used as input to a
cryptographic protocol. One major obstacle to the deployment of these
protocols has been that their Intellectual Property status is
extremely unclear. Similarly, the user can authenticate using public
key certificates (e.g. S-HTTP client authentication).
Typically these methods are used as part of a more complete security
protocol.

4.1.5. Host Authentication

Host authentication presents a special problem. Quite commonly, the
addresses of services are presented using a DNS hostname, for
instance as a URL [URL]. When requesting such a service, one has to
ensure that the entity that one is talking to not only has a
certificate but that that certificate corresponds to the expected
identity of the server. The important thing to have is a secure
binding between the certificate and the expected hostname.

For instance, it is usually not acceptable for the certificate to
contain an identity in the form of an IP address if the request was
for a given hostname. This does not provide end-to-end security
because the hostname-IP mapping is not secure unless secure name
resolution [DNSSEC] is being used. This is a particular problem when
the hostname is presented at the application layer but the
authentication is performed at some lower layer.

4.2. Generic Security Frameworks

Providing security functionality in a protocol can be difficult. In
addition to the problem of choosing authentication and key
establishment mechanisms, one needs to integrate them into the
protocol. One response to this problem (embodied in IPsec and TLS) is
to create a lower-level security protocol and then insist that new
protocols be run over that protocol.

Another approach that has recently become popular is to design
generic application layer security frameworks. The idea is that you
design a protocol that allows you to negotiate various security
mechanisms in a pluggable fashion. Application protocol designers
then arrange to carry the security protocol PDUs in their application
protocol. Examples of such frameworks include GSS-API [GSS] and SASL
[SASL].

The generic framework approach has a number of problems.
First, it is highly susceptible to DOWNGRADE ATTACKS. In a downgrade
attack, an active attacker tampers with the negotiation in order to
force the parties to negotiate weaker protection than they otherwise
would. It's possible to include an integrity check after the
negotiation and key establishment have both completed, but the
strength of this integrity check is necessarily limited to the
weakest common algorithm. This problem exists with any negotiation
approach, but generic frameworks exacerbate it by encouraging the
application protocol author to just specify the framework rather than
think hard about the appropriate underlying mechanisms, particularly
since the mechanisms can vary widely in the degree of security
offered.

Another problem is that it's not always obvious how the various
security features in the framework interact with the application
layer protocol. For instance, SASL can be used merely as an
authentication framework--in which case the SASL exchange occurs but
the rest of the connection is unprotected--but can also negotiate TLS
as a mechanism. Knowing under which circumstances TLS is optional and
under which it is required requires thinking about the threat model.

In general, authentication frameworks are most useful in situations
where users have a wide variety of credentials that must all be
accommodated by some service. When the security requirements of a
system can be clearly identified and only a few forms of
authentication are used, choosing a single security mechanism leads
to greater simplicity and predictability. In situations where a
framework is to be used, designers SHOULD carefully examine the
framework's options and specify only the mechanisms that are
appropriate for their particular threat model. If a framework is
necessary, designers SHOULD choose one of the established ones
instead of designing their own.

4.3.
Non-repudiation

The naive approach to non-repudiation is simply to use public-key
digital signatures over the content. The party who wishes to be bound
(the SIGNING PARTY) digitally signs the message in question. The
counterparty (the RELYING PARTY) can later point to the digital
signature as proof that the signing party at one point agreed to the
disputed message. Unfortunately, this approach is insufficient.

The easiest way for the signing party to repudiate the message is by
claiming that his private key has been compromised and that some
attacker (though not necessarily the relying party) signed the
disputed message. In order to defend against this attack the relying
party needs to demonstrate that the signing party's key had not been
compromised at the time of the signature. This requires substantial
infrastructure, including archival storage of certificate revocation
information and timestamp servers to establish the time that the
message was signed.

Additionally, the relying party might attempt to trick the signing
party into signing one message while thinking he's signing another.
This problem is particularly severe when the relying party controls
the infrastructure that the signing party uses for signing, such as
in kiosk situations. In many such situations the signing party's key
is kept on a smartcard but the message to be signed is displayed by
the relying party.

All of these complications make non-repudiation a difficult service
to deploy in practice.

4.4. Authorization vs. Authentication

AUTHORIZATION is the process by which one determines whether an
authenticated party has permission to access a particular resource or
service. Although the two are tightly bound, it is important to
realize that authentication and authorization are two separate
mechanisms.
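The distinction can be sketched in a few lines of Python. The user
names, password table, and ACL below are hypothetical, and a real
system would of course store salted password hashes rather than
plaintext:

```python
# Hypothetical identity and permission tables (illustration only).
PASSWORDS = {"alice": "wonderland", "bob": "builder"}
ACL = {"/admin": {"alice"}, "/public": {"alice", "bob"}}

def authenticate(user: str, password: str) -> bool:
    # Authentication: establishes WHO the party is.
    return PASSWORDS.get(user) == password

def authorize(user: str, resource: str) -> bool:
    # Authorization: establishes WHAT that party may do.
    return user in ACL.get(resource, set())

# Bob authenticates successfully, yet is still not authorized to
# touch /admin: identity alone does not confer permission.
print(authenticate("bob", "builder"))   # True
print(authorize("bob", "/admin"))       # False
```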
Perhaps because of this tight coupling, authentication is sometimes
mistakenly thought to imply authorization. Authentication simply
identifies a party; authorization defines whether they can perform a
certain action.

Authorization necessarily relies on authentication, but
authentication alone does not imply authorization. Rather, before
granting permission to perform an action, the authorization mechanism
must be consulted to determine whether that action is permitted.

4.4.1. Access Control Lists

One common form of authorization mechanism is an access control list
(ACL) that lists users that are permitted access to a resource. Since
assigning individual authorization permissions to each resource is
tedious, resources are often hierarchically arranged such that the
parent resource's ACL is inherited by child resources. This allows
administrators to set top level policies and override them when
necessary.

4.4.2. Certificate Based Systems

While the distinction between authentication and authorization is
intuitive when using simple authentication mechanisms such as
username and password (i.e., everyone understands the difference
between the administrator account and a user account), with more
complex authentication mechanisms the distinction is sometimes lost.

With certificates, for instance, presenting a valid signature does
not imply authorization. The signature must be backed by a
certificate chain that contains a trusted root, and that root must be
trusted in the given context. For instance, users who possess
certificates issued by the Acme MIS CA may have different web access
privileges than users who possess certificates issued by the Acme
Accounting CA, even though both of these CAs are "trusted" by the
Acme web server.

Mechanisms for enforcing these more complicated properties have not
yet been completely explored.
One approach is simply to attach policies to ACLs describing what
sorts of certificates are trusted. Another approach is to carry that
information with the certificate, either as a certificate
extension/attribute [PKIX, SPKI] or as a separate "Attribute
Certificate".

4.5. Providing Traffic Security

Securely designed protocols should provide some mechanism for
securing (meaning integrity protecting, authenticating, and possibly
encrypting) all sensitive traffic. One approach is to secure the
protocol itself, as in [DNSSEC], [S/MIME] or [S-HTTP]. Although this
provides security which is most fitted to the protocol, it also
requires considerable effort to get right.

Many protocols can be adequately secured using one of the available
channel security systems. We'll discuss the two most common, IPsec
[AH, ESP] and [TLS].

4.5.1. IPsec

The IPsec protocols (specifically, AH and ESP) can provide
transmission security for all traffic between two hosts. The IPsec
protocols support varying granularities of user identification,
including for example "IP Subnet", "IP Address", "Fully Qualified
Domain Name", and individual user ("Mailbox name"). These varying
levels of identification are employed as inputs to access control
facilities that are an intrinsic part of IPsec. However, a given
IPsec implementation might not support all identity types. In
particular, security gateways may not provide user-to-user
authentication or have mechanisms to provide that authentication
information to applications.

When AH or ESP is used, the application programmer might not need to
do anything (if AH or ESP has been enabled system-wide) or might need
to make specific software changes (e.g. adding specific setsockopt()
calls) -- depending on the AH or ESP implementation being used.
Unfortunately, APIs for controlling IPsec implementations are not yet
standardized.

The primary obstacle to using IPsec to secure other protocols is
deployment. The major use of IPsec at present is for VPN
applications, especially for remote network access. Without extremely
tight coordination between security administrators and application
developers, VPN usage is not well suited to providing security
services for individual applications since it is difficult for such
applications to determine what security services have in fact been
provided.

IPsec deployment in host-to-host environments has been slow. Unlike
application security systems such as TLS, adding IPsec to a non-IPsec
system generally involves changing the operating system, either by
tampering with the kernel or installing new drivers. This is a
substantially greater undertaking than simply installing a new
application. However, recent versions of a number of commodity
operating systems include IPsec stacks, so deployment is becoming
easier.

In environments where IPsec is sure to be available, it represents a
viable option for protecting application communications traffic. If
the traffic to be protected is UDP, IPsec and application-specific
object security are the only options. However, designers MUST NOT
assume that IPsec will be available. A security policy for a generic
application layer protocol SHOULD NOT simply state that IPsec must be
used, unless there is some reason to believe that IPsec will be
available in the intended deployment environment. In environments
where IPsec may not be available and the traffic is solely TCP, TLS
is the method of choice, since the application developer can easily
ensure its presence by including a TLS implementation in his package.

4.5.2. SSL/TLS

The currently most common approach is to use SSL or its successor
TLS.
They provide channel security for a TCP connection at the application
level. That is, they run over TCP. SSL implementations typically
provide a Berkeley Sockets-like interface for easy programming. The
primary issue when designing a protocol solution around TLS is to
differentiate between connections protected using TLS and those which
are not.

The two primary approaches used are to have a separate well-known
port for TLS connections (e.g. the HTTP over TLS port is 443)
[HTTPTLS] or to have a mechanism for negotiating upward from the base
protocol to TLS as in [UPGRADE] or [STARTTLS]. When an upward
negotiation strategy is used, care must be taken to ensure that an
attacker cannot force a clear connection when both parties wish to
use TLS.

Note that TLS depends upon a reliable protocol such as TCP or SCTP.
This produces two notable difficulties. First, it cannot be used to
secure datagram protocols that use UDP. Second, TLS is susceptible to
IP layer attacks that IPsec is not. Typically, these attacks take
some form of denial of service or connection assassination. For
instance, an attacker might forge a TCP RST to shut down SSL
connections. TLS has mechanisms to detect truncation attacks but
these merely allow the victim to know he is being attacked and do not
provide connection survivability in the face of such attacks. By
contrast, if IPsec were being used, such a forged RST could be
rejected without affecting the TCP connection.

4.5.3. Remote Login

In some special cases it may be worth providing channel-level
security directly in the application rather than using IPsec or
SSL/TLS. One such case is remote terminal security. Characters are
typically delivered from client to server one character at a time.
Since SSL/TLS and AH/ESP authenticate and encrypt every packet, this
can mean a data expansion of 20-fold.
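The rough arithmetic behind a figure of this magnitude can be
sketched as follows. The record header, MAC, and block sizes below
are illustrative assumptions (an HMAC-SHA-1 tag and an 8-byte block
cipher), not taken from any particular cipher suite:

```python
import math

payload = 1            # a single keystroke per segment, telnet-style
record_header = 5      # assumed per-record header bytes
mac = 20               # assumed HMAC-SHA-1 tag length
block = 8              # assumed block size; padding rounds up to it

# Ciphertext = payload + MAC + at least one padding-length byte,
# rounded up to a whole number of cipher blocks.
ciphertext = math.ceil((payload + mac + 1) / block) * block
total = record_header + ciphertext
print(total, total / payload)  # 29 29.0 -- the same order as 20-fold
```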
The telnet encryption option [ENCOPT] prevents this expansion by
forgoing message integrity.

When using remote terminal service, it's often desirable to securely
perform other sorts of communications services. In addition to
providing remote login, SSH [SSH] also provides secure port
forwarding for arbitrary TCP ports, thus allowing users to run
arbitrary TCP-based applications over the SSH channel. Note that SSH
port forwarding can be a security issue if it is used to circumvent a
firewall and improperly expose insecure internal applications to the
outside world.

4.6. Denial of Service Attacks and Countermeasures

Denial of service attacks are all too frequently viewed as a fact of
life. One problem is that an attacker can often choose from one of
many denial of service attacks to inflict upon a victim, and because
most of these attacks cannot be thwarted, common wisdom frequently
assumes that there is no point protecting against one kind of denial
of service attack when there are many other denial of service attacks
that are possible but that cannot be prevented.

However, not all denial of service attacks are equal and, more
importantly, it is possible to design protocols such that denial of
service attacks are made more difficult if not impractical. Recent
SYN flood attacks [TCPSYN] demonstrate both of these properties: SYN
flood attacks are so easy, anonymous, and effective that they are
more attractive to attackers than other attacks, and it is the design
of TCP that enables them.

Because complete DoS protection is so difficult, security against DoS
must be dealt with pragmatically. In particular, some attacks which
would be desirable to defend against cannot be defended against
economically. The goal should be to manage risk by defending against
attacks with sufficiently high ratios of severity to cost of defense.
Both severity of attack and cost of defense change as technology
changes and therefore so does the set of attacks which should be
defended against.

Authors of internet standards MUST describe which denial of service
attacks their protocol is susceptible to. This description MUST
include the reasons it was either unreasonable or out of scope to
attempt to avoid these denial of service attacks.

4.6.1. Blind Denial of Service

BLIND denial of service attacks are particularly pernicious. With a
blind attack the attacker has a significant advantage. If the
attacker must be able to receive traffic from the victim then he must
either subvert the routing fabric or use his own IP address. Either
provides an opportunity for the victim to track the attacker and/or
filter out his traffic. With a blind attack the attacker can use
forged IP addresses, making it extremely difficult for the victim to
filter out his packets. The TCP SYN flood attack is an example of a
blind attack. Designers should make every attempt possible to prevent
blind denial of service attacks.

4.6.2. Distributed Denial of Service

Even more dangerous are DISTRIBUTED denial of service attacks (DDoS)
[DDOS]. In a DDoS attack, the attacker arranges for a number of
machines to attack the target machine simultaneously. Usually this is
accomplished by infecting a large number of machines with a program
that allows remote initiation of attacks. The machines actually
performing the attack are called ZOMBIEs and are likely owned by
unsuspecting third parties in an entirely different location from the
true attacker. DDoS attacks can be very hard to counter because the
zombies often appear to be making legitimate protocol requests and
simply crowd out the real users.
DDoS attacks can be difficult to thwart, but protocol designers are
expected to be cognizant of these forms of attack while designing
protocols.

4.6.3. Avoiding Denial of Service

There are two common approaches to making denial of service attacks
more difficult:

4.6.3.1. Make your attacker do more work than you do

If an attacker consumes more of his resources than yours when
launching an attack, attackers with fewer resources than you will be
unable to launch effective attacks. One common technique is to
require that the attacker perform a time-intensive operation, such as
a cryptographic operation. Note that an attacker can still mount a
denial of service attack if he can muster sufficient CPU power. For
instance, this technique would not stop the distributed attacks
described in [TCPSYN].

4.6.3.2. Make your attacker prove they can receive data from you

A blind attack can be subverted by forcing the attacker to prove that
he can receive data from the victim. A common technique is to require
that the attacker reply using information that was gained earlier in
the message exchange. If this countermeasure is used, the attacker
must either use his own address (making him easy to track) or forge
an address which will be routed back along a path that traverses the
host from which the attack is being launched.

Hosts on small subnets are thus useless to the attacker (at least in
the context of a spoofing attack) because the attack can be traced
back to a subnet (which should be sufficient for locating the
attacker) so that anti-attack measures can be put into place (for
instance, a boundary router can be configured to drop all traffic
from that subnet).

4.6.4.
Example: TCP SYN Floods

TCP/IP is vulnerable to SYN flood attacks (which are described in
section 3.3.2) because of the design of the 3-way handshake. First,
an attacker can force a victim to consume significant resources (in
this case, memory) by sending a single packet. Second, because the
attacker can perform this action without ever having received data
from the victim, the attack can be performed anonymously (and
therefore using a large number of forged source addresses).

4.6.5. Example: Photuris

[PHOTURIS] specifies an anti-clogging mechanism that prevents attacks
on Photuris that resemble the SYN flood attack. Photuris employs a
time-variant secret to generate a "cookie" which is returned to the
attacker. This cookie must be returned in subsequent messages for the
exchange to progress. The interesting feature is that this cookie can
be re-generated by the victim later in the exchange, and thus no
state need be retained by the victim until after the attacker has
proven that he can receive packets from the victim.

4.7. Object vs. Channel Security

It's useful to make the conceptual distinction between object
security and channel security. Object security refers to security
measures which apply to entire data objects. Channel security
measures provide a secure channel over which objects may be carried
transparently but the channel has no special knowledge about object
boundaries.

Consider the case of an email message. When it's carried over an
IPsec or TLS secured connection, the message is protected during
transmission. However, it is unprotected in the receiver's mailbox,
and in intermediate spool files along the way. Moreover, since mail
servers generally run as a daemon, not a user, authentication of
messages generally merely means authentication of the daemon, not the
user.
Finally, since mail transport is hop-by-hop, even if the user
authenticates to the first hop relay the authentication can't be
safely verified by the receiver.

By contrast, when an email message is protected with S/MIME or
OpenPGP, the entire message is encrypted and integrity protected
until it is examined and decrypted by the recipient. It also provides
strong authentication of the actual sender, as opposed to the machine
the message came from. This is object security. Moreover, the
receiver can prove the signed message's authenticity to a third
party.

Note that the difference between object and channel security is a
matter of perspective. Object security at one layer of the protocol
stack often looks like channel security at the next layer up. So,
from the perspective of the IP layer, each packet looks like an
individually secured object. But from the perspective of a web
client, IPsec just provides a secure channel.

The distinction isn't always clear-cut. For example, S-HTTP provides
object level security for a single HTTP transaction, but a web page
typically consists of multiple HTTP transactions (the base page and
numerous inline images). Thus, from the perspective of the total web
page, this looks rather more like channel security. Object security
for a web page would consist of security for the transitive closure
of the page and all its embedded content as a single unit.

5. Writing Security Considerations Sections

While it is not a requirement that any given protocol or system be
immune to all forms of attack, it is still necessary for authors to
consider them. Part of the purpose of the Security Considerations
section is to explain what attacks are out of scope and what
countermeasures can be applied to defend against them.

There should be a clear description of the kinds of threats on the
described protocol or technology.
This should be approached as an effort to perform "due diligence" in
describing all known or foreseeable risks and threats to potential
implementers and users.

Authors MUST describe

1. which attacks are out of scope (and why!)
2. which attacks are in-scope
   2.1 and the protocol is susceptible to
   2.2 and the protocol protects against

At least the following forms of attack MUST be considered:
eavesdropping, replay, message insertion, deletion, modification, and
man-in-the-middle. Potential denial of service attacks MUST be
identified as well. If the protocol incorporates cryptographic
protection mechanisms, it should be clearly indicated which portions
of the data are protected and what the protections are (i.e.
integrity only, confidentiality, and/or endpoint authentication,
etc.). Some indication should also be given to what sorts of attacks
the cryptographic protection is susceptible. Data which should be
held secret (keying material, random seeds, etc.) should be clearly
labeled.

If the technology involves authentication, particularly user-host
authentication, the security of the authentication method MUST be
clearly specified. That is, authors MUST document the assumptions
that the security of this authentication method is predicated upon.
For instance, in the case of the UNIX username/password login method,
a statement to the effect of:

   Authentication in the system is secure only to the extent that it
   is difficult to guess or obtain an ASCII password that is a
   maximum of 8 characters long. These passwords can be obtained by
   sniffing telnet sessions or by running the 'crack' program using
   the contents of the /etc/passwd file.
   Attempts to protect against on-line password guessing by (1)
   disconnecting after several unsuccessful login attempts and (2)
   waiting between successive password prompts are effective only to
   the extent that attackers are impatient.

   Because the /etc/passwd file maps usernames to user ids, groups,
   etc. it must be world readable. In order to permit this usage but
   make running crack more difficult, the file is often split into
   /etc/passwd and a 'shadow' password file. The shadow file is not
   world readable and contains the encrypted password. The regular
   /etc/passwd file contains a dummy password in its place.

It is insufficient to simply state that one's protocol should be run
over some lower layer security protocol. If a system relies upon
lower layer security services for security, the protections those
services are expected to provide MUST be clearly specified. In
addition, the resultant properties of the combined system need to be
specified.

Note: In general, the IESG will not approve standards track protocols
which do not provide for strong authentication, either internal to
the protocol or through tight binding to a lower layer security
protocol.

The threat environment addressed by the Security Considerations
section MUST at a minimum include deployment across the global
Internet across multiple administrative boundaries without assuming
that firewalls are in place, even if only to provide justification
for why such consideration is out of scope for the protocol. It is
not acceptable to only discuss threats applicable to LANs and ignore
the broader threat environment. All IETF standards-track protocols
are considered likely to have deployment in the global Internet. In
some cases, there might be an Applicability Statement discouraging
use of a technology or protocol in a particular environment.
Nonetheless, the security issues of broader deployment should be
discussed in the document.

There should be a clear description of the residual risk to the user
or operator of that protocol after threat mitigation has been
deployed. Such risks might arise from compromise in a related
protocol (e.g. IPsec is useless if key management has been
compromised), from incorrect implementation, compromise of the
security technology used for risk reduction (e.g. a cipher with a
40-bit key), or there might be risks that are not addressed by the
protocol specification (e.g. denial of service attacks on an
underlying link protocol).

There should also be some discussion of potential security risks
arising from potential misapplications of the protocol or technology
described in the RFC. This might be coupled with an Applicability
Statement for that RFC.

6. Examples

This section consists of some example security considerations
sections, intended to give the reader a flavor of what's intended by
this document.

The first example is a 'retrospective' example, applying the criteria
of this document to a historical document, RFC-821. The second
example is a good security considerations section clipped from a
current protocol.

6.1. SMTP

When RFC-821 was written, Security Considerations sections were not
required in RFCs, and none is contained in that document. Had that
document been written today, the Security Considerations section
might look something like this:

6.1.1. SMTP Security Considerations

SMTP as-is provides no security precautions of any kind. Protection
against all of the attacks we are about to describe must be provided
by a different protocol layer.

A passive attack is sufficient to recover message text. No endpoint
authentication is provided by the protocol.
Sender spoofing is trivial, and therefore forging email messages is
trivial. Some implementations do add header lines with hostnames
derived through reverse name resolution (which is only secure to the
extent that it is difficult to spoof DNS -- not very), although these
header lines are normally not displayed to users. Receiver spoofing
is also fairly straightforward, either using TCP connection hijacking
or DNS spoofing. Moreover, since email messages often pass through
SMTP gateways, all intermediate gateways must be trusted, a condition
nearly impossible on the global Internet.

Several approaches are available for alleviating these threats. In
order of increasingly high level in the protocol stack, we have:

     SMTP over IPSEC
     SMTP/TLS
     S/MIME and PGP/MIME

6.1.1.1. SMTP over IPSEC

An SMTP connection run over IPSEC can provide confidentiality for the
message between the sender and the first hop SMTP gateway, or between
any pair of connected SMTP gateways. That is to say, it provides
channel security for the SMTP connections. In a situation where the
message goes directly from the client to the receiver's gateway, this
may provide substantial security (though the receiver must still
trust the gateway). Protection is provided against replay attacks,
since the data itself is protected and the packets cannot be
replayed.

Endpoint identification is a problem, however, unless the receiver's
address can be directly cryptographically authenticated. No sender
identification is available, since the sender's machine is
authenticated, not the sender himself. Furthermore, the identity of
the sender simply appears in the From header of the message, so it is
easily spoofable by the sender. Finally, unless the security policy
is set extremely strictly, there is also an active downgrade to
cleartext attack.
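
The active downgrade just described can be made concrete: a client
that is willing to fall back to cleartext can be forced there by an
on-path attacker who strips the server's TLS offer. The following is
a minimal Python sketch of a client-side policy that refuses such a
fallback; the host and addresses are hypothetical, and this is an
illustration added by the editor, not part of the original example:

```python
import smtplib
import ssl

def send_requiring_tls(host, sender, recipient, body):
    """Send mail, but refuse to proceed without STARTTLS.

    By aborting when the server does not advertise STARTTLS, the
    client resists an active downgrade-to-cleartext attack in which
    an on-path attacker strips the extension from the EHLO response.
    """
    with smtplib.SMTP(host) as smtp:
        smtp.ehlo()
        if not smtp.has_extn("starttls"):
            raise RuntimeError("no STARTTLS offered; refusing cleartext")
        # Verify the server certificate; an unverified context would
        # still admit a man-in-the-middle.
        smtp.starttls(context=ssl.create_default_context())
        smtp.ehlo()  # re-issue EHLO after TLS, as STARTTLS requires
        smtp.sendmail(sender, recipient, body)

# Usage (hypothetical host and mailboxes):
#   send_requiring_tls("mail.example.org", "a@example.org",
#                      "b@example.org", "Subject: hi\r\n\r\ntest")
```

Note that such a policy only helps when the client can assume the
server supports TLS; otherwise refusing to send is itself a denial of
service.
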
6.1.1.2. SMTP/TLS

SMTP can be combined with TLS as described in [STARTTLS]. This
provides similar protection to that provided when using IPSEC. Since
TLS certificates typically contain the server's host name, recipient
authentication may be slightly more obvious, but is still susceptible
to DNS spoofing attacks. Notably, common implementations of TLS
contain a US exportable (and hence low security) mode. Applications
desiring high security should ensure that this mode is disabled.
Protection is provided against replay attacks, since the data itself
is protected and the packets cannot be replayed. [note: The Security
Considerations section of the SMTP over TLS draft is quite good and
bears reading as an example of how to do things.]

6.1.1.3. S/MIME and PGP/MIME

S/MIME and PGP/MIME are both message oriented security protocols.
They provide object security for individual messages. With various
settings, sender and recipient authentication and confidentiality may
be provided. More importantly, the identification is not of the
sending and receiving machines, but rather of the sender and
recipient themselves. (Or, at least, of cryptographic keys
corresponding to the sender and recipient.) Consequently, end-to-end
security may be obtained. Note, however, that no protection is
provided against replay attacks.

6.1.1.4. Denial of Service

None of these security measures provides any real protection against
denial of service. SMTP connections can easily be used to tie up
system resources in a number of ways, including excessive port
consumption, excessive disk usage (email is typically delivered to
disk files), and excessive memory consumption (sendmail, for
instance, is fairly large, and typically forks a new process to deal
with each message.)

6.1.1.5. Inappropriate Usage

In particular, there is no protection provided against unsolicited
mass email (aka SPAM).

SMTP also includes several commands which may be used by attackers to
explore the machine on which the SMTP server runs. The VRFY command
permits an attacker to convert user-names to mailbox name and often
real name. This is often useful in mounting a password guessing
attack, as many users use their name as their password. EXPN permits
an attacker to expand an email list to the names of the subscribers.
This may be used in order to generate a list of legitimate users in
order to attack their accounts, as well as to build mailing lists for
future SPAM. Administrators may choose to disable these commands.

6.2. VRRP

The second example is from VRRP, the Virtual Router Redundancy
Protocol ([VRRP]). We reproduce here the Security Considerations
section from that document (with new section numbers). Our comments
are indented and prefaced with 'NOTE:'.

6.2.1. Security Considerations

VRRP is designed for a range of internetworking environments that may
employ different security policies. The protocol includes several
authentication methods ranging from no authentication, simple clear
text passwords, and strong authentication using IP Authentication
with MD5 HMAC. The details on each approach including possible
attacks and recommended environments follows.

Independent of any authentication type VRRP includes a mechanism
(setting TTL=255, checking on receipt) that protects against VRRP
packets being injected from another remote network. This limits most
vulnerabilities to local attacks.

NOTE: The security measures discussed in the following sections only
provide various kinds of authentication. No confidentiality is
provided at all. This should be explicitly described as outside the
scope.

6.2.1.1. No Authentication

The use of this authentication type means that VRRP protocol
exchanges are not authenticated. This type of authentication SHOULD
only be used in environments where there is minimal security risk and
little chance for configuration errors (e.g., two VRRP routers on a
LAN).

6.2.1.2. Simple Text Password

The use of this authentication type means that VRRP protocol
exchanges are authenticated by a simple clear text password.

This type of authentication is useful to protect against accidental
misconfiguration of routers on a LAN. It protects against routers
inadvertently backing up another router. A new router must first be
configured with the correct password before it can run VRRP with
another router. This type of authentication does not protect against
hostile attacks where the password can be learned by a node snooping
VRRP packets on the LAN. The Simple Text Authentication combined with
the TTL check makes it difficult for a VRRP packet to be sent from
another LAN to disrupt VRRP operation.

This type of authentication is RECOMMENDED when there is minimal risk
of nodes on a LAN actively disrupting VRRP operation. If this type of
authentication is used, the user should be aware that this clear text
password is sent frequently, and therefore should not be the same as
any security significant password.

NOTE: This section should be clearer. The basic point is that no
authentication and Simple Text are only useful for a very limited
threat model, namely that none of the nodes on the local LAN are
hostile. The TTL check prevents hostile nodes off-LAN from posing as
valid nodes, but nothing stops hostile nodes on-LAN from
impersonating authorized nodes. This is not a particularly realistic
threat model in many situations.
In particular, it's extremely brittle: the compromise of any node on
the LAN allows reconfiguration of the VRRP nodes.

6.2.1.3. IP Authentication Header

The use of this authentication type means the VRRP protocol exchanges
are authenticated using the mechanisms defined by the IP
Authentication Header [AH] using [HMAC]. This provides strong
protection against configuration errors, replay attacks, and packet
corruption/modification.

This type of authentication is RECOMMENDED when there is limited
control over the administration of nodes on a LAN. While this type of
authentication does protect the operation of VRRP, there are other
types of attacks that may be employed on shared media links (e.g.,
generation of bogus ARP replies) which are independent from VRRP and
are not protected.

NOTE: It's a mistake to have AH be RECOMMENDED in this context. Since
AH is the only mechanism that protects VRRP against attack from other
nodes on the same LAN, it should be a MUST for cases where there are
untrusted nodes on the same network. In any case, AH should be a MUST
implement. Additionally, there should be a required algorithm
(HMAC-SHA1).

NOTE: Specifically, although securing VRRP prevents unauthorized
machines from taking part in the election protocol, it does not
protect hosts on the network from being deceived. For example, a
gratuitous ARP reply from what purports to be the virtual router's IP
address can redirect traffic to an unauthorized machine. Similarly,
individual connections can be diverted by means of forged ICMP
Redirect messages.

Acknowledgments

This document is heavily based on a note written by Ran Atkinson in
1997. That note was written after the IAB Security Workshop held in
early 1997, based on input from everyone at that workshop.
Some of the specific text above was taken from Ran's original
document, and some of that text was taken from an email message
written by Fred Baker. The other primary source for this document is
specific comments received from Steve Bellovin. Early review of this
document was done by Lisa Dusseault and Mark Schertler.

References

[AH]       Kent, S., and Atkinson, R., "IP Authentication Header",
           RFC 2402, November 1998.

[DDOS]     "Denial-Of-Service Tools", CERT Advisory CA-1999-17,
           CERT, 28 December 1999,
           http://www.cert.org/advisories/CA-1999-17.html

[DNSSEC]   Eastlake, D., "Domain Name System Security Extensions",
           RFC 2535, March 1999.

[EKE]      Bellovin, S., Merritt, M., "Encrypted Key Exchange:
           Password-based protocols secure against dictionary
           attacks", Proceedings of the IEEE Symposium on Research
           in Security and Privacy, May 1992.

[ENCOPT]   Ts'o, T., "Telnet Data Encryption Option", RFC 2946,
           September 2000.

[ESP]      Kent, S., and Atkinson, R., "IP Encapsulating Security
           Payload (ESP)", RFC 2406, November 1998.

[GSS]      Linn, J., "Generic Security Services Application Program
           Interface Version 2, Update 1", RFC 2743, January 2000.

[HTTPTLS]  Rescorla, E., "HTTP over TLS", RFC 2818, May 2000.

[HMAC]     Krawczyk, H., Bellare, M., Canetti, R., "HMAC:
           Keyed-Hashing for Message Authentication", RFC 2104,
           February 1997.

[INTAUTH]  Haller, N., Atkinson, R., "On Internet Authentication",
           RFC 1704, October 1994.

[IPSPPROB] Bellovin, S. M., "Problem Areas for the IP Security
           Protocols", Proceedings of the Sixth Usenix UNIX Security
           Symposium, July 1996.

[KLEIN]    Klein, D.V., "Foiling the Cracker: A Survey of, and
           Improvements to, Password Security", 1990.

[NNTP]     Kantor, B., and Lapsley, P., "Network News Transfer
           Protocol", RFC 977, February 1986.
[OTP]      Haller, N., Metz, C., Nesser, P., Straw, M., "A One-Time
           Password System", RFC 2289, February 1998.

[PHOTURIS] Karn, P., and Simpson, W., "Photuris: Session-Key
           Management Protocol", RFC 2522, March 1999.

[PKIX]     Housley, R., Ford, W., Polk, W., Solo, D., "Internet
           X.509 Public Key Infrastructure Certificate and CRL
           Profile", RFC 2459, January 1999.

[POP]      Myers, J., and Rose, M., "Post Office Protocol - Version
           3", RFC 1939, May 1996.

[RFC-2223] Postel, J., and Reynolds, J., "Instructions to RFC
           Authors", RFC 2223, October 1997.

[SASL]     Myers, J., "Simple Authentication and Security Layer
           (SASL)", RFC 2222, October 1997.

[SEQNUM]   Morris, R.T., "A Weakness in the 4.2 BSD UNIX TCP/IP
           Software", AT&T Bell Laboratories, CSTR 117, 1985.

[SPKI]     Ellison, C., Frantz, B., Lampson, B., Rivest, R., Thomas,
           B., Ylonen, T., "SPKI Certificate Theory", RFC 2693,
           September 1999.

[SPEKE]    Jablon, D., "Strong Password-Only Authenticated Key
           Exchange", Computer Communication Review, ACM SIGCOMM,
           vol. 26, no. 5, pp. 5-26, October 1996.

[SRP]      Wu, T., "The Secure Remote Password Protocol", ISOC NDSS
           Symposium, 1998.

[SSH]      Ylonen, T., "SSH - Secure Login Connections Over the
           Internet", 6th USENIX Security Symposium, pp. 37-42, July
           1996.

[STARTTLS] Hoffman, P., "SMTP Service Extension for Secure SMTP over
           TLS", RFC 2487, January 1998.

[S-HTTP]   Rescorla, E., and Schiffman, A., "The Secure HyperText
           Transfer Protocol", RFC 2660, August 1999.

[S/MIME]   Ramsdell, B., Ed., "S/MIME Version 3 Message
           Specification", RFC 2633, June 1999.

[TELNET]   Postel, J., and Reynolds, J., "Telnet Protocol
           Specification", RFC 854, May 1983.

[TLS]      Dierks, T., and Allen, C., "The TLS Protocol Version
           1.0", RFC 2246, January 1999.

[TCPSYN]   "TCP SYN Flooding and IP Spoofing Attacks", CERT Advisory
           CA-1996-21, CERT, 19 September 1996,
           http://www.cert.org/advisories/CA-1996-21.html

[UPGRADE]  Khare, R., Lawrence, S., "Upgrading to TLS Within
           HTTP/1.1", RFC 2817, May 2000.

[URL]      Berners-Lee, T., Masinter, L., McCahill, M., "Uniform
           Resource Locators (URL)", RFC 1738, December 1994.

[VRRP]     Knight, S., Weaver, D., Whipple, D., Hinden, R., Mitzel,
           D., Hunt, P., Higginson, P., Shand, M., Lindem, A.,
           "Virtual Router Redundancy Protocol", RFC 2338, April
           1998.

[WEP]      Borisov, N., Goldberg, I., Wagner, D., "Intercepting
           Mobile Communications: The Insecurity of 802.11",
           http://www.isaac.cs.berkeley.edu/isaac/wep-draft.pdf

Security Considerations

This entire document is about security considerations.

Authors' Addresses

Eric Rescorla
RTFM, Inc.
2439 Alvin Drive
Mountain View, CA 94043
Phone: (650)-320-8549

Brian Korver
Xythos Software
77 Maiden Lane, Suite 200
San Francisco, CA, USA
Phone: (415)-248-3800

Table of Contents

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . .  2
2. The Goals of Security  . . . . . . . . . . . . . . . . . . . . .  2
2.1. Communication Security . . . . . . . . . . . . . . . . . . . .  2
2.1.1. Confidentiality  . . . . . . . . . . . . . . . . . . . . . .  2
2.1.2. Data Integrity . . . . . . . . . . . . . . . . . . . . . . .  2
2.1.3. Peer Entity Authentication . . . . . . . . . . . . . . . . .  3
2.2. Non-Repudiation  . . . . . . . . . . . . . . . . . . . . . . .  3
2.3. Systems Security . . . . . . . . . . . . . . . . . . . . . . .  4
2.3.1. Unauthorized Usage . . . . . . . . . . . . . . . . . . . . .  4
2.3.2. Inappropriate Usage  . . . . . . . . . . . . . . . . . . . .  4
2.3.3. Denial of Service  . . . . . . . . . . . . . . . . . . . . .  4
3. The Internet Threat Model  . . . . . . . . . . . . . . . . . . .  4
3.1. Limited Threat Models  . . . . . . . . . . . . . . . . . . . .  5
3.2. Passive Attacks  . . . . . . . . . . . . . . . . . . . . . . .  6
3.2.1. Confidentiality Violations . . . . . . . . . . . . . . . . .  6
3.2.2. Password Sniffing  . . . . . . . . . . . . . . . . . . . . .  6
3.2.3. Offline Cryptographic Attacks  . . . . . . . . . . . . . . .  7
3.3. Active Attacks . . . . . . . . . . . . . . . . . . . . . . . .  7
3.3.1. Replay Attacks . . . . . . . . . . . . . . . . . . . . . . .  8
3.3.2. Message Insertion  . . . . . . . . . . . . . . . . . . . . .  8
3.3.3. Message Deletion . . . . . . . . . . . . . . . . . . . . . .  9
3.3.4. Message Modification . . . . . . . . . . . . . . . . . . . .  9
3.3.5. Man-In-The-Middle  . . . . . . . . . . . . . . . . . . . . . 10
4. Common Issues  . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.1. User Authentication  . . . . . . . . . . . . . . . . . . . . . 11
4.1.1. Username/Password  . . . . . . . . . . . . . . . . . . . . . 11
4.1.2. Challenge Response and One Time Passwords  . . . . . . . . . 11
4.1.3. Certificates . . . . . . . . . . . . . . . . . . . . . . . . 12
4.1.4. Some Uncommon Systems  . . . . . . . . . . . . . . . . . . . 12
4.1.5. Host Authentication  . . . . . . . . . . . . . . . . . . . . 12
4.2. Generic Security Frameworks  . . . . . . . . . . . . . . . . . 13
4.3. Non-repudiation  . . . . . . . . . . . . . . . . . . . . . . . 14
4.4. Authorization vs. Authentication . . . . . . . . . . . . . . . 14
4.4.1. Access Control Lists . . . . . . . . . . . . . . . . . . . . 15
4.4.2. Certificate Based Systems  . . . . . . . . . . . . . . . . . 15
4.5. Providing Traffic Security . . . . . . . . . . . . . . . . . . 15
4.5.1. IPsec  . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.5.2. SSL/TLS  . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.5.3. Remote Login . . . . . . . . . . . . . . . . . . . . . . . . 17
4.6. Denial of Service Attacks and Countermeasures  . . . . . . . . 18
4.6.1. Blind Denial of Service  . . . . . . . . . . . . . . . . . . 18
4.6.2. Distributed Denial of Service  . . . . . . . . . . . . . . . 19
4.6.3. Avoiding Denial of Service . . . . . . . . . . . . . . . . . 19
4.6.3.1. Make your attacker do more work than you do  . . . . . . . 19
4.6.3.2. Make your attacker prove they can receive data from you  . 19
4.6.4. Example: TCP SYN Floods  . . . . . . . . . . . . . . . . . . 20
4.6.5. Example: Photuris  . . . . . . . . . . . . . . . . . . . . . 20
4.7. Object vs. Channel Security  . . . . . . . . . . . . . . . . . 20
5. Writing Security Considerations Sections . . . . . . . . . . . . 21
6. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.1. SMTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.1.1. SMTP Security Considerations . . . . . . . . . . . . . . . . 23
6.1.1.1. SMTP over IPSEC  . . . . . . . . . . . . . . . . . . . . . 24
6.1.1.2. SMTP/TLS . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.1.1.3. S/MIME and PGP/MIME  . . . . . . . . . . . . . . . . . . . 25
6.1.1.4. Denial of Service  . . . . . . . . . . . . . . . . . . . . 25
6.1.1.5. Inappropriate Usage  . . . . . . . . . . . . . . . . . . . 25
6.2. VRRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.2.1. Security Considerations  . . . . . . . . . . . . . . . . . . 25
6.2.1.1. No Authentication  . . . . . . . . . . . . . . . . . . . . 26
6.2.1.2. Simple Text Password . . . . . . . . . . . . . . . . . . . 26
6.2.1.3. IP Authentication Header . . . . . . . . . . . . . . . . . 27
Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . . . . . 27
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Security Considerations  . . . . . . . . . . . . . . . . . . . . . . 30
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . . 30