Network Working Group                                             Y. Nir
Internet-Draft                                               Check Point
Intended status: Standards Track                           M. Westerlund
Expires: February 26, 2017                                      Ericsson
                                                         August 25, 2016

      Guidelines for Writing RFC Text on Security Considerations
                      draft-nir-saag-rfc3552bis-00

Abstract

   All RFCs are required to have a Security Considerations section.
   Historically, such sections have been relatively weak.  This
   document provides guidelines to RFC authors on how to write a good
   Security Considerations section.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on February 26, 2017.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Conventions Used in This Document
   2.  The Goals of Security
     2.1.  Communication Security
       2.1.1.  Confidentiality
       2.1.2.  Data Integrity
       2.1.3.  Peer Entity Authentication
     2.2.  Non-Repudiation
     2.3.  Systems Security
       2.3.1.  Unauthorized Usage
       2.3.2.  Inappropriate Usage
       2.3.3.  Denial of Service
   3.  The Internet Threat Model
     3.1.  Limited Threat Models
     3.2.  Passive Attacks
       3.2.1.  Confidentiality Violations
       3.2.2.  Password Sniffing
       3.2.3.  Offline Cryptographic Attacks
     3.3.  Active Attacks
     3.4.  Replay Attacks
     3.5.  Message Insertion
     3.6.  Message Deletion
     3.7.  Message Modification
     3.8.  Man-In-The-Middle
     3.9.  Topological Issues
       3.9.1.  On-path versus off-path
       3.9.2.  Link-local
   4.  Common Issues
     4.1.  User Authentication
       4.1.1.  Username/Password
       4.1.2.  Challenge Response and One Time Passwords
       4.1.3.  Shared Keys
       4.1.4.  Key Distribution Centers
       4.1.5.  Certificates
       4.1.6.  Some Uncommon Systems
       4.1.7.  Host Authentication
     4.2.  Generic Security Frameworks
     4.3.  Non-repudiation
     4.4.  Authorization vs. Authentication
       4.4.1.  Access Control Lists
       4.4.2.  Certificate Based Systems
     4.5.  Providing Traffic Security
       4.5.1.  IPsec
       4.5.2.  SSL/TLS
       4.5.3.  Remote Login
     4.6.  Denial of Service Attacks and Countermeasures
       4.6.1.  Blind Denial of Service
       4.6.2.  Distributed Denial of Service
       4.6.3.  Avoiding Denial of Service
       4.6.4.  Example: TCP SYN Floods
       4.6.5.  Example: Photuris
     4.7.  Object vs. Channel Security
     4.8.  Firewalls
   5.  Writing Security Considerations Sections
   6.  Examples
     6.1.  SMTP
       6.1.1.  Security Considerations
       6.1.2.  Communications security issues (NEW)
       6.1.3.  Denial of Service (NEW)
     6.2.  VRRP
       6.2.1.  Security Considerations
   7.  Acknowledgments
   8.  IANA Considerations
   9.  Security Considerations
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   All RFCs are required by [RFC7322] to contain a Security
   Considerations section.  The purpose of this is both to encourage
   document authors to consider security in their designs and to
   inform the reader of relevant security issues.  This memo is
   intended to provide guidance to RFC authors in service of both
   ends.

   This document is structured in three parts.  The first is a
   combination security tutorial and definition of common terms; the
   second is a series of guidelines for writing Security
   Considerations; the third is a series of examples.

1.1.  Conventions Used in This Document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

2.  The Goals of Security

   Most people speak of security as if it were a single monolithic
   property of a protocol or system.  Upon reflection, however, this
   is clearly not true.  Rather, security is a series of related but
   somewhat independent properties.  Not all of these properties are
   required for every application.

   We can loosely divide security goals into those related to
   protecting communications (COMMUNICATION SECURITY, also known as
   COMSEC) and those relating to protecting systems (ADMINISTRATIVE
   SECURITY or SYSTEM SECURITY).
   Since communications are carried out by systems and access to
   systems is through communications channels, these goals obviously
   interlock, but they can also be independently provided.

2.1.  Communication Security

   Different authors partition the goals of communication security
   differently.  The partitioning we've found most useful is to
   divide them into three major categories: CONFIDENTIALITY, DATA
   INTEGRITY and PEER ENTITY AUTHENTICATION.

2.1.1.  Confidentiality

   When most people think of security, they think of CONFIDENTIALITY.
   Confidentiality means that your data is kept secret from
   unintended listeners.  Usually, these listeners are simply
   eavesdroppers.  When an adversary taps your phone, it poses a risk
   to your confidentiality.

   Obviously, if you have secrets, then you are probably concerned
   about others discovering them.  Thus, at the very least, you want
   to maintain confidentiality.  When you see spies in the movies go
   into the bathroom and turn on all the water to foil bugging, the
   property they're looking for is confidentiality.

2.1.2.  Data Integrity

   The second primary goal is DATA INTEGRITY.  The basic idea here is
   that we want to make sure that the data we receive is the same
   data that the sender has sent.  In paper-based systems, some data
   integrity comes automatically.  When you receive a letter written
   in pen you can be fairly certain that no words have been removed
   by an attacker because pen marks are difficult to remove from
   paper.  However, an attacker could have easily added some marks to
   the paper and completely changed the meaning of the message.
   Similarly, it's easy to shorten the page to truncate the message.

   On the other hand, in the electronic world, since all bits look
   alike, it's trivial to tamper with messages in transit.
   You simply remove the message from the wire, copy out the parts
   you like, add whatever data you want, and generate a new message
   of your choosing, and the recipient is no wiser.  This is the
   moral equivalent of the attacker taking a letter you wrote, buying
   some new paper, and recopying the message, changing it as he does
   so.  It's just a lot easier to do electronically since all bits
   look alike.

2.1.3.  Peer Entity Authentication

   The third property we're concerned with is PEER ENTITY
   AUTHENTICATION.  What we mean by this is that we know that one of
   the endpoints in the communication is the one we intended.
   Without peer entity authentication, it's very difficult to provide
   either confidentiality or data integrity.  For instance, if we
   receive a message from Alice, the property of data integrity
   doesn't do us much good unless we know that it was in fact sent by
   Alice and not the attacker.  Similarly, if we want to send a
   confidential message to Bob, it's not of much value to us if we're
   actually sending a confidential message to the attacker.

   Note that peer entity authentication can be provided
   asymmetrically.  When you call someone on the phone, you can be
   fairly certain that you have the right person -- or at least that
   you got a person who's actually at the phone number you called.
   On the other hand, if they don't have caller ID, then the receiver
   of a phone call has no idea who's calling them.  Calling someone
   on the phone is an example of recipient authentication, since you
   know who the recipient of the call is, but they don't know
   anything about the sender.

   In messaging situations, you often wish to use peer entity
   authentication to establish the identity of the sender of a
   certain message.  In such contexts, this property is called DATA
   ORIGIN AUTHENTICATION.

2.2.  Non-Repudiation

   A system that provides endpoint authentication allows one party to
   be certain of the identity of someone with whom he is
   communicating.  When the system provides data integrity, a
   receiver can be sure of both the sender's identity and that he is
   receiving the data that that sender meant to send.  However, he
   cannot necessarily demonstrate this fact to a third party.  The
   ability to make this demonstration is called NON-REPUDIATION.

   There are many situations in which non-repudiation is desirable.
   Consider the situation in which two parties have signed a contract
   which one party wishes to unilaterally abrogate.  He might simply
   claim that he had never signed it in the first place.
   Non-repudiation prevents him from doing so, thus protecting the
   counterparty.

   Unfortunately, non-repudiation can be very difficult to achieve in
   practice, and naive approaches are generally inadequate.
   Section 4.3 describes some of the difficulties, which generally
   stem from the fact that the interests of the two parties are not
   aligned -- one party wishes to prove something that the other
   party wishes to deny.

2.3.  Systems Security

   In general, systems security is concerned with protecting one's
   machines and data.  The intent is that machines should be used
   only by authorized users and for the purposes that the owners
   intend.  Furthermore, they should be available for those purposes.
   Attackers should not be able to deprive legitimate users of
   resources.

2.3.1.  Unauthorized Usage

   Most systems are not intended to be completely accessible to the
   public.  Rather, they are intended to be used only by certain
   authorized individuals.  Although many Internet services are
   available to all Internet users, even those servers generally
   offer a larger set of services to specific users.
   For instance, Web servers often will serve data to any user, but
   restrict the ability to modify pages to specific users.  Such
   modifications by the general public would be UNAUTHORIZED USAGE.

2.3.2.  Inappropriate Usage

   Being an authorized user does not mean that you have free run of
   the system.  As we said above, some activities are restricted to
   authorized users, some to specific users, and some activities are
   generally forbidden to all but administrators.  Moreover, even
   activities which are in general permitted might be forbidden in
   some cases.  For instance, users may be permitted to send email
   but forbidden from sending files above a certain size, or files
   which contain viruses.  These are examples of INAPPROPRIATE USAGE.

2.3.3.  Denial of Service

   Recall that our third goal was that the system should be available
   to legitimate users.  A broad variety of attacks are possible
   which threaten such usage.  Such attacks are collectively referred
   to as DENIAL OF SERVICE attacks.  Denial of service attacks are
   often very easy to mount and difficult to stop.  Many such attacks
   are designed to consume machine resources, making it difficult or
   impossible to serve legitimate users.  Other attacks cause the
   target machine to crash, completely denying service to users.

3.  The Internet Threat Model

   A THREAT MODEL describes the capabilities that an attacker is
   assumed to be able to deploy against a resource.  It should
   contain such information as the resources available to an attacker
   in terms of information, computing capability, and control of the
   system.  The purpose of a threat model is twofold.  First, we wish
   to identify the threats we are concerned with.  Second, we wish to
   rule some threats explicitly out of scope.  Nearly every security
   system is vulnerable to a sufficiently dedicated and resourceful
   attacker.
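   The twofold purpose just described can be pictured as a toy data
   structure: one explicit set of attacker capabilities that the
   analysis considers, and another set that is ruled out of scope.
   This is a purely illustrative Python sketch; the capability names
   are invented here (the in-scope set loosely mirrors the Internet
   threat model discussed in this section), not drawn from any RFC.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """A toy threat model: capabilities considered vs. ruled out."""
    in_scope: set = field(default_factory=set)
    out_of_scope: set = field(default_factory=set)

    def considers(self, capability: str) -> bool:
        # A protocol analysis only defends against in-scope capabilities.
        return capability in self.in_scope

# Roughly the Internet threat model: the attacker owns the channel,
# but the end-systems themselves are assumed uncompromised.
internet_model = ThreatModel(
    in_scope={"read-any-pdu", "inject-forged-pdu", "delete-pdu", "modify-pdu"},
    out_of_scope={"compromised-end-system"},
)

assert internet_model.considers("inject-forged-pdu")
assert not internet_model.considers("compromised-end-system")
```

   Writing the out-of-scope set down explicitly is the point: it
   records which attacks the design deliberately does not address.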
   The Internet environment has a fairly well understood threat
   model.  In general, we assume that the end-systems engaging in a
   protocol exchange have not themselves been compromised.
   Protecting against an attack when one of the end-systems has been
   compromised is extraordinarily difficult.  It is, however,
   possible to design protocols which minimize the extent of the
   damage done under these circumstances.

   By contrast, we assume that the attacker has nearly complete
   control of the communications channel over which the end-systems
   communicate.  This means that the attacker can read any PDU
   (Protocol Data Unit) on the network and undetectably remove,
   change, or inject forged packets onto the wire.  This includes
   being able to generate packets that appear to be from a trusted
   machine.  Thus, even if the end-system with which you wish to
   communicate is itself secure, the Internet environment provides no
   assurance that packets which claim to be from that system in fact
   are.

   It's important to realize that the meaning of a PDU is different
   at different levels.  At the IP level, a PDU means an IP packet.
   At the TCP level, it means a TCP segment.  At the application
   layer, it means some kind of application PDU.  For instance, at
   the level of email, it might mean either an [RFC5322] message or a
   single SMTP command.  At the HTTP level, it might mean a request
   or response.

3.1.  Limited Threat Models

   As we've said, a resourceful and dedicated attacker can control
   the entire communications channel.  However, a large number of
   attacks can be mounted by an attacker with fewer resources.  A
   number of currently known attacks can be mounted by an attacker
   with limited control of the network.  For instance, password
   sniffing attacks can be mounted by an attacker who can only read
   arbitrary packets.  This is generally referred to as a PASSIVE
   ATTACK [RFC1704].
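   A purely passive attacker who can read a cleartext login exchange
   needs nothing more than string matching to recover the password.
   A minimal Python sketch, using a fabricated POP3-style capture
   (the bytes and names below are invented for illustration, not real
   traffic):

```python
from typing import Optional

# Fabricated capture of a cleartext POP3-style login.  A passive
# attacker on the same LAN could read these bytes without ever
# transmitting a single packet of its own.
captured = b"+OK POP3 ready\r\nUSER alice\r\n+OK\r\nPASS hunter2\r\n+OK logged in\r\n"

def sniff_password(stream: bytes) -> Optional[str]:
    # Scanning already-readable traffic is all a passive attack requires.
    for line in stream.split(b"\r\n"):
        if line.startswith(b"PASS "):
            return line[len(b"PASS "):].decode()
    return None

assert sniff_password(captured) == "hunter2"
```

   The capture phase is entirely passive; the attacker can later
   REPLAY the recovered password in an active login of its own, as
   Section 3.2.2 describes.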
   By contrast, Morris' sequence number guessing attack [SEQNUM] can
   be mounted by an attacker who can write but not read arbitrary
   packets.

   Any attack which requires the attacker to write to the network is
   known as an ACTIVE ATTACK.

   Thus, a useful way of organizing attacks is to divide them based
   on the capabilities required to mount the attack.  The rest of
   this section describes these categories and provides some examples
   of each category.

3.2.  Passive Attacks

   In a passive attack, the attacker reads packets off the network
   but does not write them.  The simplest way to mount such an attack
   is to simply be on the same LAN as the victim.  On most common LAN
   configurations, including Ethernet, 802.3, and FDDI, any machine
   on the wire can read all traffic destined for any other machine on
   the same LAN.  Note that switching hubs make this sort of sniffing
   substantially more difficult, since traffic destined for a machine
   only goes to the network segment which that machine is on.

   Similarly, an attacker who has control of a host in the
   communications path between two victim machines is able to mount a
   passive attack on their communications.  It is also possible to
   compromise the routing infrastructure to specifically arrange that
   traffic passes through a compromised machine.  This might involve
   an active attack on the routing infrastructure to facilitate a
   passive attack on a victim machine.

   Wireless communications channels deserve special consideration,
   especially with the recent and growing popularity of
   wireless-based LANs, such as those using 802.11.  Since the data
   is simply broadcast on well known radio frequencies, an attacker
   simply needs to be able to receive those transmissions.  Such
   channels are especially vulnerable to passive attacks.
   Although many such channels include cryptographic protection, it
   is often of such poor quality as to be nearly useless [WEP].

   In general, the goal of a passive attack is to obtain information
   which the sender and receiver would prefer to remain private.
   This private information may include credentials useful in the
   electronic world and/or passwords or credentials useful in the
   outside world, such as confidential business information.

3.2.1.  Confidentiality Violations

   The classic example of a passive attack is sniffing some
   inherently private data off of the wire.  For instance, despite
   the wide availability of SSL, many credit card transactions still
   traverse the Internet in the clear.  An attacker could sniff such
   a message and recover the credit card number, which can then be
   used to make fraudulent transactions.  Moreover, confidential
   business information is routinely transmitted over the network in
   the clear in email.

3.2.2.  Password Sniffing

   Another example of a passive attack is PASSWORD SNIFFING.
   Password sniffing is directed towards obtaining unauthorized use
   of resources.  Many protocols, including TELNET [RFC0854], POP
   [RFC1939], and NNTP [RFC3977], use a shared password to
   authenticate the client to the server.  Frequently, this password
   is transmitted from the client to the server in the clear over the
   communications channel.  An attacker who can read this traffic can
   therefore capture the password and REPLAY it.  In other words, the
   attacker can initiate a connection to the server, pose as the
   client, and log in using the captured password.

   Note that although the login phase of the attack is active, the
   actual password capture phase is passive.  Moreover, unless the
   server checks the originating address of connections, the login
   phase does not require any special control of the network.

3.2.3.  Offline Cryptographic Attacks

   Many cryptographic protocols are subject to OFFLINE ATTACKS.  In
   such a protocol, the attacker recovers data which has been
   processed using the victim's secret key and then mounts a
   cryptanalytic attack on that key.  Passwords make a particularly
   vulnerable target because they are typically low entropy.  A
   number of popular password-based challenge response protocols are
   vulnerable to DICTIONARY ATTACK.  The attacker captures a
   challenge-response pair and then proceeds to try entries from a
   list of common words (such as a dictionary file) until he finds a
   password that produces the right response.

   A similar attack can be mounted on a local network when NIS is
   used.  The Unix password is crypted using a one-way function, but
   tools exist to break such crypted passwords [KLEIN].  When NIS is
   used, the crypted password is transmitted over the local network,
   and an attacker can thus sniff the password and attack it.

   Historically, it has also been possible to exploit small operating
   system security holes to recover the password file using an active
   attack.  Access to an actual account can then be obtained by
   applying the aforementioned offline password recovery techniques
   to that file.  Thus we combine a low-level active attack with an
   offline passive attack.

3.3.  Active Attacks

   When an attack involves writing data to the network, we refer to
   this as an ACTIVE ATTACK.  When IP is used without IPsec, there is
   no authentication for the sender address.  As a consequence, it's
   straightforward for an attacker to create a packet with a source
   address of his choosing.  We'll refer to this as a SPOOFING
   ATTACK.

   Under certain circumstances, such a packet may be screened out by
   the network.  For instance, many packet filtering firewalls screen
   out all packets with source addresses on the INTERNAL network that
   arrive on the EXTERNAL interface.
   Note, however, that this provides no protection against an
   attacker who is inside the firewall.  In general, designers should
   assume that attackers can forge packets.

   However, the ability to forge packets does not go hand in hand
   with the ability to receive arbitrary packets.  In fact, there are
   active attacks that involve being able to send forged packets but
   not receive the responses.  We'll refer to these as BLIND ATTACKS.

   Note that not all active attacks require forging addresses.  For
   instance, the TCP SYN denial of service attack [TCPSYN] can be
   mounted successfully without disguising the sender's address.
   However, it is common practice to disguise one's address in order
   to conceal one's identity if an attack is discovered.

   Each protocol is susceptible to specific active attacks, but
   experience shows that a number of common patterns of attack can be
   adapted to any given protocol.  The next sections describe a
   number of these patterns and give specific examples of them as
   applied to known protocols.

3.4.  Replay Attacks

   In a REPLAY ATTACK, the attacker records a sequence of messages
   off of the wire and plays them back to the party which originally
   received them.  Note that the attacker does not need to be able to
   understand the messages.  He merely needs to capture and
   retransmit them.

   For example, consider the case where an S/MIME message is being
   used to request some service, such as a credit card purchase or a
   stock trade.  An attacker might wish to have the service executed
   twice, if only to inconvenience the victim.  He could capture the
   message and replay it, even though he can't read it, causing the
   transaction to be executed twice.

3.5.  Message Insertion

   In a MESSAGE INSERTION attack, the attacker forges a message with
   some chosen set of properties and injects it into the network.
   Often this message will have a forged source address in order to
   disguise the identity of the attacker.

   For example, a denial-of-service attack can be mounted by
   inserting a series of spurious TCP SYN packets directed towards
   the target host.  The target host responds with a SYN/ACK and
   allocates kernel data structures for the new connection.  The
   attacker never completes the 3-way handshake, so the allocated
   connection endpoints just sit there taking up kernel memory.
   Typical TCP stack implementations only allow some limited number
   of connections in this "half-open" state, and when this limit is
   reached, no more connections can be initiated, even from
   legitimate hosts.  Note that this attack is a blind attack, since
   the attacker does not need to process the victim's SYN/ACKs.

3.6.  Message Deletion

   In a MESSAGE DELETION attack, the attacker removes a message from
   the wire.  Morris' sequence number guessing attack [SEQNUM] often
   requires a message deletion attack to be performed successfully.
   In this blind attack, the host whose address is being forged will
   receive a spurious TCP SYN/ACK packet from the host being
   attacked.  Receipt of this SYN/ACK generates a RST, which would
   tear the illegitimate connection down.  In order to prevent this
   host from sending a RST so that the attack can be carried out
   successfully, Morris describes flooding this host to create queue
   overflows such that the SYN/ACK packet is lost and thus never
   responded to.

3.7.  Message Modification

   In a MESSAGE MODIFICATION attack, the attacker removes a message
   from the wire, modifies it, and reinjects it into the network.
   This sort of attack is particularly useful if the attacker wants
   to send some of the data in the message but also wants to change
   some of it.

   Consider the case where the attacker wants to attack an order for
   goods placed over the Internet.
   He doesn't have the victim's credit card number, so he waits for
   the victim to place the order and then replaces the delivery
   address (and possibly the goods description) with his own.  Note
   that this particular attack is known as a CUT-AND-PASTE attack,
   since the attacker cuts the credit card number out of the original
   message and pastes it into the new message.

   Another interesting example of a cut-and-paste attack is provided
   by [IPSPPROB].  If IPsec ESP is used without any MAC, then it is
   possible for the attacker to read traffic encrypted for a victim
   on the same machine.  The attacker attaches an IP header
   corresponding to a port he controls onto the encrypted IP packet.
   When the packet is received by the host, it will automatically be
   decrypted and forwarded to the attacker's port.  Similar
   techniques can be used to mount a session hijacking attack.  Both
   of these attacks can be avoided by always using message
   authentication when you use encryption.  Note that this attack
   only works if (1) no MAC check is being used, since this attack
   generates damaged packets, and (2) a host-to-host SA is being
   used, since a user-to-user SA will result in an inconsistency
   between the port associated with the SA and the target port.  If
   the receiving machine is single-user, then this attack is
   infeasible.

3.8.  Man-In-The-Middle

   A MAN-IN-THE-MIDDLE attack combines the above techniques in a
   special form: the attacker subverts the communication stream in
   order to pose as the sender to the receiver and as the receiver to
   the sender:

       What Alice and Bob think:
       Alice <----------------------------------------------> Bob

       What's happening:
       Alice <----------------> Attacker <----------------> Bob

   This differs fundamentally from the above forms of attack because
   it attacks the identity of the communicating parties, rather than
   the data stream itself.
   Consequently, many techniques which provide integrity of the
   communications stream are insufficient to protect against
   man-in-the-middle attacks.

   Man-in-the-middle attacks are possible whenever a protocol lacks
   PEER ENTITY AUTHENTICATION.  For instance, if an attacker can
   hijack the client TCP connection during the TCP handshake (perhaps
   by responding to the client's SYN before the server does), then
   the attacker can open another connection to the server and begin a
   man-in-the-middle attack.  It is also trivial to mount
   man-in-the-middle attacks on local networks via ARP spoofing --
   the attacker forges an ARP with the victim's IP address and his
   own MAC address.  Tools to mount this sort of attack are readily
   available.

   Note that it is only necessary to authenticate one side of the
   transaction in order to prevent man-in-the-middle attacks.  In
   such a situation the peers can establish an association in which
   only one peer is authenticated.  In such a system, an attacker can
   initiate an association posing as the unauthenticated peer but
   cannot transmit or access data being sent on a legitimate
   connection.  This is an acceptable situation in contexts such as
   Web e-commerce where only the server needs to be authenticated (or
   the client is independently authenticated via some
   non-cryptographic mechanism such as a credit card number).

3.9.  Topological Issues

   In practice, the assumption that it's equally easy for an attacker
   to read and generate all packets is false, since the Internet is
   not fully connected.  This has two primary implications.

3.9.1.  On-path versus off-path

   In order for a datagram to be transmitted from one host to
   another, it generally must traverse some set of intermediate links
   and gateways.  Such gateways are naturally able to read, modify,
   or remove any datagram transmitted along that path.
This makes it much 591 easier to mount a wide variety of attacks if you are on-path. 593 Off-path hosts can, of course, transmit arbitrary datagrams that 594 appear to come from any hosts but cannot necessarily receive 595 datagrams intended for other hosts. Thus, if an attack depends on 596 being able to receive data, off-path hosts must first subvert the 597 topology in order to place themselves on-path. This is by no means 598 impossible but is not necessarily trivial. 600 Application protocol designers MUST NOT assume that all attackers 601 will be off-path. Where possible, protocols SHOULD be designed to 602 resist attacks from attackers who have complete control of the 603 network. However, designers are expected to give more weight to 604 attacks which can be mounted by off-path attackers as well as on-path 605 ones. 607 3.9.2. Link-local 609 One specialized case of on-path is being on the same link. In some 610 situations, it's desirable to distinguish between hosts who are on 611 the local network and those who are not. The standard technique for 612 this is verifying the IP TTL value [RFC0791]. Since the TTL must be 613 decremented by each forwarder, a protocol can demand that TTL be set 614 to 255 and that all receivers verify the TTL. A receiver then has 615 some reason to believe that conforming packets are from the same 616 link. Note that this technique must be used with care in the 617 presence of tunneling systems, since such systems may pass packets 618 without decrementing TTL. 620 4. Common Issues 622 Although each system's security requirements are unique, certain 623 common requirements appear in a number of protocols. Often, when 624 naive protocol designers are faced with these requirements, they 625 choose an obvious but insecure solution even though better solutions 626 are available. This section describes a number of issues seen in 627 many protocols and the common pieces of security technology that may 628 be useful in addressing them.
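The TTL-255 check described in Section 3.9.2 can be sketched as follows. This is an illustrative sketch only: the function names are invented for this example, the sender side uses the Berkeley sockets API, and the receiver side assumes access to the raw IPv4 header.

```python
import socket

REQUIRED_TTL = 255

def open_link_local_sender() -> socket.socket:
    # Sender side: emit datagrams with TTL=255. Any forwarder would
    # decrement the TTL, so a receiver seeing 255 has reason to believe
    # the packet never crossed a router.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, REQUIRED_TTL)
    return s

def accept_if_link_local(ipv4_packet: bytes):
    # Receiver side: byte 8 of the IPv4 header carries the TTL.
    if len(ipv4_packet) < 20 or ipv4_packet[8] != REQUIRED_TTL:
        return None                       # reject: at least one hop was crossed
    ihl = (ipv4_packet[0] & 0x0F) * 4     # header length in bytes
    return ipv4_packet[ihl:]              # payload of a conforming packet
```

As the caveat in Section 3.9.2 notes, a tunnel may deliver packets without decrementing TTL, so a real deployment cannot treat this check as more than a hint about link locality.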
630 4.1. User Authentication 632 Essentially every system which wants to control access to its 633 resources needs some way to authenticate users. A nearly uncountable 634 number of such mechanisms have been designed for this purpose. The 635 next several sections describe some of these techniques. 637 4.1.1. Username/Password 639 The most common access control mechanism is simple USERNAME/PASSWORD. 640 The user provides a username and a reusable password to the host 641 which he wishes to use. This system is vulnerable to a simple 642 passive attack where the attacker sniffs the password off the wire 643 and then initiates a new session, presenting the password. This 644 threat can be mitigated by hosting the protocol over an encrypted 645 connection such as TLS or IPSEC. Unprotected (plaintext) username/ 646 password systems are not acceptable in IETF standards. 648 4.1.2. Challenge Response and One Time Passwords 650 Systems which desire greater security than USERNAME/PASSWORD often 651 employ either a ONE TIME PASSWORD [RFC2289] scheme or a CHALLENGE- 652 RESPONSE scheme. In a one time password scheme, the user is provided with a 653 list of passwords, which must be used in sequence, one time each. 654 (Often these passwords are generated from some secret key so the user 655 can simply compute the next password in the sequence.) SecureID and 656 DES Gold are variants of this scheme. In a challenge-response 657 scheme, the host and the user share some secret (which often is 658 represented as a password). In order to authenticate the user, the 659 host presents the user with a (randomly generated) challenge. The 660 user computes some function based on the challenge and the secret and 661 provides that to the host, which verifies it. Often this computation 662 is performed in a handheld device, such as a DES Gold card.
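The challenge-response exchange described above can be sketched as follows. This is an illustration only, not any specific IETF mechanism: the function names are hypothetical, and HMAC-SHA-256 merely stands in for whatever one-way function a given scheme defines.

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    # Host side: a fresh random challenge for each authentication attempt,
    # which is what defeats simple replay of an old response.
    return os.urandom(16)

def compute_response(shared_secret: bytes, challenge: bytes) -> bytes:
    # User side: derive the response from the challenge and the secret.
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify_response(shared_secret: bytes, challenge: bytes,
                    response: bytes) -> bool:
    expected = compute_response(shared_secret, challenge)
    # Constant-time comparison avoids leaking where a mismatch occurs.
    return hmac.compare_digest(expected, response)
```

Note that a scheme of this shape remains subject to the offline keysearch and dictionary attacks discussed in the following paragraphs if the shared secret is a guessable password.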
664 Both types of scheme provide protection against replay attack, but are 665 often still vulnerable to an OFFLINE KEYSEARCH ATTACK (a form of 666 passive attack): As previously mentioned, often the one-time password 667 or response is computed from a shared secret. If the attacker knows 668 the function being used, he can simply try all possible shared 669 secrets until he finds one that produces the right output. This is 670 made easier if the shared secret is a password, in which case he can 671 mount a DICTIONARY ATTACK -- meaning that he tries a list of common 672 words (or strings) rather than just random strings. 674 These systems are also often vulnerable to an active attack. Unless 675 communication security is provided for the entire session, the 676 attacker can simply wait until authentication has been performed and 677 hijack the connection. 679 4.1.3. Shared Keys 681 CHALLENGE-RESPONSE type systems can be made secure against dictionary 682 attack by using randomly generated shared keys instead of user- 683 generated passwords. If the keys are sufficiently large then 684 keysearch attacks become impractical. This approach works best when 685 the keys are configured into the end nodes rather than memorized and 686 typed in by users, since users have trouble remembering sufficiently 687 long keys. 689 Like password-based systems, shared key systems suffer from 690 management problems. Each pair of communicating parties must have 691 their own agreed-upon key, which leads to there being a lot of keys. 693 4.1.4. Key Distribution Centers 695 One approach to solving the large number of keys problem is to use an 696 online "trusted third party" that mediates between the authenticating 697 parties. The trusted third party (generally called a KEY 698 DISTRIBUTION CENTER (KDC)) shares a symmetric key or password with 699 each party in the system.
When one party wishes to communicate with another, it first contacts the KDC, which gives it a 700 TICKET containing a randomly generated symmetric key encrypted under 701 both peers' keys. Since only the proper peers can decrypt the 702 symmetric key, the ticket can be used to establish a trusted 703 association. By far the most popular KDC system is Kerberos 704 [RFC4120]. 706 4.1.5. Certificates 708 A simple approach is to have all users have CERTIFICATES [RFC5280] 709 which they then use to authenticate in some protocol-specific way, as 710 in TLS [RFC5246] or S/MIME [RFC5751]. A certificate is a signed 711 credential binding an entity's identity to its public key. The 712 signer of a certificate is a CERTIFICATE AUTHORITY (CA), whose 713 certificate may itself be signed by some superior CA. In order for 714 this system to work, trust in one or more CAs must be established in 715 an out-of-band fashion. Such CAs are referred to as TRUSTED ROOTS or 716 ROOT CAS. The primary obstacle to this approach in client-server 717 type systems is that it requires clients to have certificates, which 718 can be a deployment problem. 720 4.1.6. Some Uncommon Systems 722 There are ways to do a better job than the schemes mentioned above, 723 but they typically don't add much security unless communications 724 security (at least message integrity) will be employed to secure the 725 connection, because otherwise the attacker can merely hijack the 726 connection after authentication has been performed. A number of 727 protocols ([EKE], [SPEKE], [SRP]) allow one to securely bootstrap a 728 user's password into a shared key which can be used as input to a 729 cryptographic protocol. One major obstacle to the deployment of 730 these protocols has been that their Intellectual Property status is 731 extremely unclear. Similarly, the user can authenticate using public 732 key certificates (e.g., S-HTTP client authentication). Typically 733 these methods are used as part of a more complete security protocol. 735 4.1.7.
Host Authentication 737 Host authentication presents a special problem. Quite commonly, the 738 addresses of services are presented using a DNS hostname, for 739 instance as a Uniform Resource Locator (URL) [RFC1738]. When 740 requesting such a service, one has to ensure that the entity that one 741 is talking to not only has a certificate but that that certificate 742 corresponds to the expected identity of the server. The important 743 thing to have is a secure binding between the certificate and the 744 expected hostname. 746 For instance, it is usually not acceptable for the certificate to 747 contain an identity in the form of an IP address if the request was 748 for a given hostname. This does not provide end-to-end security 749 because the hostname-IP mapping is not secure unless secure name 750 resolution (DNSSEC) is being used. This is a particular problem when 751 the hostname is presented at the application layer but the 752 authentication is performed at some lower layer. 754 4.2. Generic Security Frameworks 756 Providing security functionality in a protocol can be difficult. In 757 addition to the problem of choosing authentication and key 758 establishment mechanisms, one needs to integrate them into a protocol. 759 One response to this problem (embodied in IPsec and TLS) is to create 760 a lower-level security protocol and then insist that new protocols be 761 run over that protocol. Another approach that has recently become 762 popular is to design generic application layer security frameworks. 763 The idea is that you design a protocol that allows you to negotiate 764 various security mechanisms in a pluggable fashion. Application 765 protocol designers then arrange to carry the security protocol PDUs 766 in their application protocol. Examples of such frameworks include 767 GSS-API [RFC2743] and SASL [RFC4422]. 769 The generic framework approach has a number of problems. First, it 770 is highly susceptible to DOWNGRADE ATTACKS.
In a downgrade attack, 771 an active attacker tampers with the negotiation in order to force the 772 parties to negotiate weaker protection than they otherwise would. 773 It's possible to include an integrity check after the negotiation and 774 key establishment have both completed, but the strength of this 775 integrity check is necessarily limited to the weakest common 776 algorithm. This problem exists with any negotiation approach, but 777 generic frameworks exacerbate it by encouraging the application 778 protocol author to just specify the framework rather than think hard 779 about the appropriate underlying mechanisms, particularly since the 780 mechanisms can vary widely in the degree of security offered. 782 Another problem is that it's not always obvious how the various 783 security features in the framework interact with the application 784 layer protocol. For instance, SASL can be used merely as an 785 authentication framework -- in which case the SASL exchange occurs 786 but the rest of the connection is unprotected -- but it can also negotiate 787 traffic protection, such as via GSS, as a mechanism. Knowing under 788 which circumstances traffic protection is optional and under which it is 789 required requires thinking about the threat model. 791 In general, authentication frameworks are most useful in situations 792 where new protocols are being added to systems with pre-existing 793 legacy authentication systems. A framework allows new installations 794 to provide better authentication while not forcing existing sites 795 to completely redo their legacy authentication systems. When the 796 security requirements of a system can be clearly identified and only 797 a few forms of authentication are used, choosing a single security 798 mechanism leads to greater simplicity and predictability.
In 799 situations where a framework is to be used, designers SHOULD 800 carefully examine the framework's options and specify only the 801 mechanisms that are appropriate for their particular threat model. 802 If a framework is necessary, designers SHOULD choose one of the 803 established ones instead of designing their own. 805 4.3. Non-repudiation 807 The naive approach to non-repudiation is simply to use public-key 808 digital signatures over the content. The party who wishes to be 809 bound (the SIGNING PARTY) digitally signs the message in question. 810 The counterparty (the RELYING PARTY) can later point to the digital 811 signature as proof that the signing party at one point agreed to the 812 disputed message. Unfortunately, this approach is insufficient. 814 The easiest way for the signing party to repudiate the message is by 815 claiming that his private key has been compromised and that some 816 attacker (though not necessarily the relying party) signed the 817 disputed message. In order to defend against this attack the relying 818 party needs to demonstrate that the signing party's key had not been 819 compromised at the time of the signature. This requires substantial 820 infrastructure, including archival storage of certificate revocation 821 information and timestamp servers to establish the time that the 822 message was signed. 824 Additionally, the relying party might attempt to trick the signing 825 party into signing one message while thinking he's signing another. 826 This problem is particularly severe when the relying party controls 827 the infrastructure that the signing party uses for signing, such as 828 in kiosk situations. In many such situations the signing party's key 829 is kept on a smartcard but the message to be signed is displayed by 830 the relying party. 832 All of these complications make non-repudiation a difficult service 833 to deploy in practice. 835 4.4. Authorization vs. 
Authentication 837 AUTHORIZATION is the process by which one determines whether an 838 authenticated party has permission to access a particular resource or 839 service. Although the two are tightly bound, it is important to realize that 840 authentication and authorization are two separate mechanisms. 841 Perhaps because of this tight coupling, authentication is sometimes 842 mistakenly thought to imply authorization. Authentication simply 843 identifies a party; authorization defines whether they can perform a 844 certain action. 846 Authorization necessarily relies on authentication, but 847 authentication alone does not imply authorization. Rather, before 848 granting permission to perform an action, the authorization mechanism 849 must be consulted to determine whether that action is permitted. 851 4.4.1. Access Control Lists 853 One common form of authorization mechanism is an access control list 854 (ACL), which lists users that are permitted access to a resource. 855 Since assigning individual authorization permissions to each resource 856 is tedious, resources are often hierarchically arranged so that the 857 parent resource's ACL is inherited by child resources. This allows 858 administrators to set top level policies and override them when 859 necessary. 861 4.4.2. Certificate Based Systems 863 While the distinction between authentication and authorization is 864 intuitive when using simple authentication mechanisms such as 865 username and password (i.e., everyone understands the difference 866 between the administrator account and a user account), with more 867 complex authentication mechanisms the distinction is sometimes lost. 869 With certificates, for instance, presenting a valid signature does 870 not imply authorization. The signature must be backed by a 871 certificate chain that contains a trusted root, and that root must be 872 trusted in the given context.
For instance, users who possess 873 certificates issued by the Acme MIS CA may have different web access 874 privileges than users who possess certificates issued by the Acme 875 Accounting CA, even though both of these CAs are "trusted" by the 876 Acme web server. 878 Mechanisms for enforcing these more complicated properties have not 879 yet been completely explored. One approach is simply to attach 880 policies to ACLs describing what sorts of certificates are trusted. 881 Another approach is to carry that information with the certificate, 882 either as a certificate extension/attribute (PKIX [RFC5280], SPKI 883 [RFC2693]) or as a separate "Attribute Certificate". 885 4.5. Providing Traffic Security 887 Securely designed protocols should provide some mechanism for 888 securing (meaning integrity protecting, authenticating, and possibly 889 encrypting) all sensitive traffic. One approach is to secure the 890 protocol itself, as in DNSSEC [RFC4033], S/MIME [RFC5751] or S-HTTP 891 [RFC2660]. Although this provides security which is best suited to 892 the protocol, it also requires considerable effort to get right. 894 Many protocols can be adequately secured using one of the available 895 channel security systems. We'll discuss the two most common, IPsec 896 [RFC4302][RFC4303] and TLS [RFC5246]. 898 4.5.1. IPsec 900 The IPsec protocols (specifically, AH and ESP) can provide 901 transmission security for all traffic between two hosts. The IPsec 902 protocols support varying granularities of user identification, 903 including for example "IP Subnet", "IP Address", "Fully Qualified 904 Domain Name", and individual user ("Mailbox name"). These varying 905 levels of identification are employed as inputs to access control 906 facilities that are an intrinsic part of IPsec. However, a given 907 IPsec implementation might not support all identity types.
In 908 particular, security gateways may not provide user-to-user 909 authentication or have mechanisms to provide that authentication 910 information to applications. 912 When AH or ESP is used, the application programmer might not need to 913 do anything (if AH or ESP has been enabled system-wide) or might need 914 to make specific software changes (e.g., adding specific setsockopt() 915 calls) -- depending on the AH or ESP implementation being used. 916 Unfortunately, APIs for controlling IPsec implementations are not yet 917 standardized. 919 The primary obstacle to using IPsec to secure other protocols is 920 deployment. The major use of IPsec at present is for VPN 921 applications, especially for remote network access. Without 922 extremely tight coordination between security administrators and 923 application developers, VPN usage is not well suited to providing 924 security services for individual applications since it is difficult 925 for such applications to determine what security services have in 926 fact been provided. 928 IPsec deployment in host-to-host environments has been slow. Unlike 929 application security systems such as TLS, adding IPsec to a non-IPsec 930 system generally involves changing the operating system, either by 931 modifying the kernel or by installing new drivers. This is a 932 substantially greater undertaking than simply installing a new 933 application. However, recent versions of a number of commodity 934 operating systems include IPsec stacks, so deployment is becoming 935 easier. 937 In environments where IPsec is sure to be available, it represents a 938 viable option for protecting application communications traffic. If 939 the traffic to be protected is UDP, IPsec and application-specific 940 object security are the only options. However, designers MUST NOT 941 assume that IPsec will be available.
A security policy for a generic 942 application layer protocol SHOULD NOT simply state that IPsec must be 943 used, unless there is some reason to believe that IPsec will be 944 available in the intended deployment environment. In environments 945 where IPsec may not be available and the traffic is solely TCP, TLS 946 is the method of choice, since the application developer can easily 947 ensure its presence by including a TLS implementation in his package. 949 In the special case of IPv6, both AH and ESP are mandatory to 950 implement. Hence, it is reasonable to assume that AH/ESP are already 951 available for IPv6-only protocols or IPv6-only deployments. However, 952 automatic key management (IKE) is not required to implement, so 953 protocol designers should not assume it will be present. [RFC5406] 954 provides quite a bit of guidance on when IPsec is a good choice. 956 4.5.2. SSL/TLS 958 Currently, the most common approach is to use SSL or its successor 959 TLS. They provide channel security for a TCP connection at the 960 application level. That is, they run over TCP. SSL implementations 961 typically provide a Berkeley Sockets-like interface for easy 962 programming. The primary issue when designing a protocol solution 963 around TLS is to differentiate between connections protected using 964 TLS and those which are not. 966 The two primary approaches are to use a separate well-known port for 967 TLS connections (e.g., the HTTP over TLS port is 443) [RFC2818] or to 968 have a mechanism for negotiating upward from the base protocol to TLS 969 as in UPGRADE [RFC2817] or STARTTLS [RFC3207]. When an upward 970 negotiation strategy is used, care must be taken to ensure that an 971 attacker cannot force a clear connection when both parties wish to 972 use TLS. 974 Note that TLS depends upon a reliable protocol such as TCP or SCTP. 975 This produces two notable difficulties. First, it cannot be used to 976 secure datagram protocols that use UDP.
Second, TLS is susceptible 977 to IP layer attacks that IPsec is not. Typically, these attacks take 978 some form of denial of service or connection assassination. For 979 instance, an attacker might forge a TCP RST to shut down SSL 980 connections. TLS has mechanisms to detect truncation attacks but 981 these merely allow the victim to know he is being attacked and do not 982 provide connection survivability in the face of such attacks. By 983 contrast, if IPsec were being used, such a forged RST could be 984 rejected without affecting the TCP connection. If forged RSTs or 985 other such attacks on the TCP connection are a concern, then AH/ESP 986 or the TCP Authentication Option (TCP-AO) [RFC5925] are the preferred 987 choices. 989 4.5.2.1. Virtual Hosts 991 If the "separate ports" approach to TLS is used, then TLS will be 992 negotiated before any application-layer traffic is sent. This can 993 cause a problem with protocols that use virtual hosts, such as HTTP 994 [RFC7230], since the server does not know which certificate to offer 995 the client during the TLS handshake. The TLS hostname extension 996 [RFC5246] can be used to solve this problem, although it is too new 997 to have seen wide deployment. 999 4.5.2.2. Remote Authentication and TLS 1001 One difficulty with using TLS is that the server is authenticated via 1002 a certificate. This can be inconvenient in environments where 1003 previously the only form of authentication was a password shared 1004 between client and server. It's tempting to use TLS without an 1005 authenticated server (i.e., with anonymous DH or a self-signed RSA 1006 certificate) and then authenticate via some challenge-response 1007 mechanism such as SASL with CRAM-MD5. 1009 Unfortunately, this composition of SASL and TLS is less strong than 1010 one would expect. It's easy for an active attacker to hijack this 1011 connection. 
The attacker man-in-the-middles the SSL connection 1012 (remember we're not authenticating the server, which is what 1013 ordinarily prevents this attack) and then simply proxies the SASL 1014 handshake. From then on, it's as if the connection were in the 1015 clear, at least as far as that attacker is concerned. In order to 1016 prevent this attack, the client needs to verify the server's 1017 certificate. 1019 However, if the server is authenticated, challenge-response becomes 1020 less desirable. If you already have a hardened channel then simple 1021 passwords are fine. In fact, they're arguably superior to challenge- 1022 response since they do not require that the password be stored in the 1023 clear on the server. Thus, compromise of the key file with 1024 challenge-response systems is more serious than if simple passwords 1025 were used. 1027 Note that if the client has a certificate then SSL-based client 1028 authentication can be used. To make this easier, SASL provides the 1029 EXTERNAL mechanism, whereby the SASL client can tell the server 1030 "examine the outer channel for my identity". Obviously, this is not 1031 subject to the layering attacks described above. 1033 4.5.3. Remote Login 1035 In some special cases it may be worth providing channel-level 1036 security directly in the application rather than using IPSEC or SSL/ 1037 TLS. One such case is remote terminal security. Characters are 1038 typically delivered from client to server one character at a time. 1039 Since SSL/TLS and AH/ESP authenticate and encrypt every packet, this 1040 can mean a data expansion of 20-fold. The telnet encryption option 1041 [RFC2946] prevents this expansion by foregoing message integrity. 1043 When using remote terminal service, it's often desirable to securely 1044 perform other sorts of communications services.
In addition to 1045 providing remote login, SSH [RFC4253] also provides secure port 1046 forwarding for arbitrary TCP ports, thus allowing users to run arbitrary 1047 TCP-based applications over the SSH channel. Note that SSH Port 1048 Forwarding can be a security issue if it is improperly used to 1049 circumvent firewalls and expose insecure internal 1050 applications to the outside world. 1052 4.6. Denial of Service Attacks and Countermeasures 1054 Denial of service attacks are all too frequently viewed as a fact of 1055 life. One problem is that an attacker can often choose from one of 1056 many denial of service attacks to inflict upon a victim, and because 1057 most of these attacks cannot be thwarted, common wisdom frequently 1058 assumes that there is no point protecting against one kind of denial 1059 of service attack when there are many other denial of service attacks 1060 that are possible but that cannot be prevented. 1062 However, not all denial of service attacks are equal and more 1063 importantly, it is possible to design protocols so that denial of 1064 service attacks are made more difficult, if not impractical. Recent 1065 SYN flood attacks [TCPSYN] demonstrate both of these properties: SYN 1066 flood attacks are so easy, anonymous, and effective that they are 1067 more attractive to attackers than other attacks; and it is the 1068 design of TCP that enables this attack. 1070 Because complete DoS protection is so difficult, security against DoS 1071 must be dealt with pragmatically. In particular, some attacks which 1072 would be desirable to defend against cannot be defended against 1073 economically. The goal should be to manage risk by defending against 1074 attacks with sufficiently high ratios of severity to cost of defense. 1075 Both severity of attack and cost of defense change as technology 1076 changes and therefore so does the set of attacks which should be 1077 defended against.
1079 Authors of internet standards MUST describe which denial of service 1080 attacks their protocol is susceptible to. This description MUST 1081 include the reasons it was either unreasonable or out of scope to 1082 attempt to avoid these denial of service attacks. 1084 4.6.1. Blind Denial of Service 1086 BLIND denial of service attacks are particularly pernicious. With a 1087 blind attack the attacker has a significant advantage. If the 1088 attacker must be able to receive traffic from the victim, then he 1089 must either subvert the routing fabric or use his own IP address. 1090 Either provides an opportunity for the victim to track the attacker 1091 and/or filter out his traffic. With a blind attack the attacker can 1092 use forged IP addresses, making it extremely difficult for the victim 1093 to filter out his packets. The TCP SYN flood attack is an example of 1094 a blind attack. Designers should make every attempt possible to 1095 prevent blind denial of service attacks. 1097 4.6.2. Distributed Denial of Service 1099 Even more dangerous are DISTRIBUTED denial of service attacks (DDoS) 1100 [DDOS]. In a DDoS the attacker arranges for a number of machines to 1101 attack the target machine simultaneously. Usually this is 1102 accomplished by infecting a large number of machines with a program 1103 that allows remote initiation of attacks. The machines actually 1104 performing the attack are called ZOMBIEs and are likely owned by 1105 unsuspecting third parties in an entirely different location from the 1106 true attacker. DDoS attacks can be very hard to counter because the 1107 zombies often appear to be making legitimate protocol requests and 1108 simply crowd out the real users. DDoS attacks can be difficult to 1109 thwart, but protocol designers are expected to be cognizant of these 1110 forms of attack while designing protocols. 1112 4.6.3. 
Avoiding Denial of Service 1114 There are two common approaches to making denial of service attacks 1115 more difficult: 1117 4.6.3.1. Make your attacker do more work than you do 1119 If an attacker consumes more of his resources than yours when 1120 launching an attack, attackers with fewer resources than you will be 1121 unable to launch effective attacks. One common technique is to 1122 require that the attacker perform a time-intensive operation, such as a 1123 cryptographic operation. Note that an attacker can still mount a 1124 denial of service attack if he can muster sufficient 1125 CPU power. For instance, this technique would not stop the 1126 distributed attacks described in [TCPSYN]. 1128 4.6.3.2. Make your attacker prove they can receive data from you 1130 A blind attack can be subverted by forcing the attacker to prove that 1131 they can receive data from the victim. A common technique is to 1132 require that the attacker reply using information that was gained 1133 earlier in the message exchange. If this countermeasure is used, the 1134 attacker must either use his own address (making him easy to track) 1135 or forge an address which will be routed back along a path that 1136 traverses the host from which the attack is being launched. 1138 Hosts on small subnets are thus useless to the attacker (at least in 1139 the context of a spoofing attack) because the attack can be traced 1140 back to a subnet (which should be sufficient for locating the 1141 attacker) so that anti-attack measures can be put into place (for 1142 instance, a boundary router can be configured to drop all traffic 1143 from that subnet). 1147 4.6.4. Example: TCP SYN Floods 1149 TCP/IP is vulnerable to SYN flood attacks (which are described in 1150 Section 3.5) because of the design of the 3-way handshake.
First, an 1151 attacker can force a victim to consume significant resources (in this 1152 case, memory) by sending a single packet. Second, because the 1153 attacker can perform this action without ever having received data 1154 from the victim, the attack can be performed anonymously (and 1155 therefore using a large number of forged source addresses). 1157 4.6.5. Example: Photuris 1159 Photuris [RFC2522] specifies an anti-clogging mechanism that prevents 1160 attacks on Photuris that resemble the SYN flood attack. Photuris 1161 employs a time-variant secret to generate a "cookie" which is 1162 returned to the attacker. This cookie must be returned in subsequent 1163 messages for the exchange to progress. The interesting feature is 1164 that this cookie can be regenerated by the victim later in the 1165 exchange, and thus no state need be retained by the victim until 1166 after the attacker has proven that he can receive packets from the 1167 victim. 1169 4.7. Object vs. Channel Security 1171 It's useful to make the conceptual distinction between object 1172 security and channel security. Object security refers to security 1173 measures which apply to entire data objects. Channel security 1174 measures provide a secure channel over which objects may be carried 1175 transparently but the channel has no special knowledge about object 1176 boundaries. 1178 Consider the case of an email message. When it's carried over an 1179 IPSEC or TLS secured connection, the message is protected during 1180 transmission. However, it is unprotected in the receiver's mailbox, 1181 and in intermediate spool files along the way. Moreover, since mail 1182 servers generally run as a daemon, not a user, authentication of 1183 messages generally merely means authentication of the daemon, not the 1184 user. Finally, since mail transport is hop-by-hop, even if the user 1185 authenticates to the first-hop relay, the authentication can't be 1186 safely verified by the receiver.
1188 By contrast, when an email message is protected with S/MIME or 1189 OpenPGP, the entire message is encrypted and integrity protected 1190 until it is examined and decrypted by the recipient. It also 1191 provides strong authentication of the actual sender, as opposed to 1192 the machine the message came from. This is object security. 1193 Moreover, the receiver can prove the signed message's authenticity to 1194 a third party. 1196 Note that the difference between object and channel security is a 1197 matter of perspective. Object security at one layer of the protocol 1198 stack often looks like channel security at the next layer up. So, 1199 from the perspective of the IP layer, each packet looks like an 1200 individually secured object. But from the perspective of a web 1201 client, IPSEC just provides a secure channel. 1203 The distinction isn't always clear-cut. For example, S-HTTP provides 1204 object level security for a single HTTP transaction, but a web page 1205 typically consists of multiple HTTP transactions (the base page and 1206 numerous inline images). Thus, from the perspective of the total web 1207 page, this looks rather more like channel security. Object security 1208 for a web page would consist of security for the transitive closure 1209 of the page and all its embedded content as a single unit. 1211 4.8. Firewalls 1213 It's common security practice in modern networks to partition the 1214 network into external and internal networks using a firewall. The 1215 internal network is then assumed to be secure and only limited 1216 security measures are used there. The internal portion of such a 1217 network is often called a WALLED GARDEN. 1219 Internet protocol designers cannot safely assume that their protocols 1220 will be deployed in such an environment, for three reasons. 
First, 1221 protocols which were originally designed to be deployed in closed 1222 environments often are later deployed on the Internet, thus creating 1223 serious vulnerabilities. 1225 Second, networks which appear to be topologically disconnected may 1226 not be. One reason may be that the network has been reconfigured to 1227 allow access by the outside world. Moreover, firewalls are 1228 increasingly passing generic application layer protocols such as 1229 [SOAP] or HTTP. Network protocols which are based on these generic 1230 protocols cannot in general assume that a firewall will protect them. 1231 Finally, one of the most serious security threats to systems is from 1232 insiders, not outsiders. Since insiders by definition have access to 1233 the internal network, topological protections such as firewalls will 1234 not protect against them. 1236 5. Writing Security Considerations Sections 1238 While it is not a requirement that any given protocol or system be 1239 immune to all forms of attack, it is still necessary for authors to 1240 consider as many forms as possible. Part of the purpose of the 1241 Security Considerations section is to explain what attacks are out of 1242 scope and what countermeasures can be applied to defend against them. 1244 There should be a clear description of the kinds of threats on the 1245 described protocol or technology. This should be approached as an 1246 effort to perform "due diligence" in describing all known or 1247 foreseeable risks and threats to potential implementers and users. 1249 Authors MUST describe 1251 1. which attacks are out of scope (and why!) 1253 2. which attacks are in-scope 1255 2.1. and the protocol is susceptible to 1257 2.2. and the protocol protects against 1259 At least the following forms of attack MUST be considered: 1260 eavesdropping, replay, message insertion, deletion, modification, and 1261 man-in-the-middle. Potential denial of service attacks MUST be 1262 identified as well.
If the protocol incorporates cryptographic 1263 protection mechanisms, it should be clearly indicated which portions 1264 of the data are protected and what the protections are (i.e., 1265 integrity only, confidentiality, and/or endpoint authentication, 1266 etc.). Some indication should also be given of what sorts of attacks 1267 the cryptographic protection is susceptible to. Data which should be 1268 held secret (keying material, random seeds, etc.) should be clearly 1269 labeled. 1271 If the technology involves authentication, particularly user-host 1272 authentication, the security of the authentication method MUST be 1273 clearly specified. That is, authors MUST document the assumptions 1274 that the security of this authentication method is predicated upon. 1275 For instance, in the case of the UNIX username/password login method, 1276 a statement to the effect of: 1278 Authentication in the system is secure only to the extent that it 1279 is difficult to guess or obtain an ASCII password that is a maximum 1280 of 8 characters long. These passwords can be obtained by sniffing 1281 telnet sessions or by running the 'crack' program using the 1282 contents of the /etc/passwd file. Attempts to protect against on- 1283 line password guessing by (1) disconnecting after several 1284 unsuccessful login attempts and (2) waiting between successive 1285 password prompts are effective only to the extent that attackers 1286 are impatient. 1288 Because the /etc/passwd file maps usernames to user ids, groups, 1289 etc., it must be world readable. In order to permit this usage but 1290 make running crack more difficult, the file is often split into 1291 /etc/passwd and a 'shadow' password file. The shadow file is not 1292 world readable and contains the encrypted password. The regular 1293 /etc/passwd file contains a dummy password in its place. 1295 It is insufficient to simply state that one's protocol should be run 1296 over some lower layer security protocol.
If a system relies upon 1297 lower layer security services for security, the protections those 1298 services are expected to provide MUST be clearly specified. In 1299 addition, the resultant properties of the combined system need to be 1300 specified. 1302 Note: In general, the IESG will not approve standards track protocols 1303 which do not provide for strong authentication, either internal to 1304 the protocol or through tight binding to a lower layer security 1305 protocol. 1307 The threat environment addressed by the Security Considerations 1308 section MUST at a minimum include deployment across the global 1309 Internet across multiple administrative boundaries without assuming 1310 that firewalls are in place, even if only to provide justification 1311 for why such consideration is out of scope for the protocol. It is 1312 not acceptable to only discuss threats applicable to LANs and ignore 1313 the broader threat environment. All IETF standards-track protocols 1314 are considered likely to have deployment in the global Internet. In 1315 some cases, there might be an Applicability Statement discouraging 1316 use of a technology or protocol in a particular environment. 1317 Nonetheless, the security issues of broader deployment should be 1318 discussed in the document. 1320 There should be a clear description of the residual risk to the user 1321 or operator of that protocol after threat mitigation has been 1322 deployed. Such risks might arise from compromise in a related 1323 protocol (e.g., IPsec is useless if key management has been 1324 compromised), from incorrect implementation, compromise of the 1325 security technology used for risk reduction (e.g., a cipher with a 1326 40-bit key), or there might be risks that are not addressed by the 1327 protocol specification (e.g., denial of service attacks on an 1328 underlying link protocol). 
Particular care should be taken in 1329 situations where the compromise of a single system would compromise 1330 an entire protocol. For instance, in general protocol designers 1331 assume that end-systems are inviolate and don't worry about physical 1332 attack. However, in cases (such as a certificate authority) where 1333 compromise of a single system could lead to widespread compromises, 1334 it is appropriate to consider systems and physical security as well. 1336 There should also be some discussion of potential security risks 1337 arising from potential misapplications of the protocol or technology 1338 described in the RFC. This might be coupled with an Applicability 1339 Statement for that RFC. 1341 6. Examples 1343 This section consists of some example security considerations 1344 sections, intended to give the reader a flavor of what's intended by 1345 this document. 1347 The first example is a 'retrospective' example, applying the criteria 1348 of this document to an existing widely deployed protocol, SMTP. The 1349 second example is a good security considerations section clipped from 1350 a current protocol. 1352 6.1. SMTP 1354 When RFC 821 was written, Security Considerations sections were not 1355 required in RFCs, and none is contained in that document. [RFC5321] 1356 updated RFC 821 and added a detailed security considerations section. 1357 We reproduce here the Security Considerations section from that 1358 document (with new section numbers). Our comments are indented and 1359 prefaced with 'NOTE:'. We also add a number of new sections to cover 1360 topics we consider important. Those sections are marked with (NEW) 1361 in the section header. 1363 6.1.1. Security Considerations 1365 6.1.1.1.
Mail Security and Spoofing 1367 SMTP mail is inherently insecure in that it is feasible for even 1368 fairly casual users to negotiate directly with receiving and relaying 1369 SMTP servers and create messages that will trick a naive recipient 1370 into believing that they came from somewhere else. Constructing such 1371 a message so that the "spoofed" behavior cannot be detected by an 1372 expert is somewhat more difficult, but not sufficiently so as to be a 1373 deterrent to someone who is determined and knowledgeable. 1374 Consequently, as knowledge of Internet mail increases, so does the 1375 knowledge that SMTP mail inherently cannot be authenticated, or 1376 integrity checks provided, at the transport level. Real mail 1377 security lies only in end-to-end methods involving the message 1378 bodies, such as those which use digital signatures (see e.g., 1379 Section 6.1.2.3). 1381 NOTE: One bad approach to sender authentication is IDENT [RFC1414] 1382 in which the receiving mail server contacts the alleged sender and 1383 asks for the username of the sender. This is a bad idea for a 1384 number of reasons, including but not limited to relaying, TCP 1385 connection hijacking, and simple lying by the origin server. 1386 Aside from the fact that IDENT is of low security value, use of 1387 IDENT by receiving sites can lead to operational problems. Many 1388 sending sites blackhole IDENT requests, thus causing mail to be 1389 held until the receiving server's IDENT request times out. 1391 Various protocol extensions and configuration options that provide 1392 authentication at the transport level (e.g., from an SMTP client to 1393 an SMTP server) improve somewhat on the traditional situation 1394 described above. 
However, unless they are accompanied by careful 1395 handoffs of responsibility in a carefully-designed trust environment, 1396 they remain inherently weaker than end-to-end mechanisms which use 1397 digitally signed messages rather than depending on the integrity of 1398 the transport system. 1400 Efforts to make it more difficult for users to set envelope return 1401 path and header "From" fields to point to valid addresses other than 1402 their own are largely misguided: they frustrate legitimate 1403 applications in which mail is sent by one user on behalf of another 1404 or in which error (or normal) replies should be directed to a special 1405 address. (Systems that provide convenient ways for users to alter 1406 these fields on a per-message basis should attempt to establish a 1407 primary and permanent mailbox address for the user so that Sender 1408 fields within the message data can be generated sensibly.) 1410 This specification does not further address the authentication issues 1411 associated with SMTP other than to advocate that useful functionality 1412 not be disabled in the hope of providing some small margin of 1413 protection against an ignorant user who is trying to fake mail. 1415 NOTE: We have added additional material on communications security 1416 and SMTP in Section 6.1.2. In a final specification, the above text 1417 would be edited somewhat to reflect that fact. 1419 6.1.1.2. Blind Copies 1421 Addresses that do not appear in the message headers may appear in the 1422 RCPT commands to an SMTP server for a number of reasons. The two 1423 most common involve the use of a mailing address as a "list exploder" 1424 (a single address that resolves into multiple addresses) and the 1425 appearance of "blind copies".
Especially when more than one RCPT 1426 command is present, and in order to avoid defeating some of the 1427 purpose of these mechanisms, SMTP clients and servers SHOULD NOT copy 1428 the full set of RCPT command arguments into the headers, either as 1429 part of trace headers or as informational or private-extension 1430 headers. Since this rule is often violated in practice, and cannot 1431 be enforced, sending SMTP systems that are aware of "bcc" use MAY 1432 find it helpful to send each blind copy as a separate message 1433 transaction containing only a single RCPT command. 1435 There is no inherent relationship between either "reverse" (from 1436 MAIL, SAML, etc., commands) or "forward" (RCPT) addresses in the SMTP 1437 transaction ("envelope") and the addresses in the headers. Receiving 1438 systems SHOULD NOT attempt to deduce such relationships and use them 1439 to alter the headers of the message for delivery. The popular 1440 "Apparently-to" header is a violation of this principle as well as a 1441 common source of unintended information disclosure and SHOULD NOT be 1442 used. 1444 6.1.1.3. VRFY, EXPN, and Security 1446 As discussed in Section 3.9.1, individual sites may want to disable 1447 either or both of VRFY or EXPN for security reasons. As a corollary 1448 to the above, implementations that permit this MUST NOT appear to 1449 have verified addresses that are not, in fact, verified. If a site 1450 disables these commands for security reasons, the SMTP server MUST 1451 return a 252 response, rather than a code that could be confused with 1452 successful or unsuccessful verification. 1454 Returning a 250 reply code with the address listed in the VRFY 1455 command after having checked it only for syntax violates this rule. 1456 Of course, an implementation that "supports" VRFY by always returning 1457 550 whether or not the address is valid is equally not in 1458 conformance. 
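The reply-code rules above reduce to a small decision procedure. The handler below is a hypothetical illustration; the function and the mailbox table are inventions of this example, not part of any SMTP implementation:

```python
# Hypothetical VRFY handler illustrating the reply-code rules above: a
# site that disables VRFY answers 252, never a reply that fakes either
# successful or unsuccessful verification.
LOCAL_USERS = {"alice", "bob"}     # illustrative mailbox table

def handle_vrfy(address: str, vrfy_enabled: bool) -> str:
    if not vrfy_enabled:
        # Disabled for policy reasons: neither confirm nor deny.
        return "252 Cannot VRFY user, but will accept message and attempt delivery"
    local_part = address.split("@", 1)[0].lower()
    if local_part in LOCAL_USERS:
        return f"250 <{address}>"      # genuinely verified
    return "550 No such user here"     # genuinely not verified

assert handle_vrfy("alice@example.org", vrfy_enabled=False).startswith("252")
assert handle_vrfy("alice@example.org", vrfy_enabled=True).startswith("250")
assert handle_vrfy("mallory@example.org", vrfy_enabled=True).startswith("550")
```

Returning 250 after a syntax-only check, or 550 unconditionally, would each violate the rule; the 252 path is the only conforming way to disable VRFY while disclosing nothing.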
1460 Within the last few years, the contents of mailing lists have become 1461 popular as an address information source for so-called "spammers." 1462 The use of EXPN to "harvest" addresses has increased as list 1463 administrators have installed protections against inappropriate uses 1464 of the lists themselves. Implementations SHOULD still provide 1465 support for EXPN, but sites SHOULD carefully evaluate the tradeoffs. 1466 As authentication mechanisms are introduced into SMTP, some sites may 1467 choose to make EXPN available only to authenticated requesters. 1469 NOTE: It's not clear that disabling VRFY adds much protection, 1470 since it's often possible to discover whether an address is valid 1471 using RCPT TO. 1473 6.1.1.4. Information Disclosure in Announcements 1475 There has been an ongoing debate about the tradeoffs between the 1476 debugging advantages of announcing server type and version (and, 1477 sometimes, even server domain name) in the greeting response or in 1478 response to the HELP command and the disadvantages of exposing 1479 information that might be useful in a potential hostile attack. The 1480 utility of the debugging information is beyond doubt. Those who 1481 argue for making it available point out that it is far better to 1482 actually secure an SMTP server rather than hope that trying to 1483 conceal known vulnerabilities by hiding the server's precise identity 1484 will provide more protection. Sites are encouraged to evaluate the 1485 tradeoff with that issue in mind; implementations are strongly 1486 encouraged to minimally provide for making type and version 1487 information available in some way to other network hosts. 1489 6.1.1.5. 
Information Disclosure in Trace Fields 1491 In some circumstances, such as when mail originates from within a LAN 1492 whose hosts are not directly on the public Internet, trace 1493 ("Received") fields produced in conformance with this specification 1494 may disclose host names and similar information that would not 1495 normally be available. This ordinarily does not pose a problem, but 1496 sites with special concerns about name disclosure should be aware of 1497 it. Also, the optional FOR clause should be supplied with caution or 1498 not at all when multiple recipients are involved lest it 1499 inadvertently disclose the identities of "blind copy" recipients to 1500 others. 1502 6.1.1.6. Information Disclosure in Message Forwarding 1504 As discussed in Section 3.9, use of the 251 or 551 reply codes to 1505 identify the replacement address associated with a mailbox may 1506 inadvertently disclose sensitive information. Sites that are 1507 concerned about those issues should ensure that they select and 1508 configure servers appropriately. 1510 6.1.1.7. Scope of Operation of SMTP Servers 1512 It is a well-established principle that an SMTP server may refuse to 1513 accept mail for any operational or technical reason that makes sense 1514 to the site providing the server. However, cooperation among sites 1515 and installations makes the Internet possible. If sites take 1516 excessive advantage of the right to reject traffic, the ubiquity of 1517 email availability (one of the strengths of the Internet) will be 1518 threatened; considerable care should be taken and balance maintained 1519 if a site decides to be selective about the traffic it will accept 1520 and process. 1522 In recent years, use of the relay function through arbitrary sites 1523 has been used as part of hostile efforts to hide the actual origins 1524 of mail. 
Some sites have decided to limit the use of the relay 1525 function to known or identifiable sources, and implementations SHOULD 1526 provide the capability to perform this type of filtering. When mail 1527 is rejected for these or other policy reasons, a 550 code SHOULD be 1528 used in response to EHLO, MAIL, or RCPT as appropriate. 1530 6.1.1.8. Inappropriate Usage (NEW) 1532 SMTP itself provides no protection against unsolicited 1533 commercial mass e-mail (aka spam). It is extremely difficult to tell 1534 a priori whether a given message is spam or not. From a protocol 1535 perspective, spam is indistinguishable from other e-mail -- the 1536 distinction is almost entirely social and often quite subtle. (For 1537 instance, is a message from a merchant from whom you've purchased 1538 items before advertising similar items spam?) SMTP spam-suppression 1539 mechanisms are generally limited to identifying known spam senders 1540 and either refusing to service them or targeting them for punishment/ 1541 disconnection. [RFC2505] provides extensive guidance on making SMTP 1542 servers spam-resistant. We provide a brief discussion of the topic 1543 here. 1545 The primary tool for refusal to service spammers is the blacklist. 1546 Some authority such as Mail Abuse Protection System (MAPS) collects 1547 and publishes a list of known spammers. Individual SMTP servers then 1548 block the blacklisted offenders (generally by IP address). 1550 In order to avoid being blacklisted or otherwise identified, spammers 1551 often attempt to obscure their identity, either simply by sending a 1552 false SMTP identity or by forwarding their mail through an Open Relay 1553 -- an SMTP server which will perform mail relaying for any sender. 1554 As a consequence, there are now blacklists such as Open Relay 1555 Blocking System (ORBS) of open relays as well. 1557 6.1.1.8.1.
Closed Relaying (NEW) 1559 To avoid being used for spam forwarding, many SMTP servers operate as 1560 closed relays, providing relaying service only for clients who they 1561 can identify. Such relays should generally insist that senders 1562 advertise a sending address consistent with their known identity. If 1563 the relay is providing service for an identifiable network (such as a 1564 corporate network or an ISP's network) then it is sufficient to block 1565 all other IP addresses. In other cases, explicit authentication 1566 must be used. The two standard choices for this are TLS through 1567 STARTTLS [RFC3207] and SASL [RFC4954]. 1569 6.1.1.8.2. Endpoints (NEW) 1571 Realistically, SMTP endpoints cannot refuse service to 1572 unauthenticated senders. Since the vast majority of senders are 1573 unauthenticated, this would break Internet mail interoperability. 1574 The exception to this is when the endpoint server should only be 1575 receiving mail from some other server which can itself receive 1576 unauthenticated messages. For instance, a company might operate a 1577 public gateway but configure its internal servers to only talk to the 1578 gateway. 1580 6.1.2. Communications security issues (NEW) 1582 SMTP itself provides no communications security, and therefore a 1583 large number of attacks are possible. A passive attack is sufficient 1584 to recover the text of messages transmitted with SMTP. No endpoint 1585 authentication is provided by the protocol. Sender spoofing is 1586 trivial, and therefore forging email messages is trivial. Some 1587 implementations do add header lines with hostnames derived through 1588 reverse name resolution (which is only secure to the extent that it 1589 is difficult to spoof DNS -- not very), although these header lines 1590 are normally not displayed to users. Receiver spoofing is also 1591 fairly straight-forward, either using TCP connection hijacking or DNS 1592 spoofing.
Moreover, since email messages often pass through SMTP 1593 gateways, all intermediate gateways must be trusted, a condition 1594 nearly impossible on the global Internet. 1596 Several approaches are available for alleviating these threats. In 1597 order of increasingly high level in the protocol stack, we have: 1599 o SMTP over IPSEC 1601 o SMTP/TLS 1603 o S/MIME and PGP/MIME 1605 6.1.2.1. SMTP over IPSEC (NEW) 1607 An SMTP connection run over IPSEC can provide confidentiality for the 1608 message between the sender and the first hop SMTP gateway, or between 1609 any pair of connected SMTP gateways. That is to say, it provides 1610 channel security for the SMTP connections. In a situation where the 1611 message goes directly from the client to the receiver's gateway, this 1612 may provide substantial security (though the receiver must still 1613 trust the gateway). Protection is provided against replay attacks, 1614 since the data itself is protected and the packets cannot be 1615 replayed. 1617 Endpoint identification is a problem, however, unless the receiver's 1618 address can be directly cryptographically authenticated. Sender 1619 identification is not generally available, since generally only the 1620 sender's machine is authenticated, not the sender himself. 1621 Furthermore, the identity of the sender simply appears in the From 1622 header of the message, so it is easily spoofable by the sender. 1623 Finally, unless the security policy is set extremely strictly, there 1624 is also an active downgrade to cleartext attack. 1626 Another problem with IPsec as a security solution for SMTP is the 1627 lack of a standard IPsec API. In order to take advantage of IPsec, 1628 applications in general need to be able to instruct the IPsec 1629 implementation about their security policies and discover what 1630 protection has been applied to their connections. Without a standard 1631 API this is very difficult to do portably. 
1633 Implementors of SMTP servers or SMTP administrators MUST NOT assume 1634 that IPsec will be available unless they have reason to believe that 1635 it will be (such as the existence of preexisting association between 1636 two machines). However, it may be a reasonable procedure to attempt 1637 to create an IPsec association opportunistically to a peer server 1638 when mail is delivered. Note that in cases where IPsec is used to 1639 provide a VPN tunnel between two sites, this is of substantial 1640 security value, particularly to the extent that confidentiality is 1641 provided, subject to the caveats mentioned above. Also see USEIPSEC 1642 [RFC5406] for general guidance on the applicability of IPsec. 1644 6.1.2.2. SMTP/TLS (NEW) 1646 SMTP can be combined with TLS as described in STARTTLS [RFC3207]. 1647 This provides similar protection to that provided when using IPsec. 1648 Since TLS certificates typically contain the server's host name, 1649 recipient authentication may be slightly more obvious, but is still 1650 susceptible to DNS spoofing attacks. Notably, common implementations 1651 of TLS contain a US exportable (and hence low security) mode. 1652 Applications desiring high security should ensure that this mode is 1653 disabled. Protection is provided against replay attacks, since the 1654 data itself is protected and the packets cannot be replayed. [Note: 1655 The Security Considerations section of the SMTP over TLS document is 1656 quite good and bears reading as an example of how to do things.] 1658 6.1.2.3. S/MIME and PGP/MIME (NEW) 1660 S/MIME and PGP/MIME are both message oriented security protocols. 1661 They provide object security for individual messages. With various 1662 settings, sender and recipient authentication and confidentiality may 1663 be provided. More importantly, the identification is not of the 1664 sending and receiving machines, but rather of the sender and 1665 recipient themselves. 
(Or, at least, of cryptographic keys 1666 corresponding to the sender and recipient.) Consequently, end-to-end 1667 security may be obtained. Note, however, that no protection is 1668 provided against replay attacks. Note also that S/MIME and PGP/MIME 1669 generally provide identifying marks for both sender and receiver. 1670 Thus even when confidentiality is provided, traffic analysis is still 1671 possible. 1673 6.1.3. Denial of Service (NEW) 1675 None of these security measures provides any real protection against 1676 denial of service. SMTP connections can easily be used to tie up 1677 system resources in a number of ways, including excessive port 1678 consumption, excessive disk usage (email is typically delivered to 1679 disk files), and excessive memory consumption (sendmail, for 1680 instance, is fairly large, and typically forks a new process to deal 1681 with each message.) 1683 If transport- or application-layer security is used for SMTP 1684 connections, it is possible to mount a variety of attacks on 1685 individual connections using forged RSTs or other kinds of packet 1686 injection. 1688 6.2. VRRP 1690 The second example is from VRRP, the Virtual Router Redundancy 1691 Protocol [RFC5798]. We reproduce here the Security Considerations 1692 section from that document (with new section numbers). Our comments 1693 are indented and prefaced with 'NOTE:'. 1695 6.2.1. Security Considerations 1697 VRRP is designed for a range of internetworking environments that may 1698 employ different security policies. The protocol includes several 1699 authentication methods ranging from no authentication and simple 1700 clear text passwords to strong authentication using IP Authentication 1701 with MD5 HMAC. The details on each approach including possible 1702 attacks and recommended environments follow.
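The strongest of these methods rests on a truncated keyed hash. The sketch below shows the core HMAC-MD5-96 computation from [RFC2403] applied to a simplified stand-in for a VRRP advertisement; the key and packet contents are inventions of this example, and the real AH wire format is omitted:

```python
import hashlib
import hmac

KEY = b"example-vrrp-key"       # illustrative shared key

def auth_tag(packet: bytes) -> bytes:
    # HMAC-MD5 truncated to 96 bits, as in [RFC2403].
    return hmac.new(KEY, packet, hashlib.md5).digest()[:12]

def accept(packet: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(auth_tag(packet), tag)

adv = b"VRRP advertisement: vrid=1 priority=200"
assert accept(adv, auth_tag(adv))                   # authentic packet
assert not accept(adv + b" forged", auth_tag(adv))  # modification detected
```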
1704 Independent of any authentication type, VRRP includes a mechanism 1705 (setting TTL=255, checking on receipt) that protects against VRRP 1706 packets being injected from another remote network. This limits most 1707 vulnerabilities to local attacks. 1709 NOTE: The security measures discussed in the following sections 1710 only provide various kinds of authentication. No confidentiality 1711 is provided at all. This should be explicitly described as 1712 outside the scope. 1714 6.2.1.1. No Authentication 1716 The use of this authentication type means that VRRP protocol 1717 exchanges are not authenticated. This type of authentication SHOULD 1718 only be used in environments where there is minimal security risk and 1719 little chance for configuration errors (e.g., two VRRP routers on a 1720 LAN). 1722 6.2.1.2. Simple Text Password 1724 The use of this authentication type means that VRRP protocol 1725 exchanges are authenticated by a simple clear text password. 1727 This type of authentication is useful to protect against accidental 1728 misconfiguration of routers on a LAN. It protects against routers 1729 inadvertently backing up another router. A new router must first be 1730 configured with the correct password before it can run VRRP with 1731 another router. This type of authentication does not protect against 1732 hostile attacks where the password can be learned by a node snooping 1733 VRRP packets on the LAN. The Simple Text Authentication combined 1734 with the TTL check makes it difficult for a VRRP packet to be sent 1735 from another LAN to disrupt VRRP operation. 1737 This type of authentication is RECOMMENDED when there is minimal risk 1738 of nodes on a LAN actively disrupting VRRP operation. If this type 1739 of authentication is used, the user should be aware that this clear 1740 text password is sent frequently, and therefore should not be the 1741 same as any security significant password. 1743 NOTE: This section should be clearer.
The basic point is that no 1744 authentication and Simple Text are only useful for a very limited 1745 threat model, namely that none of the nodes on the local LAN are 1746 hostile. The TTL check prevents hostile nodes off-LAN from posing 1747 as valid nodes, but nothing stops hostile nodes on-LAN from 1748 impersonating authorized nodes. This is not a particularly 1749 realistic threat model in many situations. In particular, it's 1750 extremely brittle: the compromise of any node on the LAN allows 1751 reconfiguration of the VRRP nodes. 1753 6.2.1.3. IP Authentication Header 1755 The use of this authentication type means the VRRP protocol exchanges 1756 are authenticated using the mechanisms defined by the IP 1757 Authentication Header [RFC4302] using HMAC [RFC2403]. This provides 1758 strong protection against configuration errors, replay attacks, and 1759 packet corruption/modification. 1761 This type of authentication is RECOMMENDED when there is limited 1762 control over the administration of nodes on a LAN. While this type 1763 of authentication does protect the operation of VRRP, there are other 1764 types of attacks that may be employed on shared media links (e.g., 1765 generation of bogus ARP replies) which are independent from VRRP and 1766 are not protected. 1768 NOTE: It's a mistake to have AH be a RECOMMENDED in this context. 1769 Since AH is the only mechanism that protects VRRP against attack 1770 from other nodes on the same LAN, it should be a MUST for cases 1771 where there are untrusted nodes on the same network. In any case, 1772 AH should be a MUST implement. 1774 NOTE: There's an important piece of security analysis that's only 1775 hinted at in this document, namely the cost/benefit tradeoff of 1776 VRRP authentication. 1778 [The rest of this section is NEW material] 1780 The threat that VRRP authentication is intended to prevent is an 1781 attacker arranging to be the VRRP master.
This would be done by 1782 joining the group (probably multiple times), gagging the master and 1783 then electing oneself master. Such a node could then direct traffic 1784 in arbitrary undesirable ways. 1786 However, it is not necessary for an attacker to be the VRRP master to 1787 do this. An attacker can do similar kinds of damage to the network 1788 by forging ARP packets or (on switched networks) fooling the switch. 1789 VRRP authentication offers no real protection against these attacks. 1791 Unfortunately, authentication makes VRRP networks very brittle in the 1792 face of misconfiguration. Consider what happens if two nodes are 1793 configured with different passwords. Each will reject messages from 1794 the other and therefore both will attempt to be master. This creates 1795 substantial network instability. 1797 This set of cost/benefit tradeoffs suggests that VRRP authentication 1798 is a bad idea, since the incremental security benefit is marginal but 1799 the incremental risk is high. This judgment should be revisited if 1800 the current set of non-VRRP threats is removed. 1802 7. Acknowledgments 1804 The previous version of this document was heavily based on a note 1805 written by Ran Atkinson in 1997. That note was written after the IAB 1806 Security Workshop held in early 1997, based on input from everyone at 1807 that workshop. Some of the specific text above was taken from Ran's 1808 original document, and some of that text was taken from an email 1809 message written by Fred Baker. The other primary source for that 1810 document was specific comments received from Steve Bellovin. Early 1811 review of that document was done by Lisa Dusseault and Mark 1812 Schertler. Other useful comments were received from Bill Fenner, Ned 1813 Freed, Lawrence Greenfield, Steve Kent, Allison Mankin and Kurt 1814 Zeilenga.
   The previous version of this document was edited by Eric Rescorla
   and Brian Korver with input from the other then-current members of
   the IAB.

8.  IANA Considerations

   This document makes no requests of IANA.

9.  Security Considerations

   This entire document is about security considerations.

10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997.

   [RFC4302]  Kent, S., "IP Authentication Header", RFC 4302,
              DOI 10.17487/RFC4302, December 2005.

   [RFC4033]  Arends, R., Austein, R., Larson, M., Massey, D., and S.
              Rose, "DNS Security Introduction and Requirements",
              RFC 4033, DOI 10.17487/RFC4033, March 2005.

   [RFC2946]  Ts'o, T., "Telnet Data Encryption Option", RFC 2946,
              DOI 10.17487/RFC2946, September 2000.

   [RFC4303]  Kent, S., "IP Encapsulating Security Payload (ESP)",
              RFC 4303, DOI 10.17487/RFC4303, December 2005.

   [RFC2743]  Linn, J., "Generic Security Service Application Program
              Interface Version 2, Update 1", RFC 2743,
              DOI 10.17487/RFC2743, January 2000.

   [RFC7230]  Fielding, R., Ed. and J. Reschke, Ed., "Hypertext
              Transfer Protocol (HTTP/1.1): Message Syntax and
              Routing", RFC 7230, DOI 10.17487/RFC7230, June 2014.

   [RFC2403]  Madson, C. and R. Glenn, "The Use of HMAC-MD5-96 within
              ESP and AH", RFC 2403, DOI 10.17487/RFC2403, November
              1998.

   [RFC4120]  Neuman, C., Yu, T., Hartman, S., and K. Raeburn, "The
              Kerberos Network Authentication Service (V5)", RFC 4120,
              DOI 10.17487/RFC4120, July 2005.

   [RFC2289]  Haller, N., Metz, C., Nesser, P., and M. Straw, "A One-
              Time Password System", STD 61, RFC 2289,
              DOI 10.17487/RFC2289, February 1998.
   [RFC5280]  Cooper, D., Santesson, S., Farrell, S., Boeyen, S.,
              Housley, R., and W. Polk, "Internet X.509 Public Key
              Infrastructure Certificate and Certificate Revocation
              List (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280,
              May 2008.

   [RFC2505]  Lindberg, G., "Anti-Spam Recommendations for SMTP MTAs",
              BCP 30, RFC 2505, DOI 10.17487/RFC2505, February 1999.

   [RFC5231]  Segmuller, W. and B. Leiba, "Sieve Email Filtering:
              Relational Extension", RFC 5231, DOI 10.17487/RFC5231,
              January 2008.

   [RFC4422]  Melnikov, A., Ed. and K. Zeilenga, Ed., "Simple
              Authentication and Security Layer (SASL)", RFC 4422,
              DOI 10.17487/RFC4422, June 2006.

   [RFC4954]  Siemborski, R., Ed. and A. Melnikov, Ed., "SMTP Service
              Extension for Authentication", RFC 4954,
              DOI 10.17487/RFC4954, July 2007.

   [RFC3207]  Hoffman, P., "SMTP Service Extension for Secure SMTP
              over Transport Layer Security", RFC 3207,
              DOI 10.17487/RFC3207, February 2002.

   [RFC5751]  Ramsdell, B. and S. Turner, "Secure/Multipurpose
              Internet Mail Extensions (S/MIME) Version 3.2 Message
              Specification", RFC 5751, DOI 10.17487/RFC5751, January
              2010.

   [RFC0854]  Postel, J. and J. Reynolds, "Telnet Protocol
              Specification", STD 8, RFC 854, DOI 10.17487/RFC0854,
              May 1983.

   [RFC5246]  Dierks, T. and E. Rescorla, "The Transport Layer
              Security (TLS) Protocol Version 1.2", RFC 5246,
              DOI 10.17487/RFC5246, August 2008.

   [RFC2817]  Khare, R. and S. Lawrence, "Upgrading to TLS Within
              HTTP/1.1", RFC 2817, DOI 10.17487/RFC2817, May 2000.

   [RFC5798]  Nadas, S., Ed., "Virtual Router Redundancy Protocol
              (VRRP) Version 3 for IPv4 and IPv6", RFC 5798,
              DOI 10.17487/RFC5798, March 2010.

   [RFC4253]  Ylonen, T. and C. Lonvick, Ed., "The Secure Shell (SSH)
              Transport Layer Protocol", RFC 4253,
              DOI 10.17487/RFC4253, January 2006.

10.2.  Informative References

   [RFC1414]  St. Johns, M. and M. Rose, "Identification MIB",
              RFC 1414, DOI 10.17487/RFC1414, February 1993.

   [RFC1704]  Haller, N. and R. Atkinson, "On Internet
              Authentication", RFC 1704, DOI 10.17487/RFC1704,
              October 1994.

   [RFC3977]  Feather, C., "Network News Transfer Protocol (NNTP)",
              RFC 3977, DOI 10.17487/RFC3977, October 2006.

   [RFC0791]  Postel, J., "Internet Protocol", STD 5, RFC 791,
              DOI 10.17487/RFC0791, September 1981.

   [RFC5322]  Resnick, P., Ed., "Internet Message Format", RFC 5322,
              DOI 10.17487/RFC5322, October 2008.

   [RFC1939]  Myers, J. and M. Rose, "Post Office Protocol -
              Version 3", STD 53, RFC 1939, DOI 10.17487/RFC1939,
              May 1996.

   [RFC5406]  Bellovin, S., "Guidelines for Specifying the Use of
              IPsec Version 2", BCP 146, RFC 5406,
              DOI 10.17487/RFC5406, February 2009.

   [RFC5925]  Touch, J., Mankin, A., and R. Bonica, "The TCP
              Authentication Option", RFC 5925, DOI 10.17487/RFC5925,
              June 2010.

   [RFC2818]  Rescorla, E., "HTTP Over TLS", RFC 2818,
              DOI 10.17487/RFC2818, May 2000.

   [RFC2522]  Karn, P. and W. Simpson, "Photuris: Session-Key
              Management Protocol", RFC 2522, DOI 10.17487/RFC2522,
              March 1999.

   [RFC2693]  Ellison, C., Frantz, B., Lampson, B., Rivest, R.,
              Thomas, B., and T. Ylonen, "SPKI Certificate Theory",
              RFC 2693, DOI 10.17487/RFC2693, September 1999.

   [RFC2660]  Rescorla, E. and A. Schiffman, "The Secure HyperText
              Transfer Protocol", RFC 2660, DOI 10.17487/RFC2660,
              August 1999.

   [RFC7322]  Flanagan, H. and S. Ginoza, "RFC Style Guide", RFC 7322,
              DOI 10.17487/RFC7322, September 2014.

   [RFC1738]  Berners-Lee, T., Masinter, L., and M. McCahill, "Uniform
              Resource Locators (URL)", RFC 1738,
              DOI 10.17487/RFC1738, December 1994.
   [TCPSYN]   "TCP SYN Flooding and IP Spoofing Attacks", CERT
              Advisory CA-1996-21, September 1996.

   [DDOS]     "Denial-Of-Service Tools", CERT Advisory CA-1999-17,
              December 1999.

   [EKE]      Bellovin, S. and M. Merritt, "Encrypted Key Exchange:
              Password-Based Protocols Secure Against Dictionary
              Attacks", Proc. IEEE Symp. on Research in Security and
              Privacy, May 1992.

   [IPSPPROB] Bellovin, S., "Problem Areas for the IP Security
              Protocols", Proceedings of the Sixth Usenix UNIX
              Security Symposium, July 1996.

   [KLEIN]    Klein, D., "Foiling the Cracker: A Survey of and
              Improvements to Password Security", Proceedings of the
              Second Usenix UNIX Security Workshop, August 1990.

   [SEQNUM]   Morris, R., "A Weakness in the 4.2 BSD UNIX TCP/IP
              Software", AT&T Bell Laboratories, CSTR 117, 1985.

   [SOAP]     Box, D., Ehnebuske, D., Kakivaya, G., Layman, A.,
              Mendelsohn, N., Nielsen, H., Thatte, S., and D. Winer,
              "Simple Object Access Protocol (SOAP) 1.1", W3C NOTE
              NOTE-SOAP-20000508, May 2000.

   [SPEKE]    Jablon, D., "The SPEKE Password-Based Key Agreement
              Methods", draft-jablon-speke-02 (work in progress),
              November 2002.

   [SRP]      Wu, T., "The Secure Remote Password Protocol", ISOC
              NDSS Symposium, 1998.

   [WEP]      Borisov, N., Goldberg, I., and D. Wagner, "Intercepting
              Mobile Communications: The Insecurity of 802.11",
              Proceedings of the Seventh Annual International
              Conference on Mobile Computing And Networking, July
              2001.

Authors' Addresses

   Yoav Nir
   Check Point Software Technologies Ltd.
   5 Hasolelim st.
   Tel Aviv 6789735
   Israel

   EMail: ynir.ietf@gmail.com

   Magnus Westerlund
   Ericsson
   Farogatan 6
   Stockholm SE-164 80
   Sweden

   Phone: +46 10 714 82 87
   EMail: magnus.westerlund@ericsson.com