1 E. Rescorla 2 RTFM, Inc. 3 B. Korver 4 INTERNET-DRAFT Network Alchemy 5 June 2000 (Expires December 2000) 7 Guidelines for Writing RFC Text on Security Considerations 9 Status of this Memo 11 This document is an Internet-Draft and is in full conformance with 12 all provisions of Section 10 of RFC2026. Internet-Drafts are working 13 documents of the Internet Engineering Task Force (IETF), its areas, 14 and its working groups. Note that other groups may also distribute 15 working documents as Internet-Drafts. 17 Internet-Drafts are draft documents valid for a maximum of six months 18 and may be updated, replaced, or obsoleted by other documents at any 19 time. It is inappropriate to use Internet-Drafts as reference mate- 20 rial or to cite them other than as ``work in progress.'' 22 The list of current Internet-Drafts can be accessed at 23 http://www.ietf.org/ietf/1id-abstracts.txt 25 The list of Internet-Draft Shadow Directories can be accessed at 26 http://www.ietf.org/shadow.html. 28 1. Introduction 30 All RFCs are required by [RFC1543] to contain a Security Considera- 31 tions section. The purpose of this is both to encourage document 32 authors to consider security in their designs and to inform the 33 reader of relevant security issues. 
This memo is intended to provide 34 guidance to RFC authors in service of both ends. 36 This document is structured in three parts. The first is a combina- 37 tion security tutorial and definition of common terms; the second is 38 a series of guidelines for writing Security Considerations; the third 39 is a series of examples. 41 2. The Goals of Security 43 Most people speak of security as if it were a single monolithic prop- 44 erty of a protocol or system, but upon reflection that's very clearly 45 not true. Rather, security is a series of related but somewhat inde- 46 pendent properties. Not all of these properties are required for 47 every application. 49 We can loosely divide security goals into those related to protecting 50 communications (COMMUNICATIONS SECURITY) and those relating to pro- 51 tecting systems (SYSTEMS SECURITY). Since communications are carried 52 out by systems and access to systems is through communications chan- 53 nels, these goals obviously interlock, but they can also be indepen- 54 dently provided. 56 2.1. Communications Security 58 Different authors partition the goals of communications security dif- 59 ferently. The partitioning we've found most useful is to divide them 60 into three major categories: CONFIDENTIALITY, DATA INTEGRITY and END- 61 POINT AUTHENTICATION. 63 2.1.1. Confidentiality 65 When most people think of security, they think of CONFIDENTIALITY. 66 Confidentiality means that your data is kept secret from unintended 67 listeners. Usually, these listeners are simply eavesdroppers. When 68 the government taps your phone, that poses a risk to your confiden- 69 tiality. 71 Obviously, if you have secrets, you're concerned that no-one else 72 knows them and so at minimum you want confidentiality. When you see 73 spies in the movies go into the bathroom and turn on all the water to 74 foil bugging, the property they're looking for is confidentiality. 76 2.1.2. Data Integrity 78 The second primary goal is DATA INTEGRITY. 
The basic idea here is 79 that we want to be sure that the data we receive is the same data that the 80 sender sent. In paper-based systems, some data integrity comes auto- 81 matically. When you receive a letter written in pen, you can be fairly 82 certain that no words have been removed by an attacker because pen 83 marks are difficult to remove from paper. However, an attacker could 84 have easily added some marks to the paper and completely changed the 85 meaning of the message. Similarly, it's easy to shorten the page to 86 truncate the message. 88 On the other hand, in the electronic world, since all bits look 89 alike, it's trivial to tamper with messages in transit. You simply 90 remove the message from the wire, copy out the parts you like, add 91 whatever data you want, and generate a new message of your choosing, 92 and the recipient is no wiser. This is the moral equivalent of the 93 attacker taking a letter you wrote, buying some new paper and recopy- 94 ing the message, changing it as he does it. It's just a lot easier to 95 do electronically since all bits look alike. 97 2.1.3. Endpoint Authentication 99 The third property we're concerned with is ENDPOINT AUTHENTICATION. 100 What we mean by this is that we know that one of the endpoints in the 101 communication is the one we intended. Without endpoint authentica- 102 tion, it's very difficult to provide either confidentiality or data 103 integrity. For instance, if we receive a message from Alice, the 104 property of data integrity doesn't do us much good unless we know 105 that it was in fact sent by Alice and not the attacker. Similarly, if 106 we want to send a confidential message to Bob, it's not of much value 107 to us if we're actually sending a confidential message to the 108 attacker. 110 Note that endpoint authentication can be provided asymmetrically. 
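In cryptographic protocols, the tamper-detection property described above is typically provided by a message authentication code (MAC), which also ties the data to whoever holds the key. A minimal sketch using Python's standard hmac module (the key and messages here are illustrative, not from any particular protocol):

```python
import hashlib
import hmac

# Shared secret between sender and receiver (illustrative value).
key = b"shared-secret-key"

def protect(message: bytes):
    """Sender: attach a MAC so tampering is detectable."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message, tag

def verify(message: bytes, tag: bytes) -> bool:
    """Receiver: recompute the MAC and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = protect(b"pay Bob $10")
assert verify(msg, tag)                 # unmodified message verifies
assert not verify(b"pay Eve $10", tag)  # modified message is detected
```

Since all bits look alike, only the MAC distinguishes the genuine message from a forgery; note also that the check is only meaningful relative to the key holder, which is how data integrity and endpoint authentication intertwine.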
111 When you call someone on the phone, you can be fairly certain that 112 you have the right person -- or at least that you got a person who's 113 actually at the phone number you called. On the other hand, without 114 caller ID, the receiver of a phone call has no idea 115 who's calling. Calling someone on the phone is an example of 116 recipient authentication, since you know who the recipient of the 117 call is, but they don't know anything about the sender. 119 On the other hand, cash is an example of sender authentication. A 120 dollar bill is like a message signed by the government. The govern- 121 ment has no idea who's got any given dollar bill but you can be 122 fairly certain that any bill was actually printed by the US Treasury 123 because currency is difficult to forge. 125 2.2. Systems Security 127 In general, systems security is concerned with protecting one's 128 machines and data. The intent is that machines should be used only by 129 authorized users and for the purposes that the owners intend. Fur- 130 thermore, they should be available for those purposes. Attackers 131 should not be able to deprive legitimate users of resources. 133 2.2.1. Unauthorized Usage 135 Most systems are not intended to be completely accessible to the pub- 136 lic. Rather, they are intended to be used only by certain authorized 137 individuals. Although many Internet services are available to all 138 Internet users, even those servers generally offer a larger set of 139 services to specific users. For instance, Web servers will often 140 serve data to any user, but restrict the ability to modify pages to 141 specific users. Such modifications by the general public would be 142 UNAUTHORIZED USAGE. 144 2.2.2. Inappropriate Usage 146 Being an authorized user does not mean that you have free run of the 147 system. 
As we said above, some activities are restricted to autho- 148 rized users, some to specific users, and some activities are gener- 149 ally forbidden to all but administrators. Moreover, even activities 150 which are in general permitted might be forbidden in some cases. For 151 instance, users may be permitted to send email but forbidden from 152 sending files above a certain size, or files which contain viruses. 153 These are examples of INAPPROPRIATE USAGE. 155 2.2.3. Denial of Service 157 Recall that our third goal was that the system should be available to 158 legitimate users. A broad variety of attacks are possible which 159 threaten such usage. Such attacks are collectively referred to as 160 DENIAL OF SERVICE attacks. Denial of service attacks are often very 161 easy to mount and difficult to stop. Many such attacks are designed 162 to consume machine resources, making it difficult or impossible to 163 serve legitimate users. Other attacks cause the target machine to 164 crash, completely denying service to users. 166 3. The Internet Threat Model 168 A THREAT MODEL describes the capabilities that an attacker is assumed 169 to be able to deploy against a resource. It should contain such 170 information as the resources available to an attacker in terms of 171 information, computing capability, and control of the system. The 172 purpose of a threat model is twofold. First, we wish to identify the 173 threats we are concerned with. Second, we wish to rule some threats 174 explicitly out of scope. Nearly every security system is vulnerable 175 to a sufficiently dedicated and resourceful attacker. 177 The Internet environment has a fairly well understood threat model. 178 In general, we assume that the end-systems engaging in a protocol 179 exchange have not themselves been compromised. Protecting against an 180 attack when one of the end-systems has been compromised is extraordi- 181 narily difficult. 
It is, however, possible to design protocols which 182 minimize the extent of the damage done under these circumstances. 184 By contrast, we assume that the attacker has nearly complete control 185 of the communications channel over which the end-systems communicate. 186 This means that the attacker can read any PDU (Protocol Data Unit) on 187 the network and undetectably remove, change, or inject forged packets 188 onto the wire. This includes being able to generate packets that 189 appear to be from a trusted machine. Thus, even if the end-system 190 with which you wish to communicate is itself secure, the Internet 191 environment provides no assurance that packets which claim to be from 192 that system in fact are. 194 It's important to realize that the meaning of a PDU is different at 195 different levels. At the IP level, a PDU means an IP packet. At the 196 TCP level, it means a TCP segment. At the application layer, it means 197 some kind of application PDU. For instance, at the level of email, it 198 might either mean an RFC-822 message or a single SMTP command. At the 199 HTTP level, it might mean a request or response. 201 3.1. Limited Threat Models 203 As we've said, a resourceful and dedicated attacker can control the 204 entire communications channel. However, a large number of attacks can 205 be mounted by an attacker with fewer resources. A number of cur- 206 rently known attacks can be mounted by an attacker with limited con- 207 trol of the network. For instance, password sniffing attacks can be 208 mounted by an attacker who can only read arbitrary packets. This is 209 generally referred to as a PASSIVE ATTACK. 211 By contrast, Morris's sequence number guessing attack [REF] can be 212 mounted by an attacker who can write but not read arbitrary packets. 213 Any attack which requires the attacker to write to the network is 214 known as an ACTIVE ATTACK. 
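The read/write distinction can be made concrete with a toy model of an unprotected channel; all of the names and the protocol string below are hypothetical, chosen only to illustrate the taxonomy:

```python
# Toy model of an unprotected network: every PDU on the "wire" is
# visible to anyone (passive attack = read), and anyone can append
# a PDU with any source address they like (active attack = write).
class Channel:
    def __init__(self):
        self.wire = []  # packets in transit, visible to all parties

    def send(self, src, dst, payload):
        # Nothing authenticates 'src' -- spoofing is trivial.
        self.wire.append((src, dst, payload))

ch = Channel()
ch.send("client", "server", "LOGIN alice hunter2")

# Passive attack: read-only access suffices to capture the credential.
captured = next(p for (src, dst, p) in ch.wire if dst == "server")

# Active attack: the attacker writes a forged PDU, claiming the
# client's identity, and replays the captured credential.
ch.send("client", "server", captured)
```

A purely passive attacker only ever reads `ch.wire`; an active attacker also calls `send()`; and a blind attacker calls `send()` without being able to read the responses.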
216 Thus, a useful way of organizing attacks is to divide them based on 217 the capabilities required to mount the attack. The rest of this sec- 218 tion describes these categories and provides some examples of each 219 category. 221 3.2. Passive Attacks 223 In a passive attack, the attacker reads packets off the network but 224 does not write them. The simplest way to mount such an attack is to 225 simply be on the same LAN as the victim. On most common LAN configu- 226 rations, including Ethernet, 802.3, and FDDI, any machine on the wire 227 can read all traffic destined for any other machine on the same LAN. 228 Note that switching hubs make this sort of sniffing substantially 229 more difficult, since traffic destined for a machine only goes to the 230 network segment which that machine is on. 232 Similarly, an attacker who has control of a host in the communica- 233 tions path between two victim machines is able to mount a passive 234 attack on their communications. It is also possible to compromise 235 the routing infrastructure to specifically arrange that traffic 236 passes through a compromised machine. This might involve an active 237 attack on the routing infrastructure to facilitate a passive attack 238 on a victim machine. 240 Wireless communications channels deserve special consideration. 241 Since the data is simply broadcast on well-known radio frequencies, 242 an attacker simply needs to be able to receive those transmissions. 243 Such channels are especially vulnerable to passive attacks. 245 In general, the goal of a passive attack is to obtain information 246 which the sender and receiver would rather keep private. Examples 247 of such information include credentials useful in the electronic 248 world, such as passwords, and information useful in the outside world, 249 such as confidential business information. 251 3.2.1. Privacy Violations 253 The classic example of a passive attack is sniffing some inherently 254 private data off of the wire. 
For instance, despite the wide avail- 255 ability of SSL, many credit card transactions still traverse the 256 Internet in the clear. An attacker could sniff such a message and 257 recover the credit card number, which can then be used to make fraud- 258 ulent transactions. Moreover, confidential business information is 259 routinely transmitted over the network in the clear in email. 261 3.2.2. Password Sniffing 263 Another example of a passive attack is PASSWORD SNIFFING. Password 264 sniffing is directed towards obtaining unauthorized use of resources. 265 Many protocols, including TELNET [REF], POP [REF], and NNTP [REF], 266 use a shared password to authenticate the client to the server. Fre- 267 quently, this password is transmitted from the client to the server 268 in the clear over the communications channel. An attacker who can 269 read this traffic can therefore capture the password and REPLAY it. 270 That is to say, he can initiate a connection to the server, 271 pose as the client, and log in using the captured password. 273 Note that although the login phase of the attack is active, the 274 actual password capture phase is passive. Moreover, unless the server 275 checks the originating address of connections, the login phase does 276 not require any special control of the network. 278 3.2.3. Offline Cryptographic Attacks 280 Many cryptographic protocols are subject to OFFLINE ATTACKS. In such 281 a protocol, the attacker recovers data which has been processed using 282 the victim's secret key and then mounts a cryptanalytic attack on 283 that key. Passwords make a particularly vulnerable target because 284 they typically have low entropy. A number of popular password-based 285 challenge-response protocols are vulnerable to DICTIONARY SEARCH. The 286 attacker captures a challenge-response pair and then proceeds to try 287 entries from a list of common words (such as a dictionary file) until 288 he finds a password that produces the right response. 
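The dictionary search described above can be sketched as follows. The response function (a MAC over the challenge) and the word list are illustrative stand-ins, not any particular protocol:

```python
import hashlib
import hmac

def response(password: str, challenge: bytes) -> bytes:
    # Illustrative challenge-response function: MAC the challenge
    # with the password. Real protocols differ in detail.
    return hmac.new(password.encode(), challenge, hashlib.sha256).digest()

# The attacker passively captures one challenge-response pair.
challenge = b"\x01\x02\x03\x04"
observed = response("tiger", challenge)  # victim's low-entropy password

# Offline dictionary search: try common words until one reproduces
# the observed response. No further interaction with the victim is
# needed, which is what makes the attack offline and passive.
dictionary = ["password", "letmein", "tiger", "dragon"]
recovered = next(w for w in dictionary
                 if response(w, challenge) == observed)
assert recovered == "tiger"
```

Because the search runs entirely on the attacker's own machine, its cost is bounded only by the size of the password space, not by any rate limiting the server might impose.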
290 A similar attack can be mounted on a local network when NIS is 291 used. The Unix password is crypted using a one-way function, but 292 tools exist to break such crypted passwords [REF: Crack]. When NIS 293 is used, the crypted password is transmitted over the local network 294 and an attacker can thus sniff the password and attack it. 296 Historically, it has also been possible to exploit small operating 297 system security holes to recover the password file using an active 298 attack. Access to the password file can then be bootstrapped into access 299 to an actual account by using the aforementioned offline password 300 recovery techniques. Thus we combine a low-level active attack with an 301 offline passive attack. 303 3.3. Active Attacks 305 When an attack involves writing data to the network, we refer to this 306 as an ACTIVE ATTACK. When IP is used without IPSEC, there is no 307 authentication for the sender address. As a consequence, it's 308 straightforward for an attacker to create a packet with a source 309 address of his choosing. We'll refer to this as a SPOOFING ATTACK. 311 Under certain circumstances, such a packet may be screened out by the 312 network. For instance, many packet filtering firewalls screen out all 313 packets with source addresses on the INTERNAL network that arrive on 314 the EXTERNAL interface. Note, however, that this provides no protec- 315 tion against an attacker who is inside the firewall. In general, 316 designers should assume that attackers can forge packets. 318 However, the ability to forge packets does not go hand in hand with 319 the ability to receive arbitrary packets. In fact, there are active 320 attacks that involve being able to send forged packets but not 321 receive the responses. We'll refer to these as BLIND ATTACKS. 323 Note that not all active attacks require forging addresses. For 324 instance, the TCP SYN denial of service attack [REF] can be mounted 325 successfully without disguising the sender's address. 
However, it is 326 common practice to disguise one's address in order to conceal one's 327 identity if an attack is discovered. 329 Each protocol is susceptible to specific active attacks, but experi- 330 ence shows that a number of common patterns of attack can be adapted 331 to any given protocol. The next sections describe a number of these 332 patterns and give specific examples of them as applied to known pro- 333 tocols. 335 3.3.1. Replay Attacks 337 In a REPLAY ATTACK, the attacker records a sequence of messages off 338 of the wire and plays them back to the party which originally 339 received them. Note that the attacker does not need to be able to 340 understand the messages. He merely needs to capture and retransmit 341 them. 343 For example, consider the case where an S/MIME message is being used 344 to request some service, such as a credit card purchase or a stock 345 trade. An attacker might wish to have the service executed twice, if 346 only to inconvenience the victim. He could capture the message and 347 replay it, even though he can't read it, causing the transaction to 348 be executed twice. 350 3.3.2. Message Insertion 352 In a MESSAGE INSERTION attack, the attacker forges a message with 353 some chosen set of properties and injects it into the network. Often 354 this message will have a forged source address in order to disguise 355 the identity of the attacker. 357 For example, a denial-of-service attack can be mounted by inserting a 358 series of spurious TCP SYN packets [REF] directed towards the target 359 host. The target host responds with its own SYN and allocates kernel 360 data structures for the new connection. The attacker never completes 361 the 3-way handshake, so the allocated connection endpoints just sit 362 there taking up kernel memory. 
Typical TCP stack implementations only 363 allow some limited number of connections in this "half-open" state 364 and when this limit is reached, no more connections can be initiated, 365 even from legitimate hosts. Note that this attack is a blind attack, 366 since the attacker does not need to process the victim's SYNs. 368 3.3.3. Message Deletion 370 In a MESSAGE DELETION attack, the attacker removes a message from the 371 wire. Morris's sequence number guessing attack [REF] often requires 372 a message deletion attack to be performed successfully. In this blind 373 attack, the host whose address is being forged will receive a spuri- 374 ous TCP SYN packet from the host being attacked. Receipt of this SYN 375 packet generates a RST, which would tear the illegitimate connection 376 down. In order to prevent this host from sending a RST so that the 377 attack can be carried out successfully, Morris describes flooding 378 this host to create queue overflows such that the SYN packet is lost 379 and thus never responded to. 381 3.3.4. Message Modification 383 In a MESSAGE MODIFICATION attack, the attacker removes a message from 384 the wire, modifies it, and reinjects it into the network. This sort 385 of attack is particularly useful if the attacker wants to send some 386 of the data in the message but also wants to change some of it. 388 Consider the case where the attacker wants to attack an order for 389 goods placed over the Internet. He doesn't have the victim's credit 390 card number so he waits for the victim to place the order and then 391 replaces the delivery address (and possibly the goods description) 392 with his own. Note that this particular attack is known as a CUT-AND- 393 PASTE attack since the attacker cuts the credit card number out of 394 the original message and pastes it into the new message. 396 3.3.5. 
Man-In-The-Middle 398 A MAN-IN-THE-MIDDLE attack combines the above techniques in a special 399 form: The attacker subverts the communication stream in order to pose 400 as the sender to the receiver and as the receiver to the sender: 402 What Alice and Bob think: 403 Alice <----------------------------------------------> Bob 405 What's happening: 406 Alice <----------------> Attacker <----------------> Bob 408 This differs fundamentally from the above forms of attack because it 409 attacks the identity of the communicating parties, rather than the 410 data stream itself. Consequently, many techniques which provide 411 integrity of the communications stream are insufficient to protect 412 against man-in-the-middle attacks. 414 Man-in-the-middle attacks are possible whenever a protocol lacks 415 MUTUAL ENDPOINT AUTHENTICATION. For instance, if an attacker can 416 hijack the client TCP connection during the TCP handshake (perhaps by 417 responding to the client's SYN before the server does), then the 418 attacker can open another connection to the server and begin a man- 419 in-the-middle attack. 421 4. Common Issues 423 Although each system's security requirements are unique, certain com- 424 mon requirements appear in a number of protocols. Often, when naive proto- 425 col designers are faced with these requirements, they choose an obvi- 426 ous but insecure solution even though better solutions are available. 427 This section describes a number of issues seen in many protocols and 428 the common pieces of security technology that may be useful in 429 addressing them. 431 4.1. User Authentication 433 Essentially every system which wants to control access to its 434 resources needs some way to authenticate users. A nearly uncountable 435 number of such mechanisms have been designed for this purpose. The 436 next several sections describe some of these techniques. [HA94] cov- 437 ers this topic in detail. 439 4.1.1. 
Username/Password 441 The most common access control mechanism is simple USERNAME/PASSWORD. 442 The user provides a username and a reusable password to the host 443 which he wishes to use. This system is vulnerable to a simple passive 444 attack where the attacker sniffs the password off the wire and then 445 initiates a new session, presenting the password. This threat can be 446 mitigated by running the protocol over an encrypted connection such 447 as TLS or IPSEC. Unprotected (plaintext) username/password systems 448 are not acceptable in IETF standards. 450 4.1.2. Challenge Response and One Time Passwords 452 Systems which desire greater security than USERNAME/PASSWORD often 453 employ either a ONE TIME PASSWORD [OTP] scheme or a CHALLENGE- 454 RESPONSE scheme. In a one time password scheme, the user is provided with a 455 list of passwords, which must be used in sequence, one time each. 456 (Often these passwords are generated from some secret key so the user 457 can simply compute the next password in the sequence.) SecureID and 458 DES Gold are variants of this scheme. In a challenge-response scheme, 459 the host and the user share some secret (which often is represented 460 as a password). In order to authenticate the user, the host presents 461 the user with a (randomly generated) challenge. The user computes 462 some function based on the challenge and the secret and provides that 463 to the host, which verifies it. Often this computation is performed 464 in a handheld device, such as a DES Gold card. 466 Both types of scheme provide protection against replay attacks, but are 467 often still vulnerable to an OFFLINE KEYSEARCH ATTACK (a form of pas- 468 sive attack): As previously mentioned, often the one-time password or 469 response is computed from a shared secret. If the attacker knows the 470 function being used, he can simply try all possible shared secrets 471 until he finds one that produces the right output. 
This is made eas- 472 ier if the shared secret is a password, in which case he can mount a 473 DICTIONARY ATTACK--meaning that he tries a list of common words (or 474 strings) rather than just random strings. 476 These systems are also often vulnerable to an active attack. Unless 477 communications security is provided for the entire session, the 478 attacker can simply wait until authentication has been performed and 479 hijack the connection. 481 4.1.3. Certificates 483 A simple approach is to have all users have certificates [PKIX] which 484 they then use to authenticate in some protocol-specific way, as in 485 [TLS] or [S/MIME]. The primary obstacle to this approach in client- 486 server type systems is that it requires clients to have certificates, 487 which can be a deployment problem. 489 4.1.4. Some Uncommon Systems 491 There are ways to do a better job than the schemes mentioned above, 492 but they typically don't add much security unless communications 493 security (at least message integrity) is employed to secure the 494 connection, because otherwise the attacker can merely hijack the con- 495 nection after authentication has been performed. A number of proto- 496 cols ([EKE], [SPEKE], [SRP2]) allow one to securely bootstrap a 497 user's password into a shared key which can be used as input to a 498 cryptographic protocol. Similarly, the user can authenticate using 499 public key certificates (e.g., S-HTTP client authentication). Typi- 500 cally these methods are used as part of a more complete security pro- 501 tocol. 503 4.1.5. Host Authentication 505 Host authentication presents a special problem. Quite commonly, the 506 addresses of services are presented using a DNS hostname, for 507 instance as a URL [REF]. When requesting such a service, one has to 508 ensure that the entity that one is talking to not only has a certifi- 509 cate but that that certificate corresponds to the expected identity 510 of the server. 
The important point is to have a secure binding 511 between the certificate and the expected hostname. 513 For instance, it is usually not acceptable for the certificate to 514 contain an identity in the form of an IP address if the request was 515 for a given hostname. This does not provide end-to-end security 516 because the hostname-IP mapping is not secure unless secure DNS [REF] 517 is being used. This is a particular problem when the hostname is pre- 518 sented at the application layer but the authentication is performed 519 at some lower layer. 521 4.2. Authorization vs. Authentication 523 AUTHORIZATION is the process by which one determines whether an 524 authenticated party has permission to access a particular resource or 525 service. Although the two are tightly bound, it is important to realize that 526 authentication and authorization are separate mechanisms. Per- 527 haps because of this tight coupling, authentication is sometimes mis- 528 takenly thought to imply authorization. Authentication simply iden- 529 tifies a party; authorization defines whether they can perform a cer- 530 tain action. 532 Authorization necessarily relies on authentication, but authentica- 533 tion alone does not imply authorization. Rather, before granting 534 permission to perform an action, the authorization mechanism must be 535 consulted to determine whether that action is permitted. 537 4.2.1. Access Control Lists 539 One common form of authorization mechanism is an access control list 540 (ACL) that lists users that are permitted access to a resource. Since 541 assigning individual authorization permissions to each resource is 542 tedious, resources are often hierarchically arranged such that the 543 parent resource's ACL is inherited by child resources. This allows 544 administrators to set top-level policies and override them when nec- 545 essary. 547 4.2.2. 
4.2.2. Certificate Based Systems

While the distinction between authentication and authorization is intuitive when using simple authentication mechanisms such as username and password (i.e., everyone understands the difference between the administrator account and a user account), with more complex authentication mechanisms the distinction is sometimes lost.

With certificates, for instance, presenting a valid signature does not imply authorization. The signature must be backed by a certificate chain that contains a trusted root, and that root must be trusted in the given context. For instance, users who possess certificates issued by the Acme MIS CA may have different web access privileges than users who possess certificates issued by the Acme Accounting CA, even though both of these CAs are "trusted" by the Acme web server.

Mechanisms for enforcing these more complicated properties have not yet been completely explored. One approach is simply to attach policies to ACLs describing what sorts of certificates are trusted. Another approach is to carry that information with the certificate, either as a certificate extension/attribute [PKIX, SPKI] or as a separate "Attribute Certificate".

4.3. Providing Traffic Security

Securely designed protocols should provide some mechanism for securing (meaning integrity protecting, authenticating, and possibly encrypting) all sensitive traffic. One approach is to secure the protocol itself, as in Secure DNS [REF], S/MIME [REF] or S-HTTP [REF]. Although this provides security which is most closely fitted to the protocol, it also requires considerable effort to get right.

Many protocols can be adequately secured using one of the available channel security systems. We'll discuss the two most common, IPsec [REF] and SSL/TLS [REF].
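For a sense of what the channel-security approach looks like to an application, the sketch below prepares a TLS client context using Python's standard ssl module; the actual connection is left commented out since it needs a live peer, and the hostname is a placeholder. Note that a default context already performs the certificate-chain and hostname checks discussed in sections 4.1.3 and 4.1.5.

```python
import socket
import ssl

# A default client context verifies the server's certificate chain
# and checks that the certificate is bound to the requested hostname.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname

def open_secure_channel(host, port=443):
    """Wrap an ordinary TCP connection in TLS (placeholder host/port)."""
    sock = socket.create_connection((host, port))
    return context.wrap_socket(sock, server_hostname=host)

# open_secure_channel("www.example.com")  # not run here: needs network
```

The application-visible change is small: the protocol engine reads and writes the wrapped socket exactly as it would a plain one, which is why channel security systems are often the cheapest way to secure an existing protocol.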
4.3.1. IPSEC

The IPsec protocols (specifically, AH and ESP) can provide transmission security for all traffic between two hosts. The IPsec protocols support varying granularities of user identification, including for example "IP Subnet", "IP Address", "Fully Qualified Domain Name", and individual user ("Mailbox name"). However, a given IPsec implementation might not support all identity types. For example, an encrypting security gateway usually does not have per-user information available, so it can only provide host-to-host or subnet-to-subnet protection.

When AH or ESP is used, the application programmer might not need to do anything (if AH or ESP has been enabled system-wide) or might need to make specific software changes (e.g. adding specific setsockopt() calls), depending on the AH or ESP implementation being used.

The primary difficulty with IPsec is that it is not widely deployed at present. In particular, initial IPsec deployment has largely been in security gateways rather than in hosts. In turn, this means that fine-grained identity information (e.g. user identification) is not yet practical to use in the global Internet. Because AH and ESP are integrated with the Internet Protocol implementation, adding support for AH or ESP generally means that a new operating system version or new kernel version needs to be deployed. This is a more daunting undertaking than simply adding some application to an existing system, so it will take some time for host implementations of AH or ESP to become widespread.

4.3.2. SSL/TLS

The currently most common approach is to use SSL or its successor TLS. They provide channel security for a TCP connection at the application level; that is, they run over TCP. SSL implementations typically provide a Berkeley Sockets-like interface for easy programming. The primary issue when designing a protocol solution around TLS is to differentiate between connections protected using TLS and those which are not. Note that TLS will not operate over UDP.

The two primary approaches used are to have a separate well-known port for TLS connections (e.g. the HTTP over TLS port is 443) [REF] or to have a mechanism for negotiating upward from the base protocol to TLS [REF: SMTP/TLS, HTTP Upgrade]. When an upward negotiation strategy is used, care must be taken to ensure that an attacker cannot force a clear connection when both parties wish to use TLS.

4.3.3. Remote Login

In some special cases it may be worth providing channel-level security directly in the application rather than using IPsec or SSL/TLS. One such case is remote terminal security. Characters are typically delivered from client to server one character at a time. Since SSL/TLS and AH/ESP MAC and encrypt every packet, this can mean a data expansion of 20-fold. The telnet encryption option [REF] prevents this expansion by foregoing message integrity.

When using remote terminal service, it's often desirable to securely perform other sorts of communications services. In addition to providing remote login, SSH [REF] also provides secure port forwarding for arbitrary TCP ports, thus allowing users to run arbitrary TCP-based applications over the SSH channel. Note that this capability also represents a security vulnerability in that it circumvents firewalls and may potentially expose insecure applications to the outside world.

4.4. Denial of Service Attacks and Countermeasures

Denial of service attacks are all too frequently viewed as a fact of life.
One problem is that an attacker can often choose from one of many denial of service attacks to inflict upon a victim, and because most of these attacks cannot be thwarted, common wisdom frequently holds that there is no point protecting against one kind of denial of service attack when there are many others that cannot be prevented.

However, not all denial of service attacks are equal and, more importantly, it is possible to design protocols so that denial of service attacks are made more difficult, if not impractical. Recent SYN flood attacks [REF??] demonstrate both of these properties: SYN flood attacks are so easy, anonymous, and effective that they are more attractive to attackers than other attacks, and it is precisely the design of TCP that enables them.

Authors of Internet standards MUST describe which denial of service attacks their protocol is susceptible to. This description MUST include the reasons it was either unreasonable or out of scope to attempt to avoid these denial of service attacks.

4.4.1. Blind Denial of Service

BLIND denial of service attacks are particularly pernicious. With a blind attack the attacker has a significant advantage. If the attacker must be able to receive traffic from the victim, then he must either subvert the routing fabric or use his own IP address. Either provides an opportunity for the victim to track the attacker and/or filter out his traffic. With a blind attack the attacker can use forged IP addresses, making it extremely difficult for the victim to filter out his packets. The TCP SYN flood attack is an example of a blind attack. Designers should make every attempt possible to prevent blind denial of service attacks.

4.4.2. Avoiding Denial of Service

There are two common approaches to making denial of service attacks more difficult:
4.4.2.1. Make your attacker do more work than you do

If an attacker consumes more of his resources than yours when launching an attack, attackers with fewer resources than you will be unable to launch effective attacks. One common technique is to require that the attacker perform a time-intensive operation, such as a cryptographic operation. Note that an attacker can still mount a denial of service attack if he can muster sufficient CPU power. For instance, this technique would not stop the distributed attacks described in [REF].

4.4.2.2. Make your attacker prove they can receive data from you

A blind attack can be subverted by forcing the attacker to prove that he can receive data from the victim. A common technique is to require that the attacker reply using information that was gained earlier in the message exchange. If this countermeasure is used, the attacker must either use his own address (making him easy to track) or forge an address which will be routed back along a path that traverses the host from which the attack is being launched.

Hosts on small subnets are thus useless to the attacker (at least in the context of a spoofing attack) because the attack can be traced back to a subnet (which should be sufficient for locating the attacker) so that anti-attack measures can be put into place (for instance, a boundary router can be configured to drop all traffic from that subnet).

4.4.3. Example: TCP SYN Floods

TCP/IP is vulnerable to SYN flood attacks (which are described in section 3.3.2) because of the design of the 3-way handshake. First, an attacker can force a victim to consume significant resources (in this case, memory) by sending a single packet.
Second, because the attacker can perform this action without ever having received data from the victim, the attack can be performed anonymously (and therefore using a large number of forged source addresses).

4.4.4. Example: Photuris

Photuris [RFC2522] implements an anti-clogging mechanism that prevents attacks on Photuris that resemble the SYN flood attack. Photuris employs a time-variant secret to generate a "cookie" which is returned to the attacker. This cookie must be returned in subsequent messages for the exchange to progress. The interesting feature is that this cookie can be re-generated by the victim later in the exchange, and thus no state need be retained by the victim until after the attacker has proven that he can receive packets from the victim.

4.5. Object vs. Channel Security

It's useful to make the conceptual distinction between object security and channel security. Object security refers to security measures which apply to entire data objects. Channel security measures provide a secure channel over which objects may be carried transparently, but the channel has no special knowledge about object boundaries.

Consider the case of an email message. When it's carried over an IPSEC or TLS secured connection, the message is protected during transmission. However, it is unprotected in the receiver's mailbox and in intermediate spool files along the way.

By contrast, when an email message is protected with S/MIME or OpenPGP, the entire message is encrypted and integrity protected until it is examined and decrypted by the recipient. This also provides strong authentication of the actual sender, as opposed to the machine the message came from. This is object security. Moreover, the receiver can prove the signed message's authenticity to a third party.
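The distinction can be made concrete with a small sketch: below, an integrity tag is computed over the message object itself, so the protection travels with the object through spool files and mailboxes rather than ending when a connection closes. A symmetric HMAC (with an illustrative key and message) stands in for the real mechanism; S/MIME and OpenPGP use public-key signatures, which is what additionally permits proof to a third party.

```python
import hmac
import hashlib

# Object security (sketch): the protection is attached to the message
# itself. Key and message contents are purely illustrative.
key = b"shared-secret-for-illustration"
message = b"From: alice\r\nTo: bob\r\n\r\nLunch at noon?"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Any later holder of the object (and the key) can re-verify it,
# long after the transport connection is gone:
assert hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).hexdigest())

# Tampering anywhere along the path is detectable:
tampered = message.replace(b"noon", b"1 pm")
assert tag != hmac.new(key, tampered, hashlib.sha256).hexdigest()
```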
Note that the difference between object and channel security is a matter of perspective. Object security at one layer of the protocol stack often looks like channel security at the next layer up. So, from the perspective of the IP layer, each packet looks like an individually secured object. But from the perspective of a web client, IPSEC just provides a secure channel.

The distinction isn't always clear-cut. For example, S-HTTP provides object level security for a single HTTP transaction, but a web page typically consists of multiple HTTP transactions (the base page and numerous inline images). Thus, from the perspective of the total web page, this looks rather more like channel security. Object security for a web page would consist of security for the transitive closure of the page and all its embedded content as a single unit.

4.6. Defending Against Denial of Service

5. Writing Security Considerations Sections

While it is not a requirement that any given protocol or system be immune to all forms of attack, it is still necessary for authors to consider them. Part of the purpose of the Security Considerations section is to explain what attacks are out of scope and what countermeasures can be applied to defend against them.

There should be a clear description of the kinds of threats on the described protocol or technology. This should be approached as an effort to perform "due diligence" in describing all known or foreseeable risks and threats to potential implementers and users.

Authors MUST describe

   1. which attacks are out of scope (and why!)
   2. which attacks are in-scope
      2.1 and the protocol is susceptible to
      2.2 and the protocol protects against

At least the following forms of attack MUST be considered: eavesdropping, replay, message insertion, deletion, modification, and man-in-the-middle.
Potential denial of service attacks MUST be identified as well. If the protocol incorporates cryptographic protection mechanisms, it should be clearly indicated which portions of the data are protected and what the protections are (i.e. integrity only, confidentiality, and/or endpoint authentication, etc.). Some indication should also be given of what sorts of attacks the cryptographic protection is susceptible to. Data which should be held secret (keying material, random seeds, etc.) should be clearly labeled.

If the technology involves authentication, particularly user-host authentication, the security of the authentication method MUST be clearly specified. That is, authors MUST document the assumptions that the security of this authentication method is predicated upon. For instance, in the case of the UNIX username/password login method, a statement to the effect of:

   Authentication in the system is secure only to the extent that it
   is difficult to guess or obtain an ASCII password that is a
   maximum of 8 characters long. These passwords can be obtained by
   sniffing telnet sessions or by running the 'crack' program using
   the contents of the /etc/passwd file. Attempts to protect against
   on-line password guessing by (1) disconnecting after several
   unsuccessful login attempts and (2) waiting between successive
   password prompts are effective only to the extent that attackers
   are impatient.

   Because the /etc/passwd file maps usernames to user ids, groups,
   etc. it must be world readable. In order to permit this usage but
   make running crack more difficult, the file is often split into
   /etc/passwd and a 'shadow' password file. The shadow file is not
   world readable and contains the encrypted password. The regular
   /etc/passwd file contains a dummy password in its place.
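The 'crack' attack mentioned in the example is just the offline dictionary attack of section 4.1.2: given a world-readable password hash, the attacker hashes candidate words until one matches. The sketch below uses salted SHA-256 purely for illustration (real /etc/passwd entries use the crypt() format), with hypothetical salt, password, and dictionary.

```python
import hashlib

def hash_password(salt, password):
    """Illustrative salted hash; real password files use crypt()."""
    return hashlib.sha256(salt + password.encode()).hexdigest()

# What the attacker reads from a world-readable password file:
salt = b"ab"
stolen_hash = hash_password(salt, "sunshine")

def crack(salt, target_hash, words):
    """The dictionary attack: try common words, not random strings."""
    for word in words:
        if hash_password(salt, word) == target_hash:
            return word
    return None

dictionary = ["password", "letmein", "sunshine", "dragon"]
assert crack(salt, stolen_hash, dictionary) == "sunshine"
```

Moving the hash into a non-world-readable shadow file defeats exactly this step: the attacker can no longer obtain target_hash in the first place.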
It is insufficient to simply state that one's protocol should be run over some lower layer security protocol. If a system relies upon lower layer security services for security, the protections those services are expected to provide MUST be clearly specified. In addition, the resultant properties of the combined system need to be specified.

Note: In general, the IESG will not approve standards track protocols which do not provide for strong authentication, either internal to the protocol or through tight binding to a lower layer security protocol.

The threat environment addressed by the Security Considerations section MUST at a minimum include deployment across the global Internet across multiple administrative boundaries without assuming that firewalls are in place, even if only to provide justification for why such consideration is out of scope for the protocol. It is not acceptable to only discuss threats applicable to LANs and ignore the broader threat environment. All IETF standards-track protocols are considered likely to have deployment in the global Internet. In some cases, there might be an Applicability Statement discouraging use of a technology or protocol in a particular environment. Nonetheless, the security issues of broader deployment should be discussed in the document.

There should be a clear description of the residual risk to the user or operator of that protocol after threat mitigation has been deployed. Such risks might arise from compromise in a related protocol (e.g. IPSEC is useless if key management has been compromised), from incorrect implementation, from compromise of the security technology used for risk reduction (e.g. 40-bit DES), or from risks that are not addressed by the protocol specification (e.g. denial of service attacks on an underlying link protocol).
There should also be some discussion of potential security risks arising from potential misapplications of the protocol or technology described in the RFC. This might be coupled with an Applicability Statement for that RFC.

6. Examples

This section consists of some example security considerations sections, intended to give the reader a flavor of what's intended by this document.

The first example is a 'retrospective' example, applying the criteria of this document to a historical document, RFC-821. The second example is a good security considerations section clipped from a current protocol.

6.1. SMTP

When RFC-821 was written, Security Considerations sections were not required in RFCs, and none is contained in that document. Had that document been written today, the Security Considerations section might look something like this:

6.1.1. SMTP Security Considerations

SMTP as-is provides no security precautions of any kind. Protection against all the attacks we are about to describe must be provided by a different protocol layer.

A passive attack is sufficient to recover message text. No endpoint authentication is provided by the protocol. Sender spoofing is trivial, and therefore forging email messages is trivial. Some implementations do add header lines with hostnames derived through reverse name resolution (which is only secure to the extent that it is difficult to spoof DNS -- not very), although these header lines are normally not displayed to users. Receiver spoofing is also fairly straightforward, either using TCP connection hijacking or DNS spoofing. Moreover, since email messages often pass through SMTP gateways, all intermediate gateways must be trusted, a condition nearly impossible on the global Internet.

Several approaches are available for alleviating these threats.
In order of increasingly high level in the protocol stack, we have:

   SMTP over IPSEC
   SMTP/TLS
   S/MIME and PGP/MIME

6.1.1.1. SMTP over IPSEC

An SMTP connection run over IPSEC can provide confidentiality for the message between the sender and the first hop SMTP gateway, or between any pair of connected SMTP gateways. That is to say, it provides channel security for the SMTP connections. In a situation where the message goes directly from the client to the receiver's gateway, this may provide substantial security (though the receiver must still trust the gateway). Protection is provided against replay attacks, since the data itself is protected and the packets cannot be replayed.

Endpoint identification is a problem, however, unless the receiver's address can be directly cryptographically authenticated. No sender identification is available, since the sender's machine is authenticated, not the sender himself. Furthermore, the identity of the sender simply appears in the From header of the message, so it is easily spoofable by the sender. Finally, unless the security policy is set extremely strictly, there is also an active downgrade-to-cleartext attack.

6.1.1.2. SMTP/TLS

SMTP can be combined with TLS as described in [SMTPTLS]. This provides similar protection to that provided when using IPSEC. Since TLS certificates typically contain the server's host name, recipient authentication may be slightly more obvious, but is still susceptible to DNS spoofing attacks. Notably, common implementations of TLS contain a US exportable (and hence low security) mode. Applications desiring high security should ensure that this mode is disabled. Protection is provided against replay attacks, since the data itself is protected and the packets cannot be replayed.
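The upward-negotiation downgrade concern from section 4.3.2 applies here: a client that silently falls back to cleartext when STARTTLS is absent can be forced into the clear by an active attacker who strips the capability from the server's EHLO response. A sketch of the client-side policy check, operating on a hypothetical list of advertised ESMTP keywords (in a real client this would come from the parsed EHLO response, e.g. smtplib's esmtp_features):

```python
def must_use_tls(advertised_extensions):
    """Refuse to proceed in cleartext if STARTTLS is not advertised.

    `advertised_extensions` stands in for the keyword list parsed
    from the server's EHLO response.
    """
    keywords = {ext.split()[0].lower() for ext in advertised_extensions}
    if "starttls" not in keywords:
        raise RuntimeError(
            "server did not offer STARTTLS; possible downgrade attack")
    return True

assert must_use_tls(["SIZE 35882577", "STARTTLS", "8BITMIME"])

try:
    must_use_tls(["SIZE 35882577", "8BITMIME"])
except RuntimeError:
    pass  # correct behavior: refuse to send in the clear
```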
[Note: the Security Considerations section of the SMTP over TLS draft is quite good and bears reading as an example of how to do things.]

6.1.1.3. S/MIME and PGP/MIME

S/MIME and PGP/MIME are both message oriented security protocols. They provide object security for individual messages. With various settings, sender and recipient authentication and confidentiality may be provided. More importantly, the identification is not of the sending and receiving machines, but rather of the sender and recipient themselves (or, at least, of cryptographic keys corresponding to the sender and recipient). Consequently, end-to-end security may be obtained. Note, however, that no protection is provided against replay attacks.

6.1.1.4. Denial of Service

None of these security measures provides any real protection against denial of service. SMTP connections can easily be used to tie up system resources in a number of ways, including excessive port consumption, excessive disk usage (email is typically delivered to disk files), and excessive memory consumption (sendmail, for instance, is fairly large, and typically forks a new process to deal with each message).

6.1.1.5. Inappropriate Usage

In particular, there is no protection provided against unsolicited mass email (aka SPAM).

SMTP also includes several commands which may be used by attackers to explore the machine on which the SMTP server runs. The VRFY command permits an attacker to convert usernames to mailbox names and often real names. This is often useful in mounting a password guessing attack, as many users use their name as their password. EXPN permits an attacker to expand an email list to the names of the subscribers. This may be used to generate a list of legitimate users in order to attack their accounts, as well as to build mailing lists for future SPAM.
Administrators may choose to disable these commands.

6.2. VRRP

The second example is from VRRP, the Virtual Router Redundancy Protocol [RFC2338]. We reproduce here the Security Considerations section from that document (with new section numbers). Our comments are indented and prefaced with 'NOTE:'.

6.2.1. Security Considerations

VRRP is designed for a range of internetworking environments that may employ different security policies. The protocol includes several authentication methods ranging from no authentication, simple clear text passwords, and strong authentication using IP Authentication with MD5 HMAC. The details on each approach including possible attacks and recommended environments follow.

Independent of any authentication type VRRP includes a mechanism (setting TTL=255, checking on receipt) that protects against VRRP packets being injected from another remote network. This limits most vulnerabilities to local attacks.

   NOTE: The security measures discussed in the following sections
   only provide various kinds of authentication. No confidentiality
   is provided at all. This should be explicitly described as
   outside the scope.

6.2.1.1. No Authentication

The use of this authentication type means that VRRP protocol exchanges are not authenticated. This type of authentication SHOULD only be used in environments where there is minimal security risk and little chance for configuration errors (e.g., two VRRP routers on a LAN).

6.2.1.2. Simple Text Password

The use of this authentication type means that VRRP protocol exchanges are authenticated by a simple clear text password.

This type of authentication is useful to protect against accidental misconfiguration of routers on a LAN. It protects against routers inadvertently backing up another router.
A new router must first be configured with the correct password before it can run VRRP with another router. This type of authentication does not protect against hostile attacks where the password can be learned by a node snooping VRRP packets on the LAN. The Simple Text Authentication combined with the TTL check makes it difficult for a VRRP packet to be sent from another LAN to disrupt VRRP operation.

This type of authentication is RECOMMENDED when there is minimal risk of nodes on a LAN actively disrupting VRRP operation. If this type of authentication is used the user should be aware that this clear text password is sent frequently, and therefore should not be the same as any security significant password.

6.2.1.3. IP Authentication Header

The use of this authentication type means the VRRP protocol exchanges are authenticated using the mechanisms defined by the IP Authentication Header [AUTH] using "The Use of HMAC-MD5-96 within ESP and AH" [HMAC]. This provides strong protection against configuration errors, replay attacks, and packet corruption/modification.

This type of authentication is RECOMMENDED when there is limited control over the administration of nodes on a LAN. While this type of authentication does protect the operation of VRRP, there are other types of attacks that may be employed on shared media links (e.g., generation of bogus ARP replies) which are independent from VRRP and are not protected.

   NOTE: Specifically, although securing VRRP prevents unauthorized
   machines from taking part in the election protocol, it does not
   protect hosts on the network from being deceived. For example, a
   gratuitous ARP reply from what purports to be the virtual
   router's IP address can redirect traffic to an unauthorized
   machine.
   Similarly, individual connections can be diverted by means of
   forged ICMP Redirect messages.

Acknowledgments

This document is heavily based on a note written by Ran Atkinson in 1997. That note was written after the IAB Security Workshop held in early 1997, based on input from everyone at that workshop. Some of the specific text above was taken from Ran's original document, and some of that text was taken from an email message written by Fred Baker.

The other primary source for this document is specific comments received from Steve Bellovin. Early review of this document was done by Lisa Dusseault and Mark Schertler.

References

[CA-96.21] "TCP SYN Flooding and IP Spoofing", CERT Advisory CA-96.21,
           ftp://info.cert.org/pub/cert_advisories/CA-96.21.tcp_syn_flooding

[RFC 1704]

The rest are TBD.

Security Considerations

This entire document is about security considerations.

Author's Address

   Eric Rescorla
   RTFM, Inc.
   30 Newell Road #16
   East Palo Alto, CA 94303
   Phone: (650) 328-8631

   Brian Korver
   Network Alchemy
   1538 Pacific Avenue
   Santa Cruz, CA 95060
   Phone: (831) 460-3800

Table of Contents

1. Introduction
2. The Goals of Security
2.1. Communications Security
2.1.1. Confidentiality
2.1.2. Data Integrity
2.1.3. Endpoint authentication
2.2. Systems Security
2.2.1. Unauthorized Usage
2.2.2. Inappropriate Usage
2.2.3. Denial of Service
3. The Internet Threat Model
3.1. Limited Threat Models
3.2. Passive Attacks
3.2.1. Privacy Violations
3.2.2. Password Sniffing
3.2.3. Offline Cryptographic Attacks
3.3. Active Attacks
3.3.1. Replay Attacks
3.3.2. Message Insertion
3.3.3. Message Deletion
3.3.4. Message Modification
3.3.5. Man-In-The-Middle
4. Common Issues
4.1. User Authentication
4.1.1. Username/Password
4.1.2. Challenge Response and One Time Passwords
4.1.3. Certificates
4.1.4. Some Uncommon Systems
4.1.5. Host Authentication
4.2. Authorization vs. Authentication
4.2.1. Access Control Lists
4.2.2. Certificate Based Systems
4.3. Providing Traffic Security
4.3.1. IPSEC
4.3.2. SSL/TLS
4.3.3. Remote Login
4.4. Denial of Service Attacks and Countermeasures
4.4.1. Blind Denial of Service
4.4.2. Avoiding Denial of Service
4.4.2.1. Make your attacker do more work than you do
4.4.2.2. Make your attacker prove they can receive data from you
4.4.3. Example: TCP SYN Floods
4.4.4. Example: Photuris
4.5. Object vs. Channel Security
4.6. Defending Against Denial of Service
5. Writing Security Considerations Sections
6. Examples
6.1. SMTP
6.1.1. SMTP Security Considerations
6.1.1.1. SMTP over IPSEC
6.1.1.2. SMTP/TLS
6.1.1.3. S/MIME and PGP/MIME
6.1.1.4. Denial of Service
6.1.1.5. Inappropriate Usage
6.2. VRRP
6.2.1. Security Considerations
6.2.1.1. No Authentication
6.2.1.2. Simple Text Password
6.2.1.3. IP Authentication Header
Acknowledgments
References
Security Considerations
Author's Address