Network Working Group                                   Stephen R. Hanna
Internet Draft                                   Sun Microsystems, Inc.
Expires: July 2002                                          January 2002

                    draft-hanna-zeroconf-seccfg-00.txt

           Configuring Security Parameters in Small Devices

Status of this Memo

This document is an Internet-Draft and is subject to all provisions of
Section 10 of RFC 2026.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups. Note that other groups
may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Abstract

Before a device is installed into a secure network, certain security
parameters (such as keys) must be configured.
This document describes several techniques for establishing these
parameters and analyzes their advantages and disadvantages, considering
especially their suitability for inexpensive devices and inexperienced
operators.

1. Introduction

The IETF's Zero Configuration Networking working group is working on
protocols that allow devices to form a network with little or no
configuration. Unfortunately, securing such a network is problematic.
All proposed security techniques require some configuration. Otherwise,
your new lamp won't know that it should respond to your commands and
not your neighbors'.

The challenge is to make this configuration as easy as possible and
keep the per-device cost low. This document describes several
techniques for establishing security parameters and analyzes their
advantages and disadvantages, especially their suitability for
inexpensive devices and inexperienced operators.

This document does not describe the exact set of security parameters
to be established or how those parameters should be used in a zeroconf
environment. Another document will tackle that problem. But a simple
example of such parameters would be a single symmetric key shared by
all trusted devices on the network and used for encrypting and
authenticating transmissions between those devices.

2. Evaluation Criteria

Several criteria are examined for each configuration technique. For
each criterion, I will rate the schemes on a 1-5 scale, with 1 being
unacceptable and 5 being excellent.

No perfect solution has been found. Each technique provides different
tradeoffs among the various criteria.

2.1. Device Cost

Solutions should increase the cost of the device as little as possible.
Cost is a crucial criterion for consumer goods. My goal is to keep the
incremental cost so low that anyone who creates a device with zeroconf
networking abilities will include security.

2.2. Ease of Use

Solutions should be as easy for the user as possible. A slow, complex,
or cumbersome process is a barrier to use.

2.3. Security

Solutions should be as secure as possible. Security is the point.
See section 3 for a threat analysis.

2.4. Flexibility

Solutions should be flexible, so that they can work with many different
kinds of devices. For instance, a solution that requires the device to
display a number won't work if the device doesn't have a display.

3. Threat Analysis

The goal is to easily configure a device so that it can communicate
securely with other devices on a network. I will only analyze
configuration mechanisms, not the protocols used to communicate among
devices. But it is still important to have a clear understanding of the
threats against which these configuration mechanisms are designed to
protect. I will examine attackers' goals, motivations, and abilities.

3.1. Attacker Goals and Motivations

Attackers may want to read messages sent by the device, such as audio
signals or other monitoring data from sensors. They may also want to
control the device by sending it commands, gaining the ability to open
doors or windows, turn lights on and off, and so on. Or they may just
want to detect homes that have wireless networks and therefore might be
good targets for burglary.

Attackers' motivations may include curiosity, mischief-making,
malicious destruction, or financial gain.

3.2. Ability to Read and Modify Network Data

I assume that attackers can read or modify data sent over the network
and inject arbitrary messages into the network. This is consistent with
networks commonly used in zero configuration environments, such as a
wireless or power line network. Layer 2 encryption can be used to
secure such networks and should be employed when available. However,
some layer 2 encryption systems are easily bypassed or broken.
Also, some networks do not offer layer 2 encryption, and a zero
configuration network can span multiple layer 2 networks, thus
providing a vulnerability.

Even a network completely protected against outside participation can
be vulnerable to these attacks if some of the devices on the network
become compromised or act as gateways to outside networks. In general,
it's best to assume that attackers have access to the network.

3.3. Physical Security

In general, I assume that attackers are not able to violate a device's
physical security. With inexpensive devices, even a brief period of
unmonitored physical access will generally allow a skilled attacker to
compromise a device or replace it with an identical, compromised unit.

In the zero configuration environment, these assumptions may not hold.
Neighbors, workmen, and other partially trusted guests are often
welcomed into one's home with little supervision. Even roommates or
family members can become adversaries at times, perhaps resulting in
"insider attacks". Device security cannot offer a great deal of
protection against such problems. However, some of the configuration
systems noted below provide a bit of protection against physical
attacks, especially against unskilled attackers. I will note such
protection where present.

3.4. Device Vulnerabilities

The attacker may be able to take advantage of bugs and vulnerabilities
in the device software or hardware to compromise the device (for
instance, sending an invalid message that causes a buffer overflow and
allows the attacker to execute arbitrary code). Configuration solutions
cannot offer much protection against such vulnerabilities.

Device manufacturers often include backdoors that their support staff
can use to help customers gain access. Such backdoors can represent a
serious security hole.
If they decide that backdoors are required, manufacturers should design
them so that they require physical access to the device.

3.5. Denial of Service

An attacker who can send data on the network can probably easily jam
the network with packets (or just with noise). Configuration techniques
can't provide much protection against such attacks. However, this sort
of active attack may make it easier to find an attacker.

3.6. Configuration Attack

Most people probably won't ever enable security on their device. An
attacker can mount a clever denial of service attack by configuring
security on the device (or reconfiguring it, if it's already enabled)
so that the user can't access the device. To prevent such attacks,
security should be hard to enable (requiring physical access and
perhaps a password shipped with the device) and easy to disable (if you
have physical access to the device).

Of course, this will make it easy for attackers with physical access to
the device to disable security. But a home controller can quickly
detect this and notify the owner. Requiring physical access to disable
security (pressing a button) is a reasonable compromise. After all,
device manufacturers will not include security features if they are
likely to increase support costs.

3.7. Traffic Analysis

Various attacks based on traffic analysis may be possible. One of the
more obvious is for a burglar to drive around with a wireless
networking antenna, detecting wireless network traffic to decide which
homes are likely to have computers inside. Possible countermeasures
could include using wired networking to make detection more difficult.
Installing a monitored security alarm and purchasing sufficient theft
insurance would also be prudent. In any case, this has little to do
with configuration.

3.8. Summary

Device security typically offers little protection against physical
attacks and insider attacks. However, some protection can be offered by
employing a controller that monitors devices and notifies the owner
when devices change.

Since device security can be used to lock someone out of their devices,
it should be easy to disable device security if you have physical
access to the device. The home controller should detect and log such
changes, in case they were unauthorized.

The primary area where device security can be effective is against
attacks that are mounted remotely (through the network). Such attacks
can be mounted with little risk or cost to the attacker and without
raising the suspicions of the attacked party, especially when a
wireless network is employed. The configuration techniques described in
this document provide a way to configure security parameters (such as
keys) that may be employed in protocols to protect against remote
attacks.

4. Security Configuration Mechanisms

On conventional systems, initial security parameters such as trusted
keys and security policies are usually typed in with a keyboard or
loaded from a floppy disk or other removable storage device such as a
CD-ROM or smart card. Once those initial parameters have been
established, they can be used to securely deliver other things, such as
software and updated security parameters.

Many consumer devices don't have a keyboard or removable storage
device. Adding these things would increase the cost of the device too
much. Some devices have a limited keyboard and display, but entering a
long key into my microwave oven isn't my idea of fun!

Here are some other ways that security parameters can be configured
into consumer devices.

4.1. Secret stored in device during manufacturing

With this solution, a cryptographic secret is stored in the device
during manufacturing. This secret generally cannot be changed. The
secret is also printed as a bar code, which is sealed in a
tamper-evident manner and shipped with the device. When the device is
purchased, the bar code is scanned into a home controller or other
security manager. The security manager can now use this shared secret
to send the security parameters to the device over an untrusted network
(encrypting the data with the secret).

The secret can be transferred to the security manager in many other
ways. It can be printed in a human-readable format and typed on a
keyboard, stored on and read from a mag stripe card or a floppy disk,
etc. In general, I will say that it is transported to the security
manager via a "secure transport mechanism". Including a keyboard,
barcode reader, or removable storage device in the security manager
shouldn't be a big cost problem. You only need one security manager per
home, and it will probably have a keyboard and display to interact with
the user anyway.

The secret could be transmitted from the device to the security manager
over a secure network, such as a dedicated wire, a proximity network
(like infrared or Bluetooth), or by touching electrical contacts.
However, if a secure network is available, it is usually better to
generate the secret on the fly (as described in section 4.2). This
reduces manufacturing cost and eliminates the risk of the user losing
the secret. Therefore, this option is not analyzed any further.

It is not good to just print the secret on the bottom of the device,
or a casual visitor can read it without detection.
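To illustrate the first paragraph of this section, once the security
manager holds the manufactured secret it can protect the security
parameters it sends over the untrusted network. The sketch below is a
rough illustration, not a concrete proposal: all names and message
formats are hypothetical, it derives a purpose-specific key from the
shared secret and (for brevity) only authenticates the payload, where
the scheme above also calls for encryption. A real design would use a
vetted authenticated-encryption construction.

```python
# Sketch: protecting security parameters with the manufactured secret.
# Illustration only; a real design would use vetted authenticated
# encryption rather than this bare HMAC construction.
import hashlib
import hmac

def derive_key(shared_secret: bytes, label: bytes) -> bytes:
    # Derive a purpose-specific key from the manufactured secret.
    # (A full scheme would also derive a separate encryption key.)
    return hmac.new(shared_secret, label, hashlib.sha256).digest()

def protect(shared_secret: bytes, params: bytes) -> bytes:
    mac_key = derive_key(shared_secret, b"mac")
    tag = hmac.new(mac_key, params, hashlib.sha256).digest()
    return params + tag            # payload followed by 32-byte tag

def verify(shared_secret: bytes, message: bytes) -> bytes:
    params, tag = message[:-32], message[-32:]
    mac_key = derive_key(shared_secret, b"mac")
    expected = hmac.new(mac_key, params, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed; reject parameters")
    return params

# The secret as scanned from the bar code (hypothetical value).
secret = bytes.fromhex("00112233445566778899aabbccddeeff")
msg = protect(secret, b"group-key=...;policy=...")
assert verify(secret, msg) == b"group-key=...;policy=..."
```

A device holding a different secret (or receiving a tampered message)
would fail the `verify` step and discard the parameters.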
Putting the secret on a storage mechanism (like paper) that is included
with the device allows the user to keep this copy of the secret in a
safe place, away from prying eyes.

Making the secure transport mechanism tamper-evident (by wrapping it
with a printed piece of plastic or using a sealed piece of paper) makes
it less likely that someone will copy the secret, without the user's
knowledge, before the user receives it. This attack seems unlikely,
especially if the user buys the device from a large store. Therefore, I
conclude that tamper-evidence should only be used if it doesn't add
much to the cost or decrease ease of use much.

Another way to get the secret to the security manager is to have the
security manager download the secret from a central server by providing
the device's serial number or some other identifier. Unfortunately,
such identifiers are usually not secret (or there would be no need to
download the secret!). So it would be easy for an unauthorized party to
find out the serial number and download the secret. So this isn't a
great answer, either.

In order to protect against configuration attacks (where someone turns
on security and the owner doesn't know how to use it), there should be
a switch, button, or other control on the device that enables and
disables security. It should be shipped with security off, since most
people won't want to use security.

Someone with physical access to the device could use the switch to
disable security, but the security manager should be able to detect
this and notify the owner. Why not allow security to be enabled over
the network? Because that would allow a configuration attack to be
mounted without physical access.

4.1.1. Evaluation

Cost

The device will need a place to store the secret.
However, it would already need a place to store the security
parameters, so this probably will not add much to cost.

Including a switch to enable and disable security may add some
significant cost (up to $1). Devices that already have a keyboard can
use a special combination of keys for this, eliminating any incremental
cost.

Manufacturing cost will increase, due to the need to generate the
secret, store it on the device, and place it on the secure transport
mechanism. The cost of the secure transport mechanism must be
considered as well.

And there must be a security manager that knows all of the device
secrets and supports the secure transport mechanism (via a keyboard, a
barcode reader, or some similar mechanism). However, this cost may be
spread across many devices. Also, this device can serve other functions
as well, such as providing a UI for controlling and configuring
devices.

The incremental cost per device will vary. For simple devices, the cost
of the security switch may be significant. But increased manufacturing
costs will probably be the greatest factor.

I'll rate this scheme a 3 for cost.

Ease of Use

Scanning the secret into the security manager should be pretty easy.
But what if the user loses the slip of paper first? Or what if their
security manager breaks? They need to find the slip of paper to scan
into the new security manager. In many households, the slip will be
lost quickly. Flipping the switch to enable security is a minor
irritant compared to this problem.

I'll rate this scheme a 2 for ease of use.

Security

What if someone nasty learns the secret? They might see the slip of
paper, take the security manager, or some such. There's no way to
change the secret. So they will have total control over the device.
And the device's owner may not even know about this.

I'll rate this scheme a 2 for security.
Flexibility

This scheme requires a switch or similar control to enable and disable
security (unless you're willing to live with configuration attacks).
It will be inconvenient for devices that are hard to access, such as a
ceiling light. But enabling security should be a one-time event.
Overall, this scheme gets high marks for flexibility. It should work
with almost any kind of device.

I'll rate this scheme a 4 for flexibility.

4.2. Secret established over a secure network

When the user wants to enable security, she connects the device to a
security manager via a secure network (such as a wire directly
connecting the two). Then she presses a button on the device, which
generates a new secret and sends it to the security manager over the
network. Now the device can be removed from the secure network, and
security parameters can be downloaded securely from the security
manager, using the secret. Security can be disabled by holding down the
button for a longer period of time.

As a further alternative, the secret can be generated by the security
manager instead of the device. This makes more sense, since the
security manager will probably include a good source of entropy. The
device can send a secret request when the security button is pressed,
and the security manager can respond by sending a new secret. This also
means that security could be disabled by disconnecting the device from
the secure network, pressing the button to trigger a secret request,
and noting that no response was received.

What kind of secure network can be used here? Many things will work: a
dedicated wire (like a USB cable), a proximity network (like infrared
or Bluetooth), or physical contact by touching electrical contacts.
Probably the simplest thing to do is to use power line networking.
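The button-triggered exchange described above, with the security
manager generating the secret, can be sketched as follows. Class,
method, and identifier names here are hypothetical, and the "network"
is reduced to a direct call standing in for whatever secure link
connects the two parties:

```python
# Sketch of the secret request/response exchange over a secure link.
# Names and framing are hypothetical; only the flow is illustrated.
import secrets

class SecurityManager:
    def __init__(self):
        self.device_secrets = {}       # device id -> shared secret

    def handle_secret_request(self, device_id: str) -> bytes:
        # Generate a fresh 128-bit secret from a good entropy source,
        # as suggested above (the manager, not the device, has good
        # entropy available).
        secret = secrets.token_bytes(16)
        self.device_secrets[device_id] = secret
        return secret

class Device:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.secret = None             # security disabled until set

    def press_security_button(self, manager: SecurityManager):
        # This request travels only over the secure (e.g., isolated
        # power-line) link, never the general zeroconf network.
        self.secret = manager.handle_secret_request(self.device_id)

manager = SecurityManager()
lamp = Device("lamp-01")
lamp.press_security_button(manager)
assert lamp.secret == manager.device_secrets["lamp-01"]
```

Once the call returns, both sides hold the same secret and the device
can be moved back to the ordinary network, where the secret protects
the download of the remaining security parameters.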
Most devices have a power plug, and the security manager can have a
special socket that's isolated from the rest of the power grid. Devices
that don't have a power plug may have a recharging stand that has a
power plug and could relay communications to the device.

There are a few obvious alternatives to the secure network. The
security manager could display the secret after generating it, and the
user could type that value into the device's keyboard. But that's much
too painful for the user, and it requires the device to have a keyboard
(and a display, to check for typos). If the device generates the secret
and displays it, then the device only needs a display, but the user
still has to type the secret (which will typically be dozens of letters
and numbers). The device could print the secret as a barcode or store
it on a floppy disk or other storage device, but including a printer or
storage device in every toaster and light switch is much too expensive.

4.2.1. Evaluation

Cost

The cost of storing the secret and the cost of the security button
should be the same as with the previous scheme.

The incremental manufacturing cost of the last scheme isn't required
for this one, since there's no need to generate a secret, print it,
etc.

There may be extra cost for including the secure network. However, many
devices that support zero configuration networking will probably
already include support for power line networking, which means that no
extra cost will be incurred.

I'll rate this scheme a 4 for cost.

Ease of Use

Connecting the device to the security manager and pressing a button is
pretty easy.

I'll rate this scheme a 4 for ease of use.

Security

If a secret is compromised, it's easy to generate a new one with this
system.
As with the previous system, anyone with physical access to the device
can disable security with the press of a button. But that's a good
thing, since it makes it easy to recover from configuration attacks.
And it should be easy for the security manager to notice if security is
disabled on a device.

I'll rate this scheme a 4 for security.

Flexibility

With a device that doesn't have a power plug (like a light switch), it
may be difficult to establish a secure network. A secondary cable could
be used, but this will require extra cost, and it may be difficult to
agree on a standard connector and cable type.

I'll rate this scheme a 3 for flexibility.

4.3. Secret established over an insecure network

Wouldn't it be nice if the device and the security manager could
establish a shared secret over the normal zeroconf network? Then there
would be no need to use a secure network, as in the last scheme.

Actually, there is a way to do this, even if nasty folks are listening
in. It's a Diffie-Hellman exchange. Here's a simplified description. A
large prime number p and a generator g are chosen in advance and are
well known. The device and security manager each choose a random number
between 1 and p-2. Call the device's random number a and the security
manager's number b. The device computes g^a mod p and sends this value
to the security manager. The security manager computes g^b mod p and
sends this value to the device. Now each of them can compute g^(ab)
mod p. And no eavesdropper can determine g^(ab) mod p without a *lot*
of work.

The problem with this system is that the device and the security
manager can't be sure they're talking to each other. There might be a
"man-in-the-middle" who has established a different shared secret with
each of them.
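The exchange described above can be sketched in a few lines. The
parameters here are deliberately toy values for illustration: with
p = 23 an eavesdropper could recover the secret instantly, and a real
deployment would use a large, standardized prime group.

```python
# Sketch of the unauthenticated Diffie-Hellman exchange described
# above. Toy parameters; NOT secure as written.
import hashlib
import secrets

p = 23   # toy prime; a real system would use a large well-known prime
g = 5    # generator

# Each party picks a private exponent in [1, p-2].
a = secrets.randbelow(p - 2) + 1   # device's random number
b = secrets.randbelow(p - 2) + 1   # security manager's random number

A = pow(g, a, p)   # device sends g^a mod p to the manager
B = pow(g, b, p)   # manager sends g^b mod p to the device

# Both sides now compute the same value, g^(ab) mod p.
device_secret = pow(B, a, p)
manager_secret = pow(A, b, p)
assert device_secret == manager_secret

# The short check value each side could display for the user to
# compare (a few bits of a hash of the new secret, as suggested below).
check = hashlib.sha256(str(device_secret).encode()).hexdigest()[:4]
```

Note that nothing in the exchange itself detects a man-in-the-middle;
that is exactly the gap the displayed check value is meant to close.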
Or maybe the interloper just impersonated the security manager, and the
real security manager wasn't aware of any exchange. Since the network
is not secure, this sort of thing can happen.

One way to overcome this problem is to have the device and the security
manager hash the newly established secret and display a few bits of
this hash. The user can then compare the displayed values. If they
match, then the secret has been established properly.

This system has several problems. Most important, it requires too much
work from the user. Also, it requires a display on the device. This
isn't practical for light switches and other small or cheap devices.
Finally, computing g^a mod p and g^(ab) mod p in a few seconds requires
lots of compute power (or a p that's too small to be secure). And the
device must have a good-quality random number generator. Much too
expensive for a cheap device.

4.3.1. Evaluation

Cost

Including a display, a high-quality random number generator, and
serious computing power in the device will increase cost of goods
substantially (probably $10 or more).

I'll rate this scheme a 1 for cost.

Ease of Use

Copying down a long string of numbers and letters and comparing them to
another set of numbers and letters isn't easy or pleasant.

I'll rate this scheme a 1 for ease of use.

Security

If the user fails to compare the secrets, a man-in-the-middle attack is
possible. The user has to do this for each device.

I'll rate this scheme a 2 for security.

Flexibility

Small devices may not have space for a display.

I'll rate this scheme a 3 for flexibility.

4.4. Other systems

Many other solutions have been considered and rejected. Here's a brief
description of some of these solutions.

The user could choose a secret and enter it into the device and the
security manager.
But users don't generally choose good cryptographic keys (which must be
very long and truly random numbers). An attacker with a computer could
find a user-chosen key in a fraction of a second.

The security manager could choose a secret and then have the user enter
it into a keypad on the device. But this would not be easy for the
user, and it would require a keypad on the device (and a display, to
check for errors).

The device and/or the security manager could have a public-private key
pair (and perhaps a certificate). But then the public key or
certificate identifier for one must be securely configured into the
other. This is similar to the problem of establishing a shared secret,
above. And public key cryptography requires much more CPU power than
symmetric cryptography.

5. Conclusions

5.1. Configuring Security Parameters

Security parameters can be safely configured with the systems described
above. Several of these systems are practical and add little or no
extra cost per device. Some user configuration is required, but that
seems to be inevitable.

I recommend that a standard be created (maybe by the IETF, maybe by
someone else) that describes the first two schemes listed above (secret
stored in device during manufacturing and secret established over a
secure network) and develops them into a form that can be incorporated
into shipping products in an interoperable manner.

Most devices should probably use the secure network technique, since
this is easier for the user and less error-prone. Devices that do not
have a power plug may want to use the secret stored during
manufacturing technique.

Certainly, there may be other good techniques. I welcome ideas from
others. Please send them to my email address, listed below.

5.2. Zeroconf Security

Currently, each of the zeroconf protocol specifications has its own
security system.
Each of these systems requires its own configuration and adds
complexity. This is not practical. I recommend that the zeroconf
working group create a security specification that either explains how
these several security systems can be automatically bootstrapped using
the mechanisms described above or (preferably) replaces these security
mechanisms with a simpler system, such as IPsec with a single group key
shared by all authorized members of the zeroconf network.

6. Acknowledgments

I would like to thank Erik Guttman, Radia Perlman, and Aiden Williams
for useful discussions on these topics.

Stajano and Anderson's Resurrecting Duckling paper [1] describes the
need to "imprint" a device with security parameters and describes one
solution (electrical contact).

7. Security Considerations

This document is full of security analysis and proposed security
solutions.

8. References

[1] Stajano, F. and R. Anderson, "The Resurrecting Duckling: Security
    Issues for Ad-hoc Wireless Networks", Security Protocols: 7th
    International Workshop Proceedings, Lecture Notes in Computer
    Science, Springer-Verlag, 1999.

9. Author's Address

Stephen R. Hanna
Sun Microsystems, Inc.
One Network Drive
Burlington, MA 01803 USA
Phone: +1.781.442.0166
Email: steve.hanna@sun.com

10. Intellectual Property Statement

Sun Microsystems holds intellectual property rights pertaining to
several of the ideas described in this document.