Network Working Group                                           K. Moore
Internet-Draft                                          Network Heretics
Intended status: Best Current Practice                         R. Barnes
Expires: May 4, 2017                                             Mozilla
                                                                      H.
Tschofenig
                                                             ARM Limited
                                                        October 31, 2016

  Best Current Practices for Securing Internet of Things (IoT) Devices
                   draft-moore-iot-security-bcp-00.txt

Abstract

In recent years, embedded computing devices have increasingly been
provided with Internet interfaces, and the typically weak network
security of such devices has become a challenge for the Internet
infrastructure.  This document lists a number of minimum requirements
that vendors of Internet of Things (IoT) devices need to take into
account during development and when producing firmware updates, in
order to reduce the frequency and severity of security incidents in
which such devices are implicated.

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF).  Note that other groups may also distribute working
documents as Internet-Drafts.  The list of current Internet-Drafts is
at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on May 4, 2017.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.
Code Components extracted from this document must include Simplified
BSD License text as described in Section 4.e of the Trust Legal
Provisions and are provided without warranty as described in the
Simplified BSD License.

Table of Contents

1.  Introduction
  1.1.  Terminology
  1.2.  Note about version -00 of this document
2.  Design Considerations
  2.1.  General security design considerations
    2.1.1.  Threat analysis
    2.1.2.  Use of Standard Cryptographic Algorithms
    2.1.3.  Use of Standard Security Protocols
    2.1.4.  Security protocols should support algorithm agility
  2.2.  Authentication requirements
    2.2.1.  Resistance to keyspace-searching attacks
    2.2.2.  Protection of authentication credentials
    2.2.3.  Resistance to authentication DoS attacks
    2.2.4.  Unauthenticated device use disabled by default
    2.2.5.  Per-device unique authentication credentials
  2.3.  Encryption Requirements
    2.3.1.  Encryption should be supported
    2.3.2.  Opportunistic encryption discouraged
    2.3.3.  Encryption algorithm strength
    2.3.4.  Man in the middle attack
  2.4.  Firmware Updates
    2.4.1.  Automatic update capability
    2.4.2.  Enable automatic firmware update by default
    2.4.3.  Backward compatibility of firmware updates
    2.4.4.  Automatic updates should be phased in
    2.4.5.  Authentication of firmware updates
  2.5.
Private key management
  2.6.  Operating system features
    2.6.1.  Use of memory compartmentalization
    2.6.2.  Privilege minimization
  2.7.  Miscellaneous
3.  Implementation Considerations
  3.1.  Randomness
4.  Firmware Development Practices
5.  Documentation and Support Practices
  5.1.  Support Commitment
  5.2.  Bug Reporting
  5.3.  Labeling
  5.4.  Documentation
6.  Security Considerations
7.  IANA Considerations
8.  Acknowledgements
9.  References
  9.1.  Normative References
  9.2.  Informative References
Authors' Addresses

1.  Introduction

The weak security of Internet of Things devices has resulted in many
well-publicized security incidents over the last few years.
Unfortunately, it appears that very few lessons have been learned from
those incidents.  The rate at which IoT devices are compromised via
network-based attacks appears to be increasing.  The effect of such
security breaches goes far beyond the immediate effect on the
compromised devices and their users.  A compromised device may, for
example, expose to an attacker secrets (such as passwords) stored in
the device.
A compromised device also may be used to attack other computers on the
same local network as the device, or elsewhere on the Internet.
Attackers have constructed networks of compromised devices which have
then been used to attack other network hosts and services, for example
via distributed password-guessing attacks and distributed
denial-of-service (DDoS) attacks [SNMP-DDOS][DDOS-KREBS].  This
document recommends a small number of minimum security requirements to
reduce some of the more easily prevented security problems.

The scope of these recommendations is as follows:

-  The measures described in this document are intended to impede
   network-based attacks.  These measures are not intended to impede
   other kinds of attacks, e.g., those requiring physical access to
   the device, though following these requirements may help reduce the
   effectiveness of some such attacks.  This document does not address
   physical attacks because thwarting such attacks is generally
   outside of the IETF's expertise, and because it is understood that
   the physical security requirements of Internet-connected devices
   vary widely from one application to another.  However, because a
   device compromised by physical means may be used to attack other
   devices, or to obtain information that is useful in attacking other
   devices, it is strongly recommended that vendors of
   Internet-connected devices carefully consider physical security
   requirements when designing their products.

-  In principle these requirements apply to all hosts that connect to
   the Internet, but this list of requirements is specifically
   targeted at devices that are more constrained in their capabilities
   than general-purpose programmable hosts (PCs, servers, laptops,
   tablets, etc.), routers, middleboxes, etc.  While this is a fuzzy
   boundary, it reflects the current understanding of IoT.
A more detailed treatment of some of the constraints of IoT devices
can be found in [RFC7228].

-  These are MINIMUM requirements that apply to all devices.  They are
   unlikely, by themselves, to be sufficient to secure hosts against
   attack.  Because IoT devices are used in a large number of
   different domains with different needs, each device will have its
   own unique security considerations.  It is not feasible to
   completely list all security requirements in a document such as
   this.  Vendors should conduct threat assessments of each device
   they produce, to determine which additional security considerations
   are applicable for use in a given application domain.

-  It is expected that this list of requirements will be revised from
   time to time, as new threats are identified and/or new security
   techniques become feasible.

-  This document makes broad recommendations, but avoids recommending
   specific technological solutions to security issues.  A companion
   document may be produced with suggestions for design choices and
   implementations that may aid in meeting these requirements.

We expect that many of these requirements can easily be met by most
vendors, but meeting them may require additional documentation and
transparency of a vendor's development practices, which in turn can
improve the credibility of the vendor's security practices in the
marketplace.

1.1.  Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in RFC
2119 [RFC2119].  These key words describe normative requirements of
this specification.  This specification also contains non-normative
recommendations that do not use these key words.
This document uses the term "firmware" to refer to the executable code
and associated data that, in combination with device hardware,
implement the functionality of an Internet-connected device.
Traditionally the term "firmware" refers to code and data stored in
non-volatile memory, as distinguished from "software", which
presumably refers to code stored in read/write or erasable memory, or
code that can be loaded from other devices.  For the purpose of this
document, "firmware" applies to any kind of code or data that
implements the functions that the device provides.  Both software and
firmware present similar issues regarding device security, and it is
easier to use "firmware" consistently than to write "software and
firmware".

1.2.  Note about version -00 of this document

The goal for the initial version of this document is to invite
discussion about what minimum security standards for
Internet-connected devices are appropriate.  Consequently, this draft
suggests a wide range of potential measures.  The authors, however,
understand that imposing too many barriers to adoption might
discourage device manufacturers from attempting to comply with this
standard.  We seek to find the right balance that helps improve the
security of the Internet.  We understand that some of the requirements
in this draft may need to be removed or relaxed, at least in an
initial version of a BCP document, and that other requirements may
require additional refinement and justification.

2.  Design Considerations

This section lists requirements and considerations that should affect
the design of an Internet-connected device.  Broadly speaking, such
considerations include device architecture, hardware and firmware
component choices, partitioning of function, and the design and/or
choice of protocols used to communicate with the device.

2.1.
General security design considerations

In general, an Internet-connected device should:

-  Protect itself from attacks that impair its function or allow it to
   be used for unintended purposes without authorization;

-  Protect its private authentication credentials and key material
   from disclosure to unauthorized parties;

-  Protect the information received from the device, transmitted from
   the device, or stored on the device from inappropriate disclosure
   to unauthorized parties; and

-  Protect itself from being used as a vector to attack other devices
   or hosts on the Internet.

Each device is responsible for its own security and for ensuring that
it is not used as a vector for attack on other Internet hosts.  The
design of a device MUST NOT assume that a firewall or other perimeter
security measure will protect the device from attack.  While useful as
part of a layered defense strategy, perimeter security has
consistently been demonstrated to be insufficient to thwart attacks by
itself.  There are nearly always mechanisms by which one or more hosts
on the local network can be compromised, and these hosts can in turn
serve as a means to attack other hosts.  Even "air gapped" networks
have been compromised by portable storage devices or software updates.

For some kinds of attack, there is a limited amount that a device can
do to prevent the attack.  For instance, any device can fall victim to
certain kinds of denial-of-service attack caused by receiving more
traffic in a given amount of time than the device can process.  A
device should be designed to gracefully tolerate some amount of
excessive traffic without failing entirely, but at some point the
device may receive so much traffic that it cannot distinguish valid
requests from invalid ones.

2.1.1.
Threat analysis

The design for a device MUST enumerate the specific security threats
considered in its design, and the specific measures taken (if any) to
remedy or limit the effect of each threat.  This requirement
encourages making deliberate, explicit choices about security measures
at design time rather than leaving security as an afterthought.  Such
a document is also useful later in the life cycle of a device if it
becomes necessary to improve security; for instance, it can help
identify whether the original design choices fulfilled their intended
function or failed to do so, or whether a newly discovered threat was
not anticipated in the original design.

2.1.2.  Use of Standard Cryptographic Algorithms

Standard or well-established, mature algorithms for cryptographic
functions (such as symmetric encryption, public-key encryption,
digital signatures, and cryptographic hash / message integrity checks)
MUST be used.

Explanation: A tremendous amount of subtlety must be understood in
order to construct cryptographic algorithms that are resistant to
attack.  Very few people in the world have the knowledge required to
construct or analyze robust new cryptographic algorithms, and even
then, many knowledgeable people have constructed algorithms that were
found to be flawed within a short time.

2.1.3.  Use of Standard Security Protocols

Standard protocols for authentication, encryption, and other means of
assuring security SHOULD be used whenever apparently robust,
applicable protocols exist.

Explanation: The amount of expertise required to design robust
security protocols is comparable to that required to design robust
cryptographic algorithms.  However, there are sometimes use cases for
which no existing standard protocol may be suitable.
In these cases it may be necessary to adapt an existing protocol to a
new use case, or even to design a new security protocol.

2.1.4.  Security protocols should support algorithm agility

The security protocols chosen for a device design, and the
implementations of those protocols, SHOULD support the ability to
choose between multiple cryptographic algorithms and/or to negotiate
minimum key sizes.

Explanation: This way, if a flaw that weakens the security of one
algorithm is discovered, updated devices, or the application peers
with which they communicate, may refuse to use that algorithm, or may
permit its use only with a longer key than originally required.  This
allows devices and protocol implementations to continue providing
adequate security even after weaknesses in algorithms are discovered.

The concept of crypto agility is further described in [RFC7696].

2.2.  Authentication requirements

The vast majority of Internet-connected devices will require
authentication for some purposes, whether to protect the device from
unauthorized use or reconfiguration, or to protect information stored
within the device from disclosure or modification.  This section
details authentication requirements for devices that require
authentication.

2.2.1.  Resistance to keyspace-searching attacks

A device that requires authentication MUST be designed to make
infeasible any brute-force authentication attacks, dictionary attacks,
or other attacks that involve exhaustive searching of the device's key
or password space.

2.2.2.  Protection of authentication credentials

A device MUST be designed to protect any secrets used to authenticate
to the device (such as passwords or private keys) from disclosure via
monitoring of network traffic to or from the device.
For example, if a password is used to authenticate a client to the
device, that password must not appear "in the clear", nor in any form
from which extraction of the password from network traffic is
computationally feasible.

2.2.3.  Resistance to authentication DoS attacks

A device SHOULD be designed to gracefully tolerate excessive numbers
of authentication attempts, for instance by giving CPU priority to
protocol sessions that have already successfully authenticated,
limiting the number of concurrent new sessions in the process of
authenticating, and randomly discarding attempts to establish new
sessions beyond that limit.  The specific mechanism is a design choice
to be made in light of the specific function of the device and the
protocols used by the device.  What is important for this requirement
is that this be an explicit choice.

2.2.4.  Unauthenticated device use disabled by default

A device that supports authentication SHOULD NOT be shipped in a
condition that allows an unauthenticated client to use any function of
the device that requires authentication, or to change that device's
authentication credentials.

Explanation: Most devices that can be used in an unauthenticated state
will never be configured to require authentication.  These devices are
attractive targets for attack and compromise, especially by botnets.
This is very similar to the problems caused by shipping devices with
default passwords.

2.2.5.  Per-device unique authentication credentials

Many devices that require authentication will be shipped with default
authentication credentials, so that the customer can authenticate to
the device using those credentials until they are changed.  Each
device that requires authentication SHOULD be instantiated, either
prior to shipping or on initial configuration by the user, with
credentials unique to that device.
If a device is not instantiated with device-unique credentials, that
device MUST NOT permit normal operation until those credentials have
been changed to something other than the default credentials.

Explanation: Devices that were shipped with default passwords have
been implicated in several serious denial-of-service attacks on widely
used Internet services.

2.3.  Encryption Requirements

2.3.1.  Encryption should be supported

Internet-connected devices SHOULD support the capability to encrypt
traffic sent to or from the device.  Any information transmitted over
a network is potentially sensitive to some customers.  For example,
even a home temperature monitoring sensor may reveal information about
when occupants are away from home, when they wake up and when they go
to bed, and when and how often they cook meals - all of which are
useful to, say, a thief.

Note: This requirement is separate from the requirement to protect
authentication secrets from disclosure.  Authentication secrets MUST
be protected from disclosure even if a general encryption capability
is not supported, or if the capability is optional and a particular
client or user does not use it.

2.3.2.  Opportunistic encryption discouraged

If a device supports encryption and use of encryption is optional, the
device SHOULD be configurable to require encryption.

2.3.3.  Encryption algorithm strength

Encryption algorithms and minimum key lengths SHOULD be chosen to make
brute-force attacks infeasible.

2.3.4.  Man in the middle attack

Encryption protocols SHOULD be resistant to man-in-the-middle attacks.

2.4.  Firmware Updates

2.4.1.  Automatic update capability

Vendors MUST offer an automatic firmware update mechanism.  A
discussion of firmware update mechanisms can be found in
[I-D.iab-iotsu-workshop].
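As a non-normative illustration, the following Python sketch shows the
shape of an automatic update mechanism that installs only verified
images.  All names are hypothetical, and the HMAC tag (with a
vendor-provisioned key) stands in for a real public-key signature
check purely to keep the sketch self-contained:

```python
import hashlib
import hmac

def install_if_valid(image: bytes, tag: bytes, update_key: bytes) -> bool:
    """Install a firmware image only if its authentication tag verifies.

    In a real device, a public-key signature from the vendor would be
    checked instead of an HMAC tag.
    """
    expected = hmac.new(update_key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # unauthenticated or corrupted update: reject
    # ... write image to the inactive firmware slot and reboot ...
    return True

# Hypothetical usage:
key = b"vendor-provisioned-update-key"
image = b"firmware-image-v2"
good_tag = hmac.new(key, image, hashlib.sha256).digest()
assert install_if_valid(image, good_tag, key)
assert not install_if_valid(image + b"X", good_tag, key)
```

Note that the tampered image is rejected before any installation step
runs, which is the behavior required of firmware update mechanisms in
this document.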
Devices SHOULD be configured to check for the existence of firmware
updates at frequent but irregular intervals.

2.4.2.  Enable automatic firmware update by default

Automatic firmware updates SHOULD be enabled by default.  A device MAY
offer an option to disable automatic firmware updates.  If enabling or
disabling the automatic update feature is controlled by a network
protocol, the device MUST require authentication of any request to
enable or disable it.

2.4.3.  Backward compatibility of firmware updates

Automatic firmware updates SHOULD NOT change network protocol
interfaces in any way that is incompatible with previous versions.  A
vendor MAY offer firmware updates which add new features, as long as
those updates are not automatically initiated.

2.4.4.  Automatic updates should be phased in

To prevent widespread simultaneous failure of all instances of a
particular kind of device due to a bug in a new firmware release,
automatic firmware updates SHOULD be phased in over a short time
interval rather than updating all devices at once.

2.4.5.  Authentication of firmware updates

Firmware updates MUST be authenticated, and the integrity of such
updates assured, before the update is installed.  Unauthenticated
updates, or updates for which authentication or integrity checking
fails, MUST be rejected.

2.5.  Private key management

If public-key cryptography is used in a device's security protocols,
each device MUST be instantiated with its own unique private key or
keys.  In many cases it will be necessary for the vendor to sign such
keys, or arrange for them to be signed by a trusted party, prior to
shipping the device.

Per-device private keys SHOULD be generated on the device and never
exposed outside the device.

2.6.  Operating system features

2.6.1.
Use of memory compartmentalization

Device firmware SHOULD be designed to use hardware and operating
systems that implement memory compartmentalization techniques, in
order to prevent read, write, and/or execute access to areas of memory
by processes not authorized to use those areas for those purposes.

Vendors that do not make use of such features MUST document their
design rationale.

Explanation: Such mechanisms, when properly used, reduce the impact of
a firmware bug, such as a buffer overflow vulnerability.  Operating
systems, or firmware running on "bare metal", that do not provide such
separation allow an attacker to gain access to the complete address
space.  While these concepts have long been available in hardware,
they often are not utilized by real-time operating systems.

2.6.2.  Privilege minimization

Device firmware SHOULD be designed to isolate privileged code and data
from the portions of the firmware that do not need to access them, in
order to minimize the ability of compromised code to access that code
and/or data.

2.7.  Miscellaneous

3.  Implementation Considerations

This section lists implementation requirements that broadly affect the
security of a device.

3.1.  Randomness

Vendors MUST include a solution for generating cryptographic-quality
random numbers in their products.  Randomness is an important
component of security protocols, and without such randomness many of
today's security protocols offer weak or no security protection.
Hardware random-number generators, when feasible, SHOULD be utilized,
and MAY be combined with other sources of randomness.

A discussion of randomness can be found in [RFC4086].

4.  Firmware Development Practices

This section outlines requirements for the development of firmware
that is employed on Internet-connected devices.
Vendors SHOULD use modern firmware development practices, including:

-  Source code change control systems, which record all changes made
   to source code along with the identity of the person who committed
   the change.  Such systems help to identify which versions of code
   contain a particular bug, as well as protect against insertion of
   malicious code.

-  Bug tracking systems.

-  Automated testing of a set of pre-defined test conditions,
   including tests for all security vulnerabilities identified to date
   via either analysis or experience.

-  Periodic checking of bug databases for reported security issues
   associated with the product itself, and with all components (for
   example: kernel, libraries, and protocol servers) used in the
   product.

-  Periodic checking of third party-provided source code and object
   code for security bugs, or for updates intended to thwart security
   bugs.

All known security bugs for which fixes are known MUST be addressed
prior to shipping a new product or a code update.

5.  Documentation and Support Practices

5.1.  Support Commitment

Vendors MUST be transparent, before selling products to their
customers, about their commitment to supply those products with
updates and about what happens to those devices after the support
period finishes.

Within the support period, vendors SHOULD provide firmware updates
whenever new security risks associated with their products are
identified.  Such firmware updates SHOULD NOT change the protocol
interfaces to those products, except as necessary to address security
issues, so that they can be deployed without disruption to customers'
networks.  Firmware updates MAY introduce new features which change
protocol interfaces if those features are optional and disabled by
default.

5.2.  Bug Reporting

Vendors MUST provide an easy-to-find, free-of-charge way to report
security bugs.
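The randomness requirement of Section 3.1 notes that hardware
random-number generators MAY be combined with other sources of
randomness.  As a non-normative sketch of one common way to combine
sources (all names hypothetical; a real device would feed such a seed
into its CSPRNG rather than use the digest directly):

```python
import hashlib
import os

def seed_from_mixed_sources(hw_rng_read) -> bytes:
    """Derive a seed by hashing the platform CSPRNG output together
    with bytes from a (hypothetical) hardware RNG read function.

    The seed remains unpredictable as long as at least one of the two
    sources is unpredictable.
    """
    return hashlib.sha256(os.urandom(32) + hw_rng_read(32)).digest()

# Hypothetical usage; os.urandom stands in for the hardware RNG here:
seed = seed_from_mixed_sources(os.urandom)
assert len(seed) == 32
```

Because the sources are mixed through a hash, a bias or failure in one
source does not by itself weaken the combined output.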
5.3.  Labeling

Vendors MUST have the manufacturer name, model number, and hardware
revision number legibly printed on the device.  This is intended to
help customers with bug reporting.

There SHOULD be a documented means of querying a device for its model
number, hardware revision number, and firmware revision number via its
network interface and/or via any manual input and display.  This
interface MAY require authentication.

5.4.  Documentation

Vendors MUST offer documentation about their products so that security
experts are able to assess the design choices.  While such
documentation will not directly help end customers, who will almost
always lack the expertise to judge these design decisions, it helps
security experts to assess liability, and independent third parties to
compare products, without spending a disproportionate amount of time.

This form of public documentation will improve transparency, similar
to documentation requirements found in other industries.  It will also
help to evolve the best practices described in this document.

6.  Security Considerations

This entire document is about security.

7.  IANA Considerations

This document does not contain any requests to IANA.

8.  Acknowledgements

Add acknowledgements here.

9.  References

9.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997.

[RFC4086]  Eastlake 3rd, D., Schiller, J., and S. Crocker, "Randomness
           Requirements for Security", BCP 106, RFC 4086,
           DOI 10.17487/RFC4086, June 2005.

9.2.  Informative References

[DDOS-KREBS]
           Goodin, D., "Record-breaking DDoS reportedly delivered by
           >145k hacked cameras", September 2016.

[I-D.iab-iotsu-workshop]
           Tschofenig, H. and S.
Farrell, "Report from the Internet of Things (IoT)
           Software Update (IoTSU) Workshop 2016",
           draft-iab-iotsu-workshop-00 (work in progress),
           October 2016.

[RFC7228]  Bormann, C., Ersue, M., and A. Keranen, "Terminology for
           Constrained-Node Networks", RFC 7228, DOI 10.17487/RFC7228,
           May 2014.

[RFC7696]  Housley, R., "Guidelines for Cryptographic Algorithm
           Agility and Selecting Mandatory-to-Implement Algorithms",
           BCP 201, RFC 7696, DOI 10.17487/RFC7696, November 2015.

[SNMP-DDOS]
           BITAG, "SNMP Reflected Amplification DDoS Attack
           Mitigation", August 2012.

Authors' Addresses

Keith Moore
Network Heretics
PO Box 1934
Knoxville, TN  37901
United States

EMail: moore@network-heretics.com

Richard Barnes
Mozilla

EMail: rbarnes@mozilla.com

Hannes Tschofenig
ARM Limited

EMail: hannes.tschofenig@gmx.net