anima Working Group                                          M. Richardson
Internet-Draft                                    Sandelman Software Works
Intended status: Informational                                     J. Yang
Expires: 28 February 2021                    Huawei Technologies Co., Ltd.
                                                             27 August 2020

  A Taxonomy of operational security of manufacturer installed keys and
                                  anchors
             draft-richardson-t2trg-idevid-considerations-01

Abstract

This document provides a taxonomy of the methods by which manufacturers of silicon and devices secure private keys and public trust anchors.  This deals with two related activities: how trust anchors and private keys are installed into devices during manufacturing, and how the related manufacturer-held private keys are secured against disclosure.

This document does not evaluate the different mechanisms, but rather just serves to name them in a consistent manner in order to aid in communication.

RFCEDITOR: please remove this paragraph.
This work is occurring in https://github.com/mcr/idevid-security-considerations

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 28 February 2021.

Copyright Notice

Copyright (c) 2020 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1.  Introduction
  1.1.  Terminology
2.  Applicability Model
  2.1.  A reference manufacturing/boot process
3.  Types of Trust Anchors
  3.1.  Secured First Boot Trust Anchor
  3.2.  Software Update Trust Anchor
  3.3.  Trusted Application Manager anchor
  3.4.  Public WebPKI anchors
  3.5.  DNSSEC root
  3.6.  What else?
4.  Types of Identities
  4.1.  Manufacturer installed IDevID certificates
    4.1.1.  Operational Considerations for Manufacturer IDevID Public Key Infrastructure
    4.1.2.  Key Generation process
5.  Public Key Infrastructures (PKI)
  5.1.  Number of levels of certification authorities
  5.2.  Protection of CA private keys
  5.3.  Supporting provisioned anchors in devices
6.  Evaluation Questions
  6.1.  Integrity and Privacy of on-device data
  6.2.  Integrity and Privacy of device identity infrastructure
  6.3.  Integrity and Privacy of included trust anchors
7.  Privacy Considerations
8.  Security Considerations
9.  IANA Considerations
10.  Acknowledgements
11.  Changelog
12.  References
  12.1.  Normative References
  12.2.  Informative References
Authors' Addresses

1.  Introduction

An increasing number of protocols derive a significant part of their security by using trust anchors [RFC4949] that are installed by manufacturers.  Disclosure of the list of trust anchors does not usually cause a problem, but changing them in any way does.  This includes adding, replacing or deleting anchors.

Many protocols also leverage manufacturer-installed identities.  These identities are usually in the form of [ieee802-1AR] Initial Device Identity certificates (IDevID).  The identity has two components: a private key that must remain under the strict control of a trusted part of the device, and a public part (the certificate), which (ignoring, for the moment, privacy concerns) may be freely disclosed.

There are also situations where identities are tied up in the provisioning of a symmetric shared secret.  A common example is the SIM card ([_3GPP.51.011]); it now also comes as a virtual SIM, which is usually not provisioned at the factory.  The provisioning of an initial, per-device default password also falls into the category of a symmetric shared secret.

It is further not unusual for many devices (particularly smartphones) to also have one or more group identity keys.  These are used, for instance, in [fidotechnote] to make claims about being a particular model of phone (see [I-D.richardson-rats-usecases]).  The key pair that does this is loaded into large batches of phones for privacy reasons.

The trust anchors are used for a variety of purposes.  Trust anchors are used to verify:

*  the signature on a software update (as per [I-D.ietf-suit-architecture]).

*  a TLS Server Certificate, such as when setting up an HTTPS connection.

*  the [RFC8366] format voucher that provides proof of an ownership change.

Device identity keys are used when performing enrollment requests (in [I-D.ietf-anima-bootstrapping-keyinfra], and in some uses of [I-D.ietf-emu-eap-noob]).  The device identity certificate is also used to sign Evidence by an Attesting Environment (see [I-D.ietf-rats-architecture]).

These security artifacts are used to anchor other chains of information: an EAT Claim as to the version of software/firmware running on a device ([I-D.birkholz-suit-coswid-manifest]), an EAT claim about legitimate network activity (via [I-D.birkholz-rats-mud], or embedded in the IDevID in [RFC8520]).  Known software versions lead directly to vendor/distributor-signed Software Bills of Materials (SBOM), such as those described by [I-D.ietf-sacm-coswid], the NTIA SBOM work [ntiasbom], and the CISQ/OMG SBOM work underway [cisqsbom].

In order to manage risks and assess vulnerabilities in a Supply Chain, it is necessary to determine a degree of trustworthiness in each device.  A device may mislead audit systems as to its provenance, about its software load, or even about what kind of device it is (see [RFC7168] for a humorous example).  In order to properly assess the security of a Supply Chain it is necessary to understand the kinds and severity of the threats which a device has been designed to resist.
To do this, it is necessary to understand the ways in which the different trust anchors and identities are initially provisioned, are protected, and are updated.

To that end, this document details the different trust anchors (TrAnc) and identities (IDs) found in typical devices.  The privacy and integrity of the TrAncs and IDs are often provided by a different, superior artifact.  This relationship is examined.

While many might desire to assign numerical values to different mitigation techniques in order to be able to rank them, this document does not attempt to do that, as there are too many other (mostly human) factors that would come into play.  Such an effort is more properly in the purview of a formal ISO9001 process such as ISO14001.

1.1.  Terminology

This document is not a standards track document, and it does not make use of formal requirements language.

This section will be expanded to include needed terminology as required.

The words Trust Anchor are contracted to TrAnc rather than TA, in order to avoid confusion with [I-D.ietf-teep-architecture]'s "Trusted Application".

This document defines a number of hyphenated terms, and they are summarized here:

device-generated:  a private or symmetric key which is generated on the device

infrastructure-generated:  a private or symmetric key which is generated by some system, likely located at the factory that built the device

mechanically-installed:  when a key or certificate is programmed into non-volatile storage by an out-of-band mechanism like JTAG [JTAG]

mechanically-transferred:  when a key or certificate is transferred into a system via a private interface, such as a serial console, a JTAG-managed mailbox, or another physically private interface

network-transferred:  when a key or certificate is transferred into a system using a network interface which would be available after the device has shipped.  This applies even if the network is physically attached using a bed-of-nails.

device/infrastructure-co-generated:  when a private or symmetric key is derived from a secret previously synchronized between the silicon vendor and the factory using a common algorithm.

2.  Applicability Model

There is a wide variety of devices to which this analysis can apply (see [I-D.bormann-lwig-7228bis]).  This document will use a J-group class C13 as a sample.  This class is sufficiently large to experience complex issues among multiple CPUs, packages and operating systems, but at the same time, small enough that this class is often deployed in single-purpose IoT-like uses.  Devices in this class often have Secure Enclaves (such as the "Grapeboard"), and can include silicon manufacturer controlled processors in the boot process (the Raspberry Pi boots under control of the GPU).

Almost all larger systems (servers, laptops, desktops) include a Baseboard Management Controller (BMC), which ranges from an M-Group Class 3 MCU to a J-Group Class 10 CPU (see, for instance, [openbmc], which uses a Linux kernel and system inside the BMC).  As the BMC usually has complete access to the main CPU's memory, I/O hardware and disk, the boot path security of such a system needs to be understood first as being about the security of the BMC.
2.1.  A reference manufacturing/boot process

In order to provide for immutability and privacy of the critical TrAncs and IDs, many CPU manufacturers will provide for some kind of private memory area which is only accessible when the CPU is in certain privileged states.  See the Terminology section of [I-D.ietf-teep-architecture], notably TEE, REE, and TAM, and also section 4, Architecture.

The private memory that is important is usually non-volatile and rather small.  It may be located inside the CPU silicon die, or it may be located externally.  If the memory is external, then it is usually encrypted by a hardware mechanism on the CPU, with only the key kept inside the CPU.

The entire mechanism may be external to the CPU in the form of a hardware-TPM module, or it may be entirely internal to the CPU in the form of a firmware-TPM.  It may use a custom interface to the rest of the system, or it may implement the TPM 1.2 or TPM 2.0 specifications.  Those details are important to performing a full evaluation, but do not matter much to this model (see initial-enclave-location below).

During the manufacturing process, once the components have been soldered to the board, the system is usually put through a system-level test.  This is often done as a "bed-of-nails" test [BedOfNails], where the board has key points attached mechanically to a test system.  A [JTAG] process tests the System Under Test, and then initializes some firmware into the still-empty flash storage.  It is now common for a factory test image to be loaded first: this image will include code to initialize the private memory key described above, and will include a first-stage bootloader and some kind of (primitive) Trusted Application Manager (TAM).  Embedded in the first-stage bootloader will be a Trust Anchor that is able to verify the second-stage bootloader image.

After the system has undergone testing, the factory test image is erased, leaving the first-stage bootloader.  One or more second-stage bootloader images are installed.  The production image may be installed at that time, or if the second-stage bootloader is able to install it over the network, it may be done that way instead.

There are many variations of the above process, and this section is not attempting to be prescriptive, but to provide enough illustration to motivate subsequent terminology.

The process may be entirely automated, it may be entirely driven by humans working in the factory, or it may be a combination of the two.

These steps may all occur on an access-controlled assembly line, or the system boards may be shipped from one place to another (maybe another country) before undergoing testing.

Some systems are intended to be shipped in a tamper-proof state, but it is usually not desirable that bed-of-nails testing be possible without tampering, so the initialization process is usually done prior to rendering the system tamper-proof.

Quality control testing may be done prior to as well as after the application of tamper-proofing, as systems which do not pass inspection may be reworked to fix flaws, and this should ideally be impossible once the system has been made tamper-proof.

3.  Types of Trust Anchors

Trust Anchors are fundamentally public keys.  They are used to validate other digitally signed artifacts.  Typically, these are chains of PKIX certificates leading to an End-Entity certificate (EE).
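As a non-normative illustration of the basic operation, the following Python sketch (using the `cryptography` package; the anchor encoding, artifact and signature names are assumptions made for this example) checks a detached ECDSA signature over an artifact directly against a stored anchor public key:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def artifact_is_trusted(anchor_point: bytes, artifact: bytes, signature: bytes) -> bool:
    """Validate a signed artifact against a trust anchor held as a bare public key.

    anchor_point is assumed to be the SEC1 (compressed) encoding of a P-256
    public key, e.g. as programmed into one-time-programmable storage.
    """
    anchor = ec.EllipticCurvePublicKey.from_encoded_point(ec.SECP256R1(), anchor_point)
    try:
        anchor.verify(signature, artifact, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```

The same anchor could instead terminate a chain of certificates; the sketch only shows the simplest case of validating a target object directly.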
Typically, these are 302 chains of PKIX certificates leading to an End-Entity certificate 303 (EE). 305 The chains are usually presented as part of an externally provided 306 object, with the term "externally" to be understood as being as close 307 as untrusted flash, to as far as objects retrieved over a network. 309 There is no requirement that there be any chain at all: the trust 310 anchor can be used to validate a signature over a target object 311 directly. 313 The trust anchors are often stored in the form of self-signed 314 certificates. The self-signature does not offer any cryptographic 315 assurance, but it does provide a form of error detection, providing 316 verification against non-malicious forms of data corruption. If 317 storage is at a premium (such as inside-CPU non-volatile storage) 318 then only the public key itself need to be stored. For a 256-bit 319 ECDSA key, this is 32 bytes of space. 321 When evaluating the degree of trust for each trust anchor there are 322 four aspects that need to be determined: 324 * can the trust anchor be replaced or modified? 326 * can additional trust anchors be added? 328 * can trust anchors be removed? 330 * how is the private key associated with the trust anchor stored? 332 The first three things are device specific properties of how the 333 integrity of the trust anchor is maintained. 335 The fourth property has nothing to do with the device, but has to do 336 with the reputation and care of the entity that maintains the private 337 key. 339 Different anchors have different purposes implied by the 340 authorization associated with them. 342 These are: 344 3.1. Secured First Boot Trust Anchor 346 This anchor is part of the first-stage boot loader, and it is used to 347 validate a second-stage bootloader which may be stored in external 348 flash. This is called the initial software trust anchor. 350 3.2. Software Update Trust Anchor 352 This anchor is used to validate the main application (or operating 353 system) load for the device. 355 It can be stored in a number of places. First, it may be identical 356 to the Secure Boot Trust Anchor. 358 Second, it may be stored in the second-stage bootloader, and 359 therefore its integrity is protected by the Secured First Boot Trust 360 Anchor. 362 Third, it may be stored in the application code itself, where the 363 application validates updates to the application directly (update in 364 place), or via a double-buffer arrangement. The initial (factory) 365 load of the application code initializes the trust arrangement. 367 In this situation the application code is not in a secured boot 368 situation, as the second-stage bootloader does not validate the 369 application/operating system before starting it, but it may still 370 provide measured boot mechanism. 372 3.3. Trusted Application Manager anchor 374 This anchor is part of a [I-D.ietf-teep-architecture] Trusted 375 Application Manager (TAM). Code which is signed by this anchor will 376 be given execution privileges as described by the manifest which 377 accompanies the code. This privilege may include updating anchors. 379 3.4. Public WebPKI anchors 381 These anchors are used to verify HTTPS certificates from web sites. 382 These anchors are typically distributed as part of desktop browsers, 383 and via desktop operating systems. 
The exact set of these anchors is not precisely defined: it is usually determined by the browser vendor (e.g., Mozilla, Google, Apple, Microsoft), or the operating system vendor (e.g., Apple, Google, Microsoft, Ubuntu).  In most cases these vendors look to the CA/Browser Forum ([CABFORUM]) for inclusion criteria.

3.5.  DNSSEC root

This anchor is part of the DNS Security extensions.  It provides an anchor for securing DNS lookups.  Secure DNS lookups may be important in order to get access to software updates.  This anchor is now scheduled to change approximately every 3 years, with the new key announced several years before it is used, making it possible to embed a key that will be valid for up to five years.

This trust anchor is typically part of the application/operating system code and is usually updated by the manufacturer when they do updates.  However, a system which is connected to the Internet may update the DNSSEC anchor itself through the mechanism described in [RFC5011].

There are concerns that there may be a chicken-and-egg situation for devices that have remained in a powered-off state (or disconnected from the Internet) for some period of years: upon being reconnected, the device would be unable to do DNSSEC validation.  This failure would result in it being unable to obtain operating system updates that would then include the updates to the DNSSEC key.

3.6.  What else?

TBD?

4.  Types of Identities

Identities are installed during manufacturing time for a variety of purposes.

Identities require some private component.  Asymmetric identities (e.g., RSA, ECDSA, EdDSA systems) require a corresponding public component, usually in the form of a certificate signed by a trusted third party.

The process of making this coordinated key pair and then installing it into the device is called identity provisioning.

4.1.  Manufacturer installed IDevID certificates

[ieee802-1AR] defines a category of certificates that are installed by the manufacturer, which contain, at the least, a device-unique serial number (a sketch of issuing such a certificate appears after the list below).

A number of protocols depend upon this certificate:

*  [RFC8572] and [I-D.ietf-anima-bootstrapping-keyinfra] introduce mechanisms for new devices (called pledges) to be onboarded into a network without intervention from an expert operator.  A number of derived protocols such as [I-D.ietf-anima-brski-async-enroll], [I-D.ietf-anima-constrained-voucher], [I-D.richardson-anima-voucher-delegation], and [I-D.friel-anima-brski-cloud] extend this in a number of ways.

*  [I-D.ietf-rats-architecture] depends upon a key provisioned into the Attesting Environment to sign Evidence.

*  [I-D.ietf-suit-architecture] may depend upon a key provisioned into the device in order to decrypt software updates.  Both symmetric and asymmetric keys are possible.  In both cases, the decrypt operation depends upon the device having access to a private key provisioned in advance.  The IDevID can be used for this if algorithm choices permit.  ECDSA keys do not directly support encryption in the same way that RSA does, for instance, but the addition of ECIES can solve this.  There may be other legal considerations why the IDevID might not be used, and a second key provisioned.

*  TBD
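The following non-normative Python sketch (using the `cryptography` package; the manufacturer names, serial-number value and key handling are illustrative assumptions, not requirements of [ieee802-1AR]) shows the general shape of such an IDevID certificate, with the device-unique serial number carried in the Subject:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

ca_key = ec.generate_private_key(ec.SECP256R1())          # stand-in for the manufacturer IDevID CA key
device_pub = ec.generate_private_key(ec.SECP256R1()).public_key()  # the device's public key

idevid = (
    x509.CertificateBuilder()
    .issuer_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "Example Manufacturer IDevID CA"),
    ]))
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Manufacturer"),
        x509.NameAttribute(NameOID.SERIAL_NUMBER, "EX-0123456789"),   # device-unique serial number
    ]))
    .public_key(device_pub)
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime(2020, 8, 27))
    # 802.1AR suggests the GeneralizedTime 99991231235959Z for "no well-defined expiry"
    .not_valid_after(datetime.datetime(9999, 12, 31, 23, 59, 59))
    .sign(ca_key, hashes.SHA256())
)
```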
4.1.1.  Operational Considerations for Manufacturer IDevID Public Key Infrastructure

The manufacturer has the responsibility to provision a key pair into each device as part of the manufacturing process.  There are a variety of mechanisms to accomplish this, of which this document provides an overview.

There are three fundamental ways to generate IDevID certificates for devices:

1.  generating a private key on the device, creating a Certificate Signing Request (or equivalent), and then returning a certificate to the device.

2.  generating a private key outside the device, signing the certificate, and then installing both into the device.

3.  deriving the private key from a previously installed secret seed that is shared with only the manufacturer.

There is a fourth situation where the IDevID is provided as part of a Trusted Platform Module (TPM), in which case the TPM vendor may be making the same tradeoffs.

The document [I-D.moskowitz-ecdsa-pki] provides some practical instructions on setting up a reference implementation for ECDSA keys using a three-tier mechanism.

4.1.2.  Key Generation process

4.1.2.1.  On-device private key generation

Generating the key on-device has the advantage that the private key never leaves the device.  The disadvantage is that the device may not have a verified random number generator.  [factoringrsa] is an example of this scenario.

There are a number of options for how to get the public key securely from the device to the certification authority.

This transmission must be done in an integrity-protected manner, and it must be securely associated with the assigned serial number.  The serial number goes into the certificate, and the resulting certificate needs to be loaded into the manufacturer's asset database.

One way to do the transmission is during a factory Bed of Nails test (see [BedOfNails]) or Boundary Scan.  When done via a physical connection like this, it is referred to as the _device-generated_ / _mechanically-transferred_ method.

There are other ways that could be used, where a certificate signing request is sent over a special network channel when the device is powered up in the factory.  This is referred to as the _device-generated_ / _network-transferred_ method.

Regardless of how the certificate signing request is sent from the device to the factory, and how the certificate is returned to the device, a concern from production line managers is that the assembly line may have to wait for the certification authority to respond with the certificate.

After the key generation, the device needs to set a flag such that it will no longer generate a new key, nor accept a new IDevID via the factory connection.  This may be a software setting, or it could be as dramatic as blowing a fuse.

The risk is that if an attacker with physical access is able to put the device back into an unconfigured mode, then the attacker may be able to substitute a new certificate into the device.  It is difficult to construct a rationale for doing this, unless the network initialization also permits an attacker to load or replace trust anchors at the same time.

Devices are typically constructed in a fashion such that the device is unable to ever disclose the private key via an external interface.  This is usually done using a secure enclave provided by the CPU architecture in combination with on-chip non-volatile memory.
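A minimal, non-normative sketch of the device-side half of the _device-generated_ flow is shown below in Python with the `cryptography` package; the serial-number value and the transfer channel are assumptions for illustration, and on a real device the key generation would happen inside the TEE or secure element:

```python
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Generated on the device; the private key never needs to leave it.
device_key = ec.generate_private_key(ec.SECP256R1())

# Certificate Signing Request carrying the assigned device serial number.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.SERIAL_NUMBER, "EX-0123456789"),
    ]))
    .sign(device_key, hashes.SHA256())
)

# Only this public artifact is handed to the factory CA, whether over a
# bed-of-nails/JTAG mailbox or a factory-only network channel.
csr_pem = csr.public_bytes(serialization.Encoding.PEM)
```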
4.1.2.2.  Off-device private key generation

Generating the key off-device has the advantage that the randomness of the private key can be better analyzed.  As the private key is available to the manufacturing infrastructure, the authenticity of the public key is well known ahead of time.

If the device does not come with a serial number in silicon, then one should be assigned and placed into a certificate.  The private key and certificate could be programmed into the device along with the initial bootloader firmware in a single step.

Aside from the change of origin for the randomness, a major advantage of this mechanism is that it can be done with a single write to the flash.  The entire firmware of the device, including configuration of trust anchors and private keys, can be loaded in a single write pass.  Given some pipelining of the generation of the keys and the creation of certificates, it may be possible to install unique identities without taking any additional time.

The major downside to generating the private key off-device is that it could be seen by the manufacturing infrastructure.  It could be compromised by humans in the factory, or the equipment could be compromised.  The use of this method increases the value of attacking the manufacturing infrastructure.

If keys are generated by the manufacturing plant, and are immediately installed, but never stored, then the window in which an attacker can gain access to the private key is immensely reduced.

As in the previous case, the transfer may be done via physical interfaces such as bed-of-nails, giving the _infrastructure-generated_ / _mechanically-transferred_ method.

There is also the possibility of having an _infrastructure-generated_ / _network-transferred_ key.  There is support for "server-generated" keys in [RFC7030], [I-D.gutmann-scep], and [RFC4210].  All methods strongly recommend encrypting the private key for transfer.  This is difficult to comply with, as there is not yet any private key material in the device, so in many cases it will not be possible to encrypt the private key.

4.1.2.3.  Key setup based on 256-bit secret seed

A hybrid of the previous two methods leverages a symmetric key that is often provided by a silicon vendor to OEM manufacturers.

Each CPU (or a Trusted Execution Environment [I-D.ietf-teep-architecture], or a TPM) is provisioned at fabrication time with a unique, secret seed, usually at least 256 bits in size.

This value is revealed to the OEM board manufacturer only via a secure channel.  Upon first boot, the system (probably within a TEE, or within a TPM) will generate a key pair using the seed to initialize a Pseudo-Random-Number-Generator (PRNG).  The OEM, in a separate system, will initialize the same PRNG and generate the same key pair.  The OEM then derives the public key part, signs it and turns it into a certificate.  The private part is then destroyed, ideally never stored or seen by anyone.  The certificate (being public information) is placed into a database; in some cases it is loaded onto the device as its IDevID certificate, in other cases it is retrieved during the onboarding process based upon a unique serial number asserted by the device.
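A minimal, non-normative sketch of such a derivation follows, assuming HMAC-SHA256 as the PRNG and P-256 keys; this is not a standardized scheme, and the label and rejection-sampling loop are illustrative assumptions that the device and the OEM would have to implement identically:

```python
import hashlib
import hmac
from cryptography.hazmat.primitives.asymmetric import ec

P256_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def derive_idevid_key(seed: bytes, serial: bytes) -> ec.EllipticCurvePrivateKey:
    """Deterministically derive a P-256 private key from the per-chip secret seed.

    Run once inside the device's TEE/TPM and once on the OEM's offline system:
    both sides obtain the same key pair, so only the certificate (public data)
    ever needs to be transferred.
    """
    counter = 0
    while True:
        material = hmac.new(seed, b"IDevID-derive" + serial + counter.to_bytes(4, "big"),
                            hashlib.sha256).digest()
        candidate = int.from_bytes(material, "big")
        if 1 <= candidate < P256_ORDER:        # rejection sampling keeps the key in range
            return ec.derive_private_key(candidate, ec.SECP256R1())
        counter += 1
```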
This method appears to have all of the downsides of the previous two methods: the device must correctly derive its own private key, and the OEM has access to the private key, making it also vulnerable.  The secret seed must be created in a secure way and it must also be communicated securely.

There are some advantages to the OEM, however: the major one is that the problem of securely communicating with the device is outsourced to the silicon vendor.  The private keys and certificates may be calculated by the OEM asynchronously to the manufacturing process, either done in batches in advance of actual manufacturing, or on demand when an IDevID is needed.  Doing the processing in this way permits the key derivation system to be completely disconnected from any network, and requires placing very little trust in the system assembly factory.  Operational security arrangements of the kind often presented (incorrectly) in fictionalized stories, such as a "mainframe" system to which only physical access is permitted, begin to become realistic.  That trust has been replaced with a heightened trust placed in the silicon (integrated circuit) fabrication facility.

The downsides of this method to the OEM are: they must be supplied by a trusted silicon fabrication system, which must communicate the set of secret seeds to the OEM in batches, and the OEM must store and care for these seeds very carefully.  There are some operational advantages to keeping the secret seeds around in some form, as the same secret seed could be used for other things.  There are also some significant downsides to keeping that secret seed around.

5.  Public Key Infrastructures (PKI)

[RFC5280] describes the format for certificates, and numerous mechanisms for doing enrollment have been defined (including EST [RFC7030], CMP [RFC4210], and SCEP [I-D.gutmann-scep]).

[RFC5280] provides mechanisms to deal with multi-level certification authorities, but it is not always clear what operating rules apply.

The certification authority (CA) that is central to [RFC5280]-style public key infrastructures can suffer two kinds of failures: (1) disclosure of a private key, and (2) loss of a private key.

A PKI which discloses one or more private certification authority keys is no longer secure.  An attacker can create new identities, and forge certificates connecting existing identities to attacker-controlled public/private key pairs.  This can permit the attacker to impersonate any specific device.

There is an additional kind of failure: the CA may be convinced to sign (or issue) a certificate which it is not authorized to issue.  See for instance [ComodoGate].  This is an authorization failure, and while a significant event, it does not result in the CA having to be re-initialized from scratch.  This is distinguished from a key disclosure as described above, which renders the CA completely useless and likely requires a recall of all products that have ever had an IDevID issued from this CA.

If the PKI uses Certificate Revocation Lists (CRLs), then an attacker that has access to the private key can also revoke existing identities.

In the other direction, a PKI which loses access to a private key can no longer function.  This does not immediately result in a failure, as existing identities remain valid until their expiry time (notAfter).  However, if CRLs or OCSP are in use, then the inability to sign a fresh CRL or OCSP response will result in all identities becoming invalid once the existing CRLs or OCSP statements expire.
This section details some nomenclature about the structure of certification authorities.

5.1.  Number of levels of certification authorities

Section 6.1 of [RFC5280] provides a Basic Path Validation algorithm.  In that formulation, the certificates are arranged into a list.

The certification authority (CA) starts with a Trust Anchor (TA).  This is counted as the first level of the authority.

In the degenerate case of a self-signed certificate, this is a one-level PKI.

.----------.<-.
|Issuer= X |  |
|Subject=X |--'
'----------'

The private key associated with the Trust Anchor signs one or more certificates.  When this first-level authority trusts only End-Entity (EE) certificates, then this is a two-level PKI.

.----------.<-.
|Issuer= X |  |
|Subject=X |--'
'----------'
  |  \----------\
  v             |
.----EE----.  .----EE----.
|Issuer= X |  |Issuer= X |
|Subject=Y1|  |Subject=Y2|
'----------'  '----------'

When this first-level authority signs intermediate certification authorities, and those certification authorities sign End-Entity certificates, then this is a three-level PKI.

.----------.<-.
|Issuer= X |  |
|Subject=X |--'
'----------'
  |  \------------------------\
  v                           |
.----------.                .----------.
|Issuer= X |                |Issuer= X |
|Subject=Y1|                |Subject=Y2|
'----------'                '----------'
  |  \----------\             |  \----------\
  v             |             v             |
.----EE----.  .----EE----.  .----EE----.  .----EE----.
|Issuer= Y1|  |Issuer= Y1|  |Issuer= Y2|  |Issuer= Y2|
|Subject=Z1|  |Subject=Z2|  |Subject=Z3|  |Subject=Z4|
'----------'  '----------'  '----------'  '----------'

In general, when arranged as a tree, with the End-Entity certificates at the bottom and the Trust Anchor at the top, the level is where the deepest EE certificates are, counting from one.

It is quite common to have a three-level PKI, where the root of the CA is stored in a Hardware Security Module, while the intermediate CA is available in an online form.

5.2.  Protection of CA private keys

The private keys for the certification authorities must be protected from disclosure.  The strongest protection is afforded by keeping them in an offline device, passing Certificate Signing Requests (CSRs) to the offline device by a human process.

For examples of extreme measures, see [kskceremony].  There is, however, a wide spectrum of needs, as exemplified in [rootkeyceremony].  The SAS70 audit standard is usually used as a basis for the Ceremony; see [keyceremony2].

This is inconvenient, and may involve latencies of days, possibly even weeks to months if the offline device is kept in a locked environment that requires multiple keys to be present.

There is therefore a tension between protection and convenience.  This is often accomplished by having some levels of the PKI be offline, and some levels of the PKI be online.

There is usually a need to maintain backup copies of the critical keys.  It is often appropriate to use secret-splitting technology such as Shamir Secret Sharing among a number of parties [shamir79].  This mechanism can be set up such that some threshold k (less than the total n) of shares are needed in order to recover the secret.
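As a non-normative illustration of the secret-splitting idea, the following is a toy sketch of [shamir79] over a prime field; it is not a vetted implementation, and a production CA would rely on an HSM or an audited library:

```python
import secrets

PRIME = 2**521 - 1  # a prime comfortably larger than a 256-bit key-wrapping secret

def split_secret(secret: int, k: int, n: int):
    """Split `secret` into n shares such that any k of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation of the degree k-1 polynomial
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0; needs at least k distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Example: a 3-of-5 escrow of a key-wrapping secret held by five officers.
shares = split_secret(secrets.randbits(256), k=3, n=5)
```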
5.3.  Supporting provisioned anchors in devices

IDevID-type Identity (or Birth) Certificates which are provisioned into devices need to be signed by a certification authority maintained by the manufacturer.  During the period of manufacture of new product, the manufacturer needs to be able to sign new Identity Certificates.

During the anticipated lifespan of the devices the manufacturer needs to maintain the ability for third parties to validate the Identity Certificates.  If there are Certificate Revocation Lists (CRLs) involved, then they will need to be re-signed during this period.  Even for devices with a short active lifetime, the lifespan of the device could be very long if devices are kept in a warehouse for many decades before being activated.

Trust anchors which are provisioned in the devices will have corresponding private keys maintained by the manufacturer.  The trust anchors will often anchor a PKI which is going to be used for a particular purpose.  There will be End-Entity (EE) certificates of this PKI which will be used to sign particular artifacts (such as software updates), or communications protocols (such as TLS connections).  The private keys associated with these EE certificates are not stored in the device, but are maintained by the manufacturer.  These need even more care than the private keys stored in the devices, as compromise of the software update key compromises all of the devices, not just a single device.

6.  Evaluation Questions

This section recaps the set of questions that may need to be answered.  This document does not assign valuations to the answers.

6.1.  Integrity and Privacy of on-device data

initial-enclave-location:  Is the location of the initial software trust anchor internal to the CPU package?

initial-enclave-integrity-key:  If the first-stage bootloader is external to the CPU, and it is integrity protected, where is the key used to check the integrity?

initial-enclave-privacy-key:  If the first-stage data is external to the CPU, is it kept confidential by use of encryption?

first-stage-initialization:  The number of people involved in the first-stage initialization.  An entirely automated system would have a count of zero.  A factory with three 8-hour shifts might have a count that is a multiple of three.  A system with humans involved may be subject to bribery attacks, while a system with no humans may be subject to attacks on the system which are hard to notice.

first-second-stage-gap:  If a board is initialized with a first-stage bootloader in one location (factory), and then shipped to another location, there may be situations where the device cannot be locked down until the second step.

6.2.  Integrity and Privacy of device identity infrastructure

For IDevID provisioning, which includes a private key and matching certificate installed into the device, the associated public key infrastructure that anchors this identity must be maintained by the manufacturer.

identity-pki-level:  how deep are the IDevID certificates that are issued?

identity-time-limits-per-intermediate:  how long is each intermediate CA maintained before a new intermediate CA key is generated?  There may be no time limit, only a device count limit.

identity-number-per-intermediate:  how many identities are signed by a particular intermediate CA before it is retired?  There may be no numeric limit, only a time limit.
There may be 836 no numeric limit, only a time limit. 838 identity-anchor-storage: how is the root CA key stored? How many 839 people are needed to recover the private key? 841 6.3. Integrity and Privacy of included trust anchors 843 For each trust anchor (public key) stored in the device, there will 844 be an associated PKI. For each of those PKI the following questions 845 need to be answered. 847 pki-level: how deep is the EE that will be evaluated (the trust root 848 is at level 1) 850 pki-algorithms: what kind of algorithms and key sizes will be 851 considered to valid 853 pki-level-locked: (a Boolean) is the level where the EE cert will be 854 found locked by the device, or can levels be added or deleted by 855 the PKI operator without code changes to the device. 857 pki-breadth: how many different non-expired EE certificates exist in 858 this PKI 860 pki-lock-policy: can any EE certificate be used with this trust 861 anchor to sign? Or, is there some kind of policy OID or Subject 862 restriction? Are specific intermediate CAs needed that lead to 863 the EE? 865 pki-anchor-storage: how is the private key associated with this 866 trust root stored? How many people are needed to recover it? 868 7. Privacy Considerations 870 many yet to be detailed 872 8. Security Considerations 874 This entire document is a security considerations. 876 9. IANA Considerations 878 This document makes no IANA requests. 880 10. Acknowledgements 882 Robert Martin of MITRE provided some guidance about citing the SBOM 883 efforts. 885 11. Changelog 887 12. References 889 12.1. Normative References 891 [RFC5280] Cooper, D., Santesson, S., Farrell, S., Boeyen, S., 892 Housley, R., and W. Polk, "Internet X.509 Public Key 893 Infrastructure Certificate and Certificate Revocation List 894 (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May 2008, 895 . 897 [ieee802-1AR] 898 IEEE Standard, "IEEE 802.1AR Secure Device Identifier", 899 2009, . 902 12.2. Informative References 904 [I-D.ietf-anima-bootstrapping-keyinfra] 905 Pritikin, M., Richardson, M., Eckert, T., Behringer, M., 906 and K. Watsen, "Bootstrapping Remote Secure Key 907 Infrastructures (BRSKI)", Work in Progress, Internet- 908 Draft, draft-ietf-anima-bootstrapping-keyinfra-43, 7 909 August 2020, . 912 [I-D.richardson-anima-voucher-delegation] 913 Richardson, M. and L. Xia, "Delegated Authority for 914 Bootstrap Voucher Artifacts", Work in Progress, Internet- 915 Draft, draft-richardson-anima-voucher-delegation-01, 9 916 March 2020, . 919 [I-D.friel-anima-brski-cloud] 920 Friel, O., Shekh-Yusef, R., and M. Richardson, "BRSKI 921 Cloud Registrar", Work in Progress, Internet-Draft, draft- 922 friel-anima-brski-cloud-02, 3 May 2020, 923 . 926 [I-D.ietf-anima-constrained-voucher] 927 Richardson, M., Stok, P., and P. Kampanakis, "Constrained 928 Voucher Artifacts for Bootstrapping Protocols", Work in 929 Progress, Internet-Draft, draft-ietf-anima-constrained- 930 voucher-08, 13 July 2020, . 933 [I-D.ietf-anima-brski-async-enroll] 934 Fries, S., Brockhaus, H., and E. Lear, "Support of 935 asynchronous Enrollment in BRSKI (BRSKI-AE)", Work in 936 Progress, Internet-Draft, draft-ietf-anima-brski-async- 937 enroll-00, 10 July 2020, . 940 [I-D.moskowitz-ecdsa-pki] 941 Moskowitz, R., Birkholz, H., Xia, L., and M. Richardson, 942 "Guide for building an ECC pki", Work in Progress, 943 Internet-Draft, draft-moskowitz-ecdsa-pki-09, 9 August 944 2020, . 
[RFC4949]  Shirey, R., "Internet Security Glossary, Version 2", FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007.

[RFC5011]  StJohns, M., "Automated Updates of DNS Security (DNSSEC) Trust Anchors", STD 74, RFC 5011, DOI 10.17487/RFC5011, September 2007.

[RFC8366]  Watsen, K., Richardson, M., Pritikin, M., and T. Eckert, "A Voucher Artifact for Bootstrapping Protocols", RFC 8366, DOI 10.17487/RFC8366, May 2018.

[RFC8572]  Watsen, K., Farrer, I., and M. Abrahamsson, "Secure Zero Touch Provisioning (SZTP)", RFC 8572, DOI 10.17487/RFC8572, April 2019.

[RFC7030]  Pritikin, M., Ed., Yee, P., Ed., and D. Harkins, Ed., "Enrollment over Secure Transport", RFC 7030, DOI 10.17487/RFC7030, October 2013.

[I-D.gutmann-scep]  Gutmann, P., "Simple Certificate Enrolment Protocol", Work in Progress, Internet-Draft, draft-gutmann-scep-16, 27 March 2020.

[RFC4210]  Adams, C., Farrell, S., Kause, T., and T. Mononen, "Internet X.509 Public Key Infrastructure Certificate Management Protocol (CMP)", RFC 4210, DOI 10.17487/RFC4210, September 2005.

[_3GPP.51.011]  3GPP, "Specification of the Subscriber Identity Module - Mobile Equipment (SIM-ME) interface", 3GPP TS 51.011 4.15.0, 15 June 2005.

[BedOfNails]  Wikipedia, "Bed of nails tester", July 2020.

[pelionfcu]  ARM Pelion, "Factory provisioning overview", 28 June 2020.

[factoringrsa]  "Factoring RSA keys from certified smart cards: Coppersmith in the wild", 16 September 2013.

[kskceremony]  Verisign, "DNSSEC Practice Statement for the Root Zone ZSK Operator", 2017.

[rootkeyceremony]  Community, "Root Key Ceremony, Cryptography Wiki", April 2020.

[keyceremony2]  Digi-Sign, "SAS 70 Key Ceremony", April 2020.

[shamir79]  Shamir, A., "How to share a secret.", 1979.

[nistsp800-57]  NIST, "SP 800-57 Part 1 Rev. 4, Recommendation for Key Management, Part 1: General", 1 January 2016.

[fidotechnote]  FIDO Alliance, "FIDO TechNotes: The Truth about Attestation", July 2018.

[ntiasbom]  NTIA, "NTIA Software Component Transparency", n.d.

[cisqsbom]  CISQ/Object Management Group, "TOOL-TO-TOOL SOFTWARE BILL OF MATERIALS EXCHANGE", July 2020.

[ComodoGate]  "Comodo-gate hacker brags about forged certificate exploit", 28 March 2011.

[openbmc]  Linux Foundation/OpenBMC Group, "Defining a Standard Baseboard Management Controller Firmware Stack", July 2020.

[JTAG]  "Joint Test Action Group", 26 August 2020.

[JTAGieee]  IEEE Standard, "1149.7-2009 - IEEE Standard for Reduced-Pin and Enhanced-Functionality Test Access Port and Boundary-Scan Architecture", 2009.

[rootkeyrollover]  ICANN, "Proposal for Future Root Zone KSK Rollovers", 2019.

[CABFORUM]  CA/Browser Forum, "CA/Browser Forum Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.2.2", October 2014.

[I-D.richardson-rats-usecases]  Richardson, M., Wallace, C., and W. Pan, "Use cases for Remote Attestation common encodings", Work in Progress, Internet-Draft, draft-richardson-rats-usecases-07, 9 March 2020.
[I-D.ietf-suit-architecture]  Moran, B., Tschofenig, H., Brown, D., and M. Meriac, "A Firmware Update Architecture for Internet of Things", Work in Progress, Internet-Draft, draft-ietf-suit-architecture-11, 27 May 2020.

[I-D.ietf-emu-eap-noob]  Aura, T. and M. Sethi, "Nimble out-of-band authentication for EAP (EAP-NOOB)", Work in Progress, Internet-Draft, draft-ietf-emu-eap-noob-02, 12 July 2020.

[I-D.ietf-rats-architecture]  Birkholz, H., Thaler, D., Richardson, M., Smith, N., and W. Pan, "Remote Attestation Procedures Architecture", Work in Progress, Internet-Draft, draft-ietf-rats-architecture-05, 10 July 2020.

[I-D.birkholz-suit-coswid-manifest]  Birkholz, H., "A SUIT Manifest Extension for Concise Software Identifiers", Work in Progress, Internet-Draft, draft-birkholz-suit-coswid-manifest-00, 17 July 2018.

[I-D.birkholz-rats-mud]  Birkholz, H., "MUD-Based RATS Resources Discovery", Work in Progress, Internet-Draft, draft-birkholz-rats-mud-00, 9 March 2020.

[RFC8520]  Lear, E., Droms, R., and D. Romascanu, "Manufacturer Usage Description Specification", RFC 8520, DOI 10.17487/RFC8520, March 2019.

[I-D.ietf-sacm-coswid]  Birkholz, H., Fitzgerald-McKay, J., Schmidt, C., and D. Waltermire, "Concise Software Identification Tags", Work in Progress, Internet-Draft, draft-ietf-sacm-coswid-15, 1 May 2020.

[RFC7168]  Nazar, I., "The Hyper Text Coffee Pot Control Protocol for Tea Efflux Appliances (HTCPCP-TEA)", RFC 7168, DOI 10.17487/RFC7168, April 2014.

[I-D.bormann-lwig-7228bis]  Bormann, C., Ersue, M., Keranen, A., and C. Gomez, "Terminology for Constrained-Node Networks", Work in Progress, Internet-Draft, draft-bormann-lwig-7228bis-06, 9 March 2020.

[I-D.ietf-teep-architecture]  Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler, "Trusted Execution Environment Provisioning (TEEP) Architecture", Work in Progress, Internet-Draft, draft-ietf-teep-architecture-12, 13 July 2020.

Authors' Addresses

Michael Richardson
Sandelman Software Works

Email: mcr+ietf@sandelman.ca

Jie Yang
Huawei Technologies Co., Ltd.

Email: jay.yang@huawei.com