T2TRG Research Group                                       M. Richardson
Internet-Draft                                  Sandelman Software Works
Intended status: Informational                              21 June 2021
Expires: 23 December 2021

     A Taxonomy of operational security considerations for manufacturer
                     installed keys and Trust Anchors
             draft-richardson-t2trg-idevid-considerations-05

Abstract

   This document provides a taxonomy of methods used by manufacturers
   of silicon and devices to secure private keys and public trust
   anchors.  This deals with two related activities: how trust anchors
   and private keys are installed into devices during manufacturing,
   and how the related manufacturer-held private keys are secured
   against disclosure.

   This document does not evaluate the different mechanisms, but rather
   just serves to name them in a consistent manner in order to aid in
   communication.

   RFCEDITOR: please remove this paragraph.  This work is occurring in
   https://github.com/mcr/idevid-security-considerations

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.
   It is inappropriate to use Internet-Drafts as reference material or
   to cite them other than as "work in progress."

   This Internet-Draft will expire on 23 December 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Terminology
   2.  Applicability Model
     2.1.  A reference manufacturing/boot process
   3.  Types of Trust Anchors
     3.1.  Secured First Boot Trust Anchor
     3.2.  Software Update Trust Anchor
     3.3.  Trusted Application Manager anchor
     3.4.  Public WebPKI anchors
     3.5.  DNSSEC root
     3.6.  What else?
   4.  Types of Identities
     4.1.  Manufacturer installed IDevID certificates
       4.1.1.  Operational Considerations for Manufacturer IDevID
               Public Key Infrastructure
       4.1.2.  Key Generation process
   5.  Public Key Infrastructures (PKI)
     5.1.  Number of levels of certification authorities
     5.2.  Protection of CA private keys
     5.3.  Supporting provisioned anchors in devices
   6.  Evaluation Questions
     6.1.  Integrity and Privacy of on-device data
     6.2.  Integrity and Privacy of device identity infrastructure
     6.3.  Integrity and Privacy of included trust anchors
   7.  Privacy Considerations
   8.  Security Considerations
   9.  IANA Considerations
   10. Acknowledgements
   11. Changelog
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Author's Address

1.  Introduction

   An increasing number of protocols derive a significant part of their
   security by using trust anchors [RFC4949] that are installed by
   manufacturers.  Disclosure of the list of trust anchors does not
   usually cause a problem, but changing them in any way does.  This
   includes adding, replacing or deleting anchors.
   [RFC6024] deals with how trust anchor stores are managed, while this
   document deals with how the PKI associated with each anchor is
   managed.

   Many protocols also leverage manufacturer installed identities.
   These identities are usually in the form of [ieee802-1AR] Initial
   Device Identity certificates (IDevID).  The identity has two
   components: a private key that must remain under the strict control
   of a trusted part of the device, and a public part (the
   certificate), which (ignoring, for the moment, personal privacy
   concerns) may be freely disclosed.

   There are also situations where identities are tied up in the
   provision of symmetric shared secrets.  A common example is the SIM
   card ([_3GPP.51.011]); it now also comes as a virtual SIM, but that
   form is usually not provisioned at the factory.  The provision of an
   initial, per-device default password also falls into the category of
   a symmetric shared secret.

   It is further not unusual for many devices (particularly
   smartphones) to also have one or more group identity keys.  These
   are used, for instance, in [fidotechnote] to make claims about being
   a particular model of phone (see [I-D.richardson-rats-usecases]).
   The key pair that does this is loaded into large batches of phones
   for privacy reasons.

   The trust anchors are used for a variety of purposes.  Trust anchors
   are used to verify:

   *  the signature on a software update (as per
      [I-D.ietf-suit-architecture]),

   *  a TLS Server Certificate, such as when setting up an HTTPS
      connection,

   *  the [RFC8366] format voucher that provides proof of an ownership
      change.

   Device identity keys are used when performing enrollment requests
   (as in [RFC8995], and in some uses of [I-D.ietf-emu-eap-noob]).  The
   device identity certificate is also used to sign Evidence by an
   Attesting Environment (see [I-D.ietf-rats-architecture]).

   These security artifacts are used to anchor other chains of
   information: an EAT Claim as to the version of software/firmware
   running on a device ([I-D.birkholz-suit-coswid-manifest]), or an EAT
   claim about legitimate network activity (via
   [I-D.birkholz-rats-mud], or embedded in the IDevID in [RFC8520]).

   Known software versions lead directly to vendor/distributor signed
   Software Bills of Materials (SBOMs), such as those described by
   [I-D.ietf-sacm-coswid], the NTIA SBOM work [ntiasbom], and the
   CISQ/OMG SBOM work underway [cisqsbom].

   In order to manage risks and assess vulnerabilities in a Supply
   Chain, it is necessary to determine a degree of trustworthiness in
   each device.  A device may mislead audit systems as to its
   provenance, about its software load, or even about what kind of
   device it is (see [RFC7168] for a humorous example).

   In order to properly assess the security of a Supply Chain it is
   necessary to understand the kinds and severity of the threats which
   a device has been designed to resist.  To do this, it is necessary
   to understand the ways in which the different trust anchors and
   identities are initially provisioned, are protected, and are
   updated.

   This document therefore details the different trust anchors (TrAnc)
   and identities (IDs) found in typical devices.  The privacy and
   integrity of the TrAncs and IDs is often provided by a different,
   superior artifact.  This relationship is examined.
   While many might desire to assign numerical values to different
   mitigation techniques in order to be able to rank them, this
   document does not attempt to do that, as there are too many other
   (mostly human) factors that would come into play.  Such an effort is
   more properly in the purview of a formal management-system process
   such as ISO 9001 or ISO 14001.

1.1.  Terminology

   This document is not a standards track document, and it does not
   make use of formal requirements language.

   This section will be expanded to include needed terminology as
   required.

   The words Trust Anchor are contracted to TrAnc rather than TA, in
   order not to confuse with [I-D.ietf-teep-architecture]'s "Trusted
   Application".

   This document defines a number of hyphenated terms, which are
   summarized here:

   device-generated:  a private or symmetric key which is generated on
      the device

   infrastructure-generated:  a private or symmetric key which is
      generated by some system, likely located at the factory that
      built the device

   mechanically-installed:  when a key or certificate is programmed
      into non-volatile storage by an out-of-band mechanism such as
      JTAG [JTAG]

   mechanically-transferred:  when a key or certificate is transferred
      into a system via a private interface, such as a serial console,
      a JTAG-managed mailbox, or another physically private interface

   network-transferred:  when a key or certificate is transferred into
      a system using a network interface which would be available after
      the device has shipped.  This applies even if the network is
      physically attached using a bed-of-nails [BedOfNails].

   device/infrastructure-co-generated:  when a private or symmetric key
      is derived from a secret previously synchronized between the
      silicon vendor and the factory using a common algorithm.

2.  Applicability Model

   There is a wide variety of devices to which this analysis can apply.
   (See [I-D.bormann-lwig-7228bis].)  This document will use a J-group
   processor as a sample.  This class is sufficiently large to
   experience complex issues among multiple CPUs, packages and
   operating systems, but at the same time, small enough that this
   class is often deployed in single-purpose IoT-like uses.  Devices in
   this class often have Secure Enclaves (such as the "Grapeboard"),
   and can include silicon manufacturer controlled processors in the
   boot process (the Raspberry Pi boots under control of the GPU).

   Almost all larger systems (servers, laptops, desktops) include a
   Baseboard Management Controller (BMC), which ranges from an M-Group
   Class 3 MCU to a J-Group Class 10 CPU (see, for instance, [openbmc],
   which uses a Linux kernel and system inside the BMC).  As the BMC
   usually has complete access to the main CPU's memory, I/O hardware
   and disk, the boot path security of such a system needs to be
   understood first as being about the security of the BMC.

2.1.  A reference manufacturing/boot process

   In order to provide for immutability and privacy of the critical
   TrAnc and IDs, many CPU manufacturers will provide for some kind of
   private memory area which is only accessible when the CPU is in
   certain privileged states.  See the Terminology section of
   [I-D.ietf-teep-architecture], notably TEE, REE, and TAM, and also
   section 4, Architecture.

   The private memory that is important is usually non-volatile and
   rather small.
   It may be located inside the CPU silicon die, or it may be located
   externally.  If the memory is external, then it is usually encrypted
   by a hardware mechanism on the CPU, with only the key kept inside
   the CPU.

   The entire mechanism may be external to the CPU in the form of a
   hardware TPM module, or it may be entirely internal to the CPU in
   the form of a firmware TPM.  It may use a custom interface to the
   rest of the system, or it may implement the TPM 1.2 or TPM 2.0
   specifications.  Those details are important to performing a full
   evaluation, but do not matter much to this model (see
   initial-enclave-location below).

   During the manufacturing process, once the components have been
   soldered to the board, the system is usually put through a
   system-level test.  This is often done as a "bed-of-nails" test
   [BedOfNails], where the board has key points attached mechanically
   to a test system.  A [JTAG] process tests the System Under Test, and
   then initializes some firmware into the still empty flash storage.

   It is now common for a factory test image to be loaded first: this
   image will include code to initialize the private memory key
   described above, and will include a first-stage bootloader and some
   kind of (primitive) Trusted Application Manager (TAM).  (The TAM is
   a piece of software that lives within the trusted execution
   environment.)

   Embedded in the stage one bootloader will be a Trust Anchor that is
   able to verify the second-stage bootloader image.

   After the system has undergone testing, the factory test image is
   erased, leaving the first-stage bootloader.  One or more
   second-stage bootloader images are installed.  The production image
   may be installed at that time, or if the second-stage bootloader is
   able to install it over the network, it may be done that way
   instead.

   There are many variations of the above process, and this section is
   not attempting to be prescriptive, but to provide enough
   illustration to motivate the subsequent terminology.

   The process may be entirely automated, or it may be entirely driven
   by humans working in the factory, or a combination of the two.

   These steps may all occur on an access-controlled assembly line, or
   the system boards may be shipped from one place to another (maybe
   another country) before undergoing testing.

   Some systems are intended to be shipped in a tamper-proof state, but
   it is usually not desirable that bed-of-nails testing be possible
   without tampering, so the initialization process is usually done
   prior to rendering the system tamper-proof.  An example of a one-way
   tamper-proof, weather-resistant treatment might be to mount the
   system board in a case and fill the case with resin.

   Quality control testing may be done prior to as well as after the
   application of tamper-proofing, as systems which do not pass
   inspection may be reworked to fix flaws, and this should ideally be
   impossible once the system has been made tamper-proof.

3.  Types of Trust Anchors

   Trust Anchors (TrAnc) are fundamentally public keys with
   authorizations implicitly attached through the code that references
   them.

   They are used to validate other digitally signed artifacts.
   Typically, these are chains of PKIX certificates leading to an
   End-Entity (EE) certificate.

   The chains are usually presented as part of an externally provided
   object, with the term "externally" to be understood as being as
   close as untrusted flash, to as far as objects retrieved over a
   network.

   There is no requirement that there be any chain at all: the trust
   anchor can be used to validate a signature over a target object
   directly.
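   As an illustration of this last case, the following sketch shows a
   verifier that holds nothing but a raw trust-anchor public key and
   uses it to check the signature over an externally supplied object,
   such as a second-stage bootloader image.  The sketch uses the Python
   "cryptography" package with ECDSA P-256 and SHA-256; the package,
   the algorithm choices, and the function name are illustrative
   assumptions rather than requirements of this document.

   # Hypothetical sketch: verify a signed artifact directly against a
   # trust anchor held as a bare public key (no certificate chain).
   from cryptography.exceptions import InvalidSignature
   from cryptography.hazmat.primitives import hashes
   from cryptography.hazmat.primitives.asymmetric import ec
   from cryptography.hazmat.primitives.serialization import load_pem_public_key

   def verify_image(anchor_pem: bytes, image: bytes, signature: bytes) -> bool:
       anchor = load_pem_public_key(anchor_pem)   # the embedded trust anchor
       try:
           anchor.verify(signature, image, ec.ECDSA(hashes.SHA256()))
           return True                            # image may be booted/installed
       except InvalidSignature:
           return False                           # image must be rejected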
   The trust anchors are often stored in the form of self-signed
   certificates.  The self-signature does not offer any cryptographic
   assurance, but it does provide a form of error detection, providing
   verification against non-malicious forms of data corruption.  If
   storage is at a premium (such as inside-CPU non-volatile storage)
   then only the public key itself needs to be stored.  For a 256-bit
   ECDSA key, this is about 32 bytes of space (33 bytes in compressed
   point form).

   When evaluating the degree of trust for each trust anchor there are
   four aspects that need to be determined:

   *  can the trust anchor be replaced or modified?

   *  can additional trust anchors be added?

   *  can trust anchors be removed?

   *  how does the manufacturer maintain the private key that is
      associated with the trust anchor?

   The first three are device-specific properties of how the integrity
   of the trust anchor is maintained.

   The fourth property has nothing to do with the device, but has to do
   with the reputation and care of the entity that maintains the
   private key.

   Different anchors have different authorizations associated with
   them.  These are:

3.1.  Secured First Boot Trust Anchor

   This anchor is part of the first-stage boot loader, and it is used
   to validate a second-stage bootloader which may be stored in
   external flash.  This is called the initial software trust anchor.

3.2.  Software Update Trust Anchor

   This anchor is used to validate the main application (or operating
   system) load for the device.

   It can be stored in a number of places.  First, it may be identical
   to the Secured First Boot Trust Anchor.

   Second, it may be stored in the second-stage bootloader, and
   therefore its integrity is protected by the Secured First Boot Trust
   Anchor.

   Third, it may be stored in the application code itself, where the
   application validates updates to the application directly (update in
   place), or via a double-buffer arrangement.  The initial (factory)
   load of the application code initializes the trust arrangement.

   In this situation the application code is not in a secured boot
   situation, as the second-stage bootloader does not validate the
   application/operating system before starting it, but it may still
   provide a measured boot mechanism.

3.3.  Trusted Application Manager anchor

   This anchor is the secure key for the [I-D.ietf-teep-architecture]
   Trusted Application Manager (TAM).  Code which is signed by this
   anchor will be given execution privileges as described by the
   manifest which accompanies the code.  This privilege may include
   updating anchors.

3.4.  Public WebPKI anchors

   These anchors are used to verify HTTPS certificates from web sites.
   These anchors are typically distributed as part of desktop browsers,
   and via desktop operating systems.

   The exact set of these anchors is not precisely defined: it is
   usually determined by the browser vendor (e.g., Mozilla, Google,
   Apple, Microsoft), or the operating system vendor (e.g., Apple,
   Google, Microsoft, Ubuntu).  In most cases these vendors look to the
   CA/Browser Forum [CABFORUM] for inclusion criteria.
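   For illustration, the following sketch shows how an application on a
   desktop operating system typically relies on the platform-provided
   WebPKI anchors rather than managing its own set: the Python standard
   library loads the operating system trust store and uses it to verify
   the server certificate during the TLS handshake.  The host name is
   only a placeholder.

   import socket
   import ssl

   # ssl.create_default_context() loads the platform's installed WebPKI
   # trust anchors and enables certificate and hostname verification.
   ctx = ssl.create_default_context()
   with socket.create_connection(("example.com", 443)) as sock:
       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
           print(tls.getpeercert()["subject"])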
3.5.  DNSSEC root

   This anchor is part of the DNS Security extensions.  It provides an
   anchor for securing DNS lookups.  Secure DNS lookups may be
   important in order to get access to software updates.  This anchor
   is now scheduled to change approximately every 3 years, with the new
   key announced several years before it is used, making it possible to
   embed keys that will be valid for up to five years.

   This trust anchor is typically part of the application/operating
   system code and is usually updated by the manufacturer when they do
   updates.  However, a system that is connected to the Internet may
   update the DNSSEC anchor itself through the mechanism described in
   [RFC5011].

   There are concerns that there may be a chicken-and-egg situation for
   devices that have remained in a powered-off state (or disconnected
   from the Internet) for some period of years: upon being reconnected,
   such a device would be unable to do DNSSEC validation.  This failure
   would result in it being unable to obtain operating system updates
   that would then include the updates to the DNSSEC key.

3.6.  What else?

   TBD?

4.  Types of Identities

   Identities are installed during manufacturing time for a variety of
   purposes.

   Identities require some private component.  Asymmetric identities
   (e.g., RSA, ECDSA, EdDSA systems) require a corresponding public
   component, usually in the form of a certificate signed by a trusted
   third party.

   This certificate associates the identity with attributes.

   The process of making this coordinated key pair and then installing
   it into the device is called identity provisioning.

4.1.  Manufacturer installed IDevID certificates

   [ieee802-1AR] defines a category of certificates that are installed
   by the manufacturer, which contain, at the least, a device-unique
   serial number.

   A number of protocols depend upon this certificate.

   *  [RFC8572] and [RFC8995] introduce mechanisms for new devices
      (called pledges) to be onboarded into a network without
      intervention from an expert operator.  A number of derived
      protocols such as [I-D.ietf-anima-brski-async-enroll],
      [I-D.ietf-anima-constrained-voucher],
      [I-D.richardson-anima-voucher-delegation], and
      [I-D.friel-anima-brski-cloud] extend this in a number of ways.

   *  [I-D.ietf-rats-architecture] depends upon a key provisioned into
      the Attesting Environment to sign Evidence.

   *  [I-D.ietf-suit-architecture] may depend upon a key provisioned
      into the device in order to decrypt software updates.  Both
      symmetric and asymmetric keys are possible.  In both cases, the
      decrypt operation depends upon the device having access to a
      private key provisioned in advance.  The IDevID can be used for
      this if algorithm choices permit.  ECDSA keys do not directly
      support encryption in the same way that RSA does, for instance,
      but the addition of ECIES can solve this.  There may be other
      legal considerations why the IDevID might not be used, and a
      second key provisioned.

   *  TBD

4.1.1.  Operational Considerations for Manufacturer IDevID Public Key
        Infrastructure

   The manufacturer has the responsibility to provision a key pair into
   each device as part of the manufacturing process.  There are a
   variety of mechanisms to accomplish this, which this document will
   overview.
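   Whichever of the generation methods described below is used, the
   manufacturer-side step is ultimately the same: the IDevID
   certification authority issues a certificate over the device's
   public key.  The following sketch of that step uses the Python
   "cryptography" package; the helper name, the proof-of-possession
   check via a CSR, and the use of the 99991231235959Z "no well-defined
   expiry" notAfter value commonly seen in IDevIDs are illustrative
   assumptions, not a normative procedure.

   import datetime
   from cryptography import x509
   from cryptography.hazmat.primitives import hashes

   def issue_idevid(csr: x509.CertificateSigningRequest,
                    ca_cert: x509.Certificate, ca_key) -> x509.Certificate:
       assert csr.is_signature_valid          # proof-of-possession of the key
       return (x509.CertificateBuilder()
               .subject_name(csr.subject)     # carries the device serial number
               .issuer_name(ca_cert.subject)
               .public_key(csr.public_key())
               .serial_number(x509.random_serial_number())
               .not_valid_before(datetime.datetime.utcnow())
               # IDevIDs commonly use 9999-12-31 to indicate "no expiry"
               .not_valid_after(datetime.datetime(9999, 12, 31, 23, 59, 59))
               .sign(ca_key, hashes.SHA256()))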
   There are three fundamental ways to generate IDevID certificates for
   devices:

   1.  generating a private key on the device, creating a Certificate
       Signing Request (or equivalent), and then returning a
       certificate to the device.

   2.  generating a private key outside the device, signing the
       certificate, and then installing both into the device.

   3.  deriving the private key from a previously installed secret
       seed, that is shared with only the manufacturer.

   There is a fourth situation where the IDevID is provided as part of
   a Trusted Platform Module (TPM), in which case the TPM vendor may be
   making the same tradeoffs.

   The document [I-D.moskowitz-ecdsa-pki] provides some practical
   instructions on setting up a reference implementation for ECDSA keys
   using a three-tier mechanism.

4.1.2.  Key Generation process

4.1.2.1.  On-device private key generation

   Generating the key on-device has the advantage that the private key
   never leaves the device.  The disadvantage is that the device may
   not have a verified random number generator.  [factoringrsa] is an
   example of a successful attack on this scenario.

   There are a number of options for how to get the public key securely
   from the device to the certification authority.

   This transmission must be done in an integral manner, and must be
   securely associated with the assigned serial number.  The serial
   number goes into the certificate, and the resulting certificate
   needs to be loaded into the manufacturer's asset database.

   One way to do the transmission is during a factory Bed of Nails test
   (see [BedOfNails]) or Boundary Scan.  When done via a physical
   connection like this, it is referred to as a _device-generated_ /
   _mechanically-transferred_ method.

   There are other ways that could be used, where a certificate signing
   request is sent over a special network channel when the device is
   powered up in the factory.  This is referred to as the
   _device-generated_ / _network-transferred_ method.

   Regardless of how the certificate signing request is sent from the
   device to the factory, and how the certificate is returned to the
   device, a concern from production line managers is that the assembly
   line may have to wait for the certification authority to respond
   with the certificate.

   After the key generation, the device needs to set a flag such that
   it will no longer generate a new key or accept a new IDevID via the
   factory connection.  This may be a software setting, or could be as
   dramatic as blowing a fuse.

   The risk is that if an attacker with physical access is able to put
   the device back into an unconfigured mode, then the attacker may be
   able to substitute a new certificate into the device.  It is
   difficult to construct a rationale for doing this, unless the
   network initialization also permits an attacker to load or replace
   trust anchors at the same time.

   Devices are typically constructed in a fashion such that the device
   is unable to ever disclose the private key via an external
   interface.  This is usually done using a secure enclave provided by
   the CPU architecture in combination with on-chip non-volatile
   memory.
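   A minimal sketch of the device side of the _device-generated_
   methods described above is shown here, again using the Python
   "cryptography" package.  The subject structure and the PEM transport
   of the resulting CSR are assumptions; a real device would perform
   this step inside its protected environment and then set the lock-out
   flag discussed above.

   from cryptography import x509
   from cryptography.x509.oid import NameOID
   from cryptography.hazmat.primitives import hashes, serialization
   from cryptography.hazmat.primitives.asymmetric import ec

   def make_idevid_csr(serial_number: str):
       key = ec.generate_private_key(ec.SECP256R1())  # never leaves the device
       csr = (x509.CertificateSigningRequestBuilder()
              .subject_name(x509.Name([
                  x509.NameAttribute(NameOID.SERIAL_NUMBER, serial_number)]))
              .sign(key, hashes.SHA256()))
       # Only the CSR leaves the device; the key is written to protected
       # storage before the "no new IDevID" flag is set.
       return key, csr.public_bytes(serialization.Encoding.PEM)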
4.1.2.2.  Off-device private key generation

   Generating the key off-device has the advantage that the randomness
   of the private key can be better analyzed.  As the private key is
   available to the manufacturing infrastructure, the authenticity of
   the public key is well known ahead of time.

   If the device does not come with a serial number in silicon, then
   one should be assigned and placed into a certificate.  The private
   key and certificate could be programmed into the device along with
   the initial bootloader firmware in a single step.

   Aside from the change of origin for the randomness, a major
   advantage of this mechanism is that it can be done with a single
   write to the flash.  The entire firmware of the device, including
   the configuration of trust anchors and private keys, can be loaded
   in a single write pass.  Given some pipelining of the generation of
   the keys and the creation of certificates, it may be possible to
   install unique identities without taking any additional time.

   The major downside to generating the private key off-device is that
   it could be seen by the manufacturing infrastructure.  It could be
   compromised by humans in the factory, or the equipment could be
   compromised.  The use of this method increases the value of
   attacking the manufacturing infrastructure.

   If private keys are generated by the manufacturing plant, and are
   immediately installed, but never stored, then the window in which an
   attacker can gain access to the private key is immensely reduced.

   As in the previous case, the transfer may be done via physical
   interfaces such as bed-of-nails, giving the
   _infrastructure-generated_ / _mechanically-transferred_ method.

   There is also the possibility of an _infrastructure-generated_ /
   _network-transferred_ key.  There is support for "server-generated"
   keys in [RFC7030], [RFC8894], and [RFC4210].  All methods strongly
   recommend encrypting the private key for transfer.  This is
   difficult to comply with here, as there is not yet any private key
   material in the device, so in many cases it will not be possible to
   encrypt the private key.

4.1.2.3.  Key setup based on 256-bit secret seed

   A hybrid of the previous two methods leverages a symmetric key that
   is often provided by a silicon vendor to OEM manufacturers.

   Each CPU (or a Trusted Execution Environment
   [I-D.ietf-teep-architecture], or a TPM) is provisioned at
   fabrication time with a unique, secret seed, usually at least 256
   bits in size.

   This value is revealed to the OEM board manufacturer only via a
   secure channel.  Upon first boot, the system (probably within a TEE,
   or within a TPM) will generate a key pair, using the seed to
   initialize a Pseudo-Random-Number-Generator (PRNG).  The OEM, in a
   separate system, will initialize the same PRNG and generate the same
   key pair.  The OEM then derives the public key part, signs it, and
   turns it into a certificate.  The private part is then destroyed,
   ideally never stored or seen by anyone.  The certificate (being
   public information) is placed into a database; in some cases it is
   loaded by the device as its IDevID certificate, in other cases it is
   retrieved during the onboarding process based upon a unique serial
   number asserted by the device.

   This method appears to have all of the downsides of the previous two
   methods: the device must correctly derive its own private key, and
   the OEM has access to the private key, making it also vulnerable.
   The secret seed must be created in a secure way and it must also be
   communicated securely.
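   The following sketch shows one purely hypothetical way in which such
   a seed-derived key pair could be computed identically by the device
   and by the OEM.  Actual silicon vendors define their own derivation
   schemes; neither the HKDF label nor the reduction step below comes
   from any cited specification, and the Python "cryptography" package
   is used only for illustration.

   from cryptography.hazmat.primitives import hashes
   from cryptography.hazmat.primitives.asymmetric import ec
   from cryptography.hazmat.primitives.kdf.hkdf import HKDF

   P256_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

   def derive_idevid_key(seed: bytes, serial: bytes) -> ec.EllipticCurvePrivateKey:
       # The device (inside its TEE) and the OEM both run this with the
       # same inputs, and therefore arrive at the same key pair without
       # the private key ever being transmitted.
       okm = HKDF(algorithm=hashes.SHA256(), length=48, salt=None,
                  info=b"example-idevid-derivation/" + serial).derive(seed)
       d = int.from_bytes(okm, "big") % (P256_ORDER - 1) + 1
       return ec.derive_private_key(d, ec.SECP256R1())

   Deriving 48 bytes and reducing modulo the group order keeps the bias
   of the resulting private key negligible; the OEM signs the matching
   public key into the IDevID certificate and then discards the private
   value.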
   There are some advantages to the OEM, however.  The major one is
   that the problem of securely communicating with the device is
   outsourced to the silicon vendor.  The private keys and certificates
   may be calculated by the OEM asynchronously to the manufacturing
   process, either done in batches in advance of actual manufacturing,
   or on demand when an IDevID is needed.  Doing the processing in this
   way permits the key derivation system to be completely disconnected
   from any network, and requires placing very little trust in the
   system assembly factory.  Operational security measures that are
   usually the stuff of fictionalized stories, such as a "mainframe"
   system to which only physical access is permitted, begin to become
   realistic.  That trust has been replaced with a heightened trust
   placed in the silicon (integrated circuit) fabrication facility.

   The downsides of this method to the OEM are: it must be supplied by
   a trusted silicon fabrication system, which must communicate the set
   of secret seeds to the OEM in batches, and the OEM must store and
   care for these seeds very carefully.  There are some operational
   advantages to keeping the secret seeds around in some form, as the
   same secret seed could be used for other things.  There are also
   some significant downsides to keeping that secret seed around.

5.  Public Key Infrastructures (PKI)

   [RFC5280] describes the format for certificates, and numerous
   mechanisms for doing enrollment have been defined (including EST
   [RFC7030], CMP [RFC4210], and SCEP [RFC8894]).

   [RFC5280] provides mechanisms to deal with multi-level certification
   authorities, but it is not always clear what operating rules apply.

   The certification authority (CA) that is central to [RFC5280]-style
   public key infrastructures can suffer three kinds of failures:

   1.  disclosure of a private key,

   2.  loss of a private key,

   3.  inappropriate signing of a certificate from an unauthorized
       source.

   A PKI which discloses one or more private certification authority
   keys is no longer secure.

   An attacker can create new identities, and forge certificates
   connecting existing identities to attacker-controlled public/private
   key pairs.  This can permit the attacker to impersonate any specific
   device.

   There is an additional kind of failure when the CA is convinced to
   sign (or issue) a certificate which it is not authorized to issue.
   See for instance [ComodoGate].  This is an authorization failure,
   and while a significant event, it does not result in the CA having
   to be re-initialized from scratch.

   This is distinguished from the situation in which a loss as
   described above renders the CA completely useless and likely
   requires a recall of all products that have ever had an IDevID
   issued from this CA.

   If the PKI uses Certificate Revocation Lists (CRLs), then an
   attacker that has access to the private key can also revoke existing
   identities.

   In the other direction, a PKI which loses access to a private key
   can no longer function.  This does not immediately result in a
   failure, as existing identities remain valid until their expiry time
   (notAfter).  However, if CRLs or OCSP are in use, then the inability
   to sign a fresh CRL or OCSP response will result in all identities
   becoming invalid once the existing CRLs or OCSP statements expire.

   This section details some nomenclature about the structure of
   certification authorities.
5.1.  Number of levels of certification authorities

   Section 6.1 of [RFC5280] provides a Basic Path Validation algorithm.
   In that algorithm, the certificates are arranged into a list.

   The certification authority (CA) starts with a Trust Anchor (TrAnc).
   This is counted as the first level of the authority.

   In the degenerate case of a self-signed certificate, this is a
   one-level PKI.

   .----------.<-.
   |Issuer= X |  |
   |Subject=X |--'
   '----------'

   The private key associated with the Trust Anchor signs one or more
   certificates.  When this first level authority signs only End-Entity
   (EE) certificates, then this is a two-level PKI.

   .----------.<-.
   |Issuer= X |  |  root
   |Subject=X +--'  CA
   '--+-----+-'
      |     |
      |     '-------.
      |             |
      v             v
.----EE----.  .----EE----.
|Issuer= X |  |Issuer= X |
|Subject=Y1|  |Subject=Y2|
'----------'  '----------'

   When this first level authority signs subordinate certification
   authorities, and those certification authorities sign End-Entity
   certificates, then this is a three-level PKI.

                     .----------.<-.
              root   |Issuer= X |  |
              CA     |Subject=X +--'
                     '--+-----+-'
                        |     |
            .-----------'     '-----------.
            |                             |
            v                             v
       .----------.                  .----------.
       |Issuer= X |    subordinate   |Issuer= X |
       |Subject=Y1|        CA        |Subject=Y2|
       '--+---+---'                  '--+---+---'
          |   |                         |   |
     .----'   '----.               .----'   '----.
     |             |               |             |
     v             v               v             v
.----EE----.  .----EE----.    .----EE----.  .----EE----.
|Issuer= Y1|  |Issuer= Y1|    |Issuer= Y2|  |Issuer= Y2|
|Subject=Z1|  |Subject=Z2|    |Subject=Z3|  |Subject=Z4|
'----------'  '----------'    '----------'  '----------'

   In general, when arranged as a tree, with the End-Entity
   certificates at the bottom, and the Trust Anchor at the top, the
   level is where the deepest EE certificates are, counting from one.

   It is quite common to have a three-level PKI, where the root of the
   CA is stored in a Hardware Security Module, while the first level of
   subordinate CA is available in an online form.

5.2.  Protection of CA private keys

   The private keys for the certification authorities must be protected
   from disclosure.  The strongest protection is afforded by keeping
   them in an offline device, passing Certificate Signing Requests
   (CSRs) to the offline device by a human process.

   For examples of extreme measures, see [kskceremony].  There is,
   however, a wide spectrum of needs, as exemplified in
   [rootkeyceremony].  The SAS70 audit standard is usually used as a
   basis for the ceremony; see [keyceremony2].

   This is inconvenient, and may involve latencies of days, possibly
   even weeks to months, if the offline device is kept in a locked
   environment that requires multiple keys to be present.

   There is therefore a tension between protection and convenience.
   This is often mitigated by having some levels of the PKI be offline,
   and some levels of the PKI be online.

   There is usually a need to maintain backup copies of the critical
   keys.  It is often appropriate to use secret splitting technology
   such as Shamir Secret Sharing among a number of parties [shamir79].
   This mechanism can be set up such that some threshold k (less than
   the total n) of shares are needed in order to recover the secret.
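   As a toy illustration of that k-of-n property (and only of that;
   operational root-key ceremonies use vetted implementations, hardware
   security modules and audited procedures), the following sketch
   splits and recovers an integer-encoded secret using polynomial
   interpolation over a prime field.

   import secrets

   PRIME = 2**521 - 1   # Mersenne prime, larger than any 256-bit secret

   def split(secret: int, k: int, n: int):
       # random polynomial of degree k-1 whose constant term is the secret
       coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
       poly = lambda x: sum(c * pow(x, i, PRIME)
                            for i, c in enumerate(coeffs)) % PRIME
       return [(x, poly(x)) for x in range(1, n + 1)]

   def recover(shares):
       # Lagrange interpolation at x=0; any k of the n shares suffice
       total = 0
       for xi, yi in shares:
           num = den = 1
           for xj, _ in shares:
               if xj != xi:
                   num = num * -xj % PRIME
                   den = den * (xi - xj) % PRIME
           total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
       return total

   For example, split(key, 3, 5) produces five shares, any three of
   which reconstruct the key via recover().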
5.3.  Supporting provisioned anchors in devices

   IDevID-type Identity (or Birth) Certificates which are provisioned
   into devices need to be signed by a certification authority
   maintained by the manufacturer.  During the period of manufacture of
   new products, the manufacturer needs to be able to sign new Identity
   Certificates.

   During the anticipated lifespan of the devices the manufacturer
   needs to maintain the ability for third parties to validate the
   Identity Certificates.  If there are Certificate Revocation Lists
   (CRLs) involved, then they will need to be re-signed during this
   period.  Even for devices with a short active lifetime, the lifespan
   of the device could be very long if devices are kept in a warehouse
   for many decades before being activated.

   Trust anchors which are provisioned in the devices will have
   corresponding private keys maintained by the manufacturer.  The
   trust anchors will often anchor a PKI which is going to be used for
   a particular purpose.  There will be End-Entity (EE) certificates of
   this PKI which will be used to sign particular artifacts (such as
   software updates), or messages in communications protocols (such as
   TLS connections).  The private keys associated with these EE
   certificates are not stored in the device, but are maintained by the
   manufacturer.  These need even more care than the private keys
   stored in the devices, as compromise of the software update key
   compromises all of the devices, not just a single device.

6.  Evaluation Questions

   This section recaps the set of questions that may need to be
   answered.  This document does not assign valuation to the answers.

6.1.  Integrity and Privacy of on-device data

   initial-enclave-location:  Is the location of the initial software
      trust anchor internal to the CPU package?  Some systems have a
      software verification public key which is built into the CPU
      package, while other systems store that initial key in a
      non-volatile device external to the CPU.

   initial-enclave-integrity-key:  If the first-stage bootloader is
      external to the CPU, and if it is integrity protected, where is
      the key used to check the integrity?

   initial-enclave-privacy-key:  If the first-stage data is external to
      the CPU, is it kept confidential by use of encryption?

   first-stage-initialization:  The number of people involved in the
      first-stage initialization.  An entirely automated system would
      have a number of zero.  A factory with three 8-hour shifts might
      have a number that is a multiple of three.  A system with humans
      involved may be subject to bribery attacks, while a system with
      no humans may be subject to attacks on the system which are hard
      to notice.

   first-second-stage-gap:  If a board is initialized with a
      first-stage bootloader in one location (factory), and then
      shipped to another location, there may be situations where the
      device can not be locked down until the second step.
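   The answers to the questions above could be captured in a simple
   structured record so that evaluations of different devices can be
   compared.  The following sketch is purely illustrative: the field
   names simply mirror the labels above and are not defined by any
   registry or specification.

   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class OnDeviceDataEvaluation:
       initial_enclave_location: str            # "internal" or "external"
       initial_enclave_integrity_key: Optional[str]  # where that key is held
       initial_enclave_privacy_key: Optional[str]    # how external data is encrypted
       first_stage_initialization: int          # humans involved; 0 = automated
       first_second_stage_gap: bool             # True if lock-down waits for stage two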
6.2.  Integrity and Privacy of device identity infrastructure

   For IDevID provisioning, which includes a private key and matching
   certificate installed into the device, the associated public key
   infrastructure that anchors this identity must be maintained by the
   manufacturer.

   identity-pki-level:  how deep are the IDevID certificates that are
      issued?

   identity-time-limits-per-subordinate:  how long is each subordinate
      CA maintained before a new subordinate CA key is generated?
      There may be no time limit, only a device count limit.

   identity-number-per-subordinate:  how many identities are signed by
      a particular subordinate CA before it is retired?  There may be
      no numeric limit, only a time limit.

   identity-anchor-storage:  how is the root CA key stored?  How many
      people are needed to recover the private key?

6.3.  Integrity and Privacy of included trust anchors

   For each trust anchor (public key) stored in the device, there will
   be an associated PKI.  For each of those PKIs the following
   questions need to be answered.

   pki-level:  how deep is the EE that will be evaluated (the trust
      root is at level 1)?

   pki-algorithms:  what kind of algorithms and key sizes will be
      considered to be valid?

   pki-level-locked:  (a Boolean) is the level where the EE certificate
      will be found locked by the device, or can levels be added or
      deleted by the PKI operator without code changes to the device?

   pki-breadth:  how many different non-expired EE certificates is the
      PKI designed to manage?

   pki-lock-policy:  can any EE certificate be used with this trust
      anchor to sign?  Or, is there some kind of policy OID or Subject
      restriction?  Are specific subordinate CAs needed that lead to
      the EE?

   pki-anchor-storage:  how is the private key associated with this
      trust root stored?  How many people are needed to recover it?

7.  Privacy Considerations

   many yet to be detailed

8.  Security Considerations

   This entire document is about security considerations.

9.  IANA Considerations

   This document makes no IANA requests.

10.  Acknowledgements

   Robert Martin of MITRE provided some guidance about citing the SBOM
   efforts.  Carsten Bormann provided many editorial suggestions.

11.  Changelog

12.  References

12.1.  Normative References

   [RFC5280]  Cooper, D., Santesson, S., Farrell, S., Boeyen, S.,
              Housley, R., and W. Polk, "Internet X.509 Public Key
              Infrastructure Certificate and Certificate Revocation
              List (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280,
              May 2008.

   [ieee802-1AR]
              IEEE Standard, "IEEE 802.1AR Secure Device Identifier",
              2009.

12.2.  Informative References

   [RFC8995]  Pritikin, M., Richardson, M., Eckert, T., Behringer, M.,
              and K. Watsen, "Bootstrapping Remote Secure Key
              Infrastructure (BRSKI)", RFC 8995, DOI 10.17487/RFC8995,
              May 2021.

   [I-D.richardson-anima-voucher-delegation]
              Richardson, M. and W. Pan, "Delegated Authority for
              Bootstrap Voucher Artifacts", Work in Progress,
              Internet-Draft,
              draft-richardson-anima-voucher-delegation-03,
              22 March 2021.

   [I-D.friel-anima-brski-cloud]
              Friel, O., Shekh-Yusef, R., and M. Richardson, "BRSKI
              Cloud Registrar", Work in Progress, Internet-Draft,
              draft-friel-anima-brski-cloud-04, 6 April 2021.

   [I-D.ietf-anima-constrained-voucher]
              Richardson, M., Stok, P. V. D., Kampanakis, P., and E.
              Dijk, "Constrained Voucher Artifacts for Bootstrapping
              Protocols", Work in Progress, Internet-Draft,
              draft-ietf-anima-constrained-voucher-11, 11 June 2021.

   [I-D.ietf-anima-brski-async-enroll]
              Fries, S., Brockhaus, H., Lear, E., and T. Werner,
              "Support of asynchronous Enrollment in BRSKI (BRSKI-AE)",
              Work in Progress, Internet-Draft,
              draft-ietf-anima-brski-async-enroll-02, 14 June 2021.
   [I-D.moskowitz-ecdsa-pki]
              Moskowitz, R., Birkholz, H., Xia, L., and M. C.
              Richardson, "Guide for building an ECC pki", Work in
              Progress, Internet-Draft, draft-moskowitz-ecdsa-pki-10,
              31 January 2021.

   [RFC4949]  Shirey, R., "Internet Security Glossary, Version 2",
              FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007.

   [RFC5011]  StJohns, M., "Automated Updates of DNS Security (DNSSEC)
              Trust Anchors", STD 74, RFC 5011, DOI 10.17487/RFC5011,
              September 2007.

   [RFC8366]  Watsen, K., Richardson, M., Pritikin, M., and T. Eckert,
              "A Voucher Artifact for Bootstrapping Protocols",
              RFC 8366, DOI 10.17487/RFC8366, May 2018.

   [RFC8572]  Watsen, K., Farrer, I., and M. Abrahamsson, "Secure Zero
              Touch Provisioning (SZTP)", RFC 8572,
              DOI 10.17487/RFC8572, April 2019.

   [RFC7030]  Pritikin, M., Ed., Yee, P., Ed., and D. Harkins, Ed.,
              "Enrollment over Secure Transport", RFC 7030,
              DOI 10.17487/RFC7030, October 2013.

   [RFC8894]  Gutmann, P., "Simple Certificate Enrolment Protocol",
              RFC 8894, DOI 10.17487/RFC8894, September 2020.

   [RFC4210]  Adams, C., Farrell, S., Kause, T., and T. Mononen,
              "Internet X.509 Public Key Infrastructure Certificate
              Management Protocol (CMP)", RFC 4210,
              DOI 10.17487/RFC4210, September 2005.

   [_3GPP.51.011]
              3GPP, "Specification of the Subscriber Identity Module -
              Mobile Equipment (SIM-ME) interface", 3GPP TS 51.011
              4.15.0, 15 June 2005.

   [RFC6024]  Reddy, R. and C. Wallace, "Trust Anchor Management
              Requirements", RFC 6024, DOI 10.17487/RFC6024,
              October 2010.

   [BedOfNails]
              Wikipedia, "Bed of nails tester", 1 July 2020.

   [pelionfcu]
              ARM Pelion, "Factory provisioning overview",
              28 June 2020.

   [factoringrsa]
              "Factoring RSA keys from certified smart cards:
              Coppersmith in the wild", 16 September 2013.

   [kskceremony]
              Verisign, "DNSSEC Practice Statement for the Root Zone
              ZSK Operator", 2017.

   [rootkeyceremony]
              Community, "Root Key Ceremony, Cryptography Wiki",
              4 April 2020.

   [keyceremony2]
              Digi-Sign, "SAS 70 Key Ceremony", 4 April 2020.

   [shamir79] Shamir, A., "How to share a secret.", 1979.

   [nistsp800-57]
              NIST, "SP 800-57 Part 1 Rev. 4 Recommendation for Key
              Management, Part 1: General", 1 January 2016.

   [fidotechnote]
              FIDO Alliance, "FIDO TechNotes: The Truth about
              Attestation", 19 July 2018.

   [ntiasbom] NTIA, "NTIA Software Component Transparency",
              1 July 2020.

   [cisqsbom] CISQ/Object Management Group, "TOOL-TO-TOOL SOFTWARE BILL
              OF MATERIALS EXCHANGE", 1 July 2020.

   [ComodoGate]
              "Comodo-gate hacker brags about forged certificate
              exploit", 28 March 2011.

   [openbmc]  Linux Foundation/OpenBMC Group, "Defining a Standard
              Baseboard Management Controller Firmware Stack",
              1 July 2020.

   [JTAG]     "Joint Test Action Group", 26 August 2020.

   [JTAGieee] IEEE Standard, "1149.7-2009 - IEEE Standard for
              Reduced-Pin and Enhanced-Functionality Test Access Port
              and Boundary-Scan Architecture",
              DOI 10.1109/IEEESTD.2010.5412866, 2009.

   [rootkeyrollover]
              ICANN, "Proposal for Future Root Zone KSK Rollovers",
              2019.
   [CABFORUM] CA/Browser Forum, "CA/Browser Forum Baseline Requirements
              for the Issuance and Management of Publicly-Trusted
              Certificates, v.1.7.3", October 2020.

   [I-D.richardson-rats-usecases]
              Richardson, M., Wallace, C., and W. Pan, "Use cases for
              Remote Attestation common encodings", Work in Progress,
              Internet-Draft, draft-richardson-rats-usecases-08,
              2 November 2020.

   [I-D.ietf-suit-architecture]
              Moran, B., Tschofenig, H., Brown, D., and M. Meriac, "A
              Firmware Update Architecture for Internet of Things",
              Work in Progress, Internet-Draft,
              draft-ietf-suit-architecture-16, 27 January 2021.

   [I-D.ietf-emu-eap-noob]
              Aura, T., Sethi, M., and A. Peltonen, "Nimble out-of-band
              authentication for EAP (EAP-NOOB)", Work in Progress,
              Internet-Draft, draft-ietf-emu-eap-noob-04,
              16 March 2021.

   [I-D.ietf-rats-architecture]
              Birkholz, H., Thaler, D., Richardson, M., Smith, N., and
              W. Pan, "Remote Attestation Procedures Architecture",
              Work in Progress, Internet-Draft,
              draft-ietf-rats-architecture-12, 23 April 2021.

   [I-D.birkholz-suit-coswid-manifest]
              Birkholz, H., "A SUIT Manifest Extension for Concise
              Software Identifiers", Work in Progress, Internet-Draft,
              draft-birkholz-suit-coswid-manifest-00, 17 July 2018.

   [I-D.birkholz-rats-mud]
              Birkholz, H., "MUD-Based RATS Resources Discovery", Work
              in Progress, Internet-Draft, draft-birkholz-rats-mud-00,
              9 March 2020.

   [RFC8520]  Lear, E., Droms, R., and D. Romascanu, "Manufacturer
              Usage Description Specification", RFC 8520,
              DOI 10.17487/RFC8520, March 2019.

   [I-D.ietf-sacm-coswid]
              Birkholz, H., Fitzgerald-McKay, J., Schmidt, C., and D.
              Waltermire, "Concise Software Identification Tags", Work
              in Progress, Internet-Draft, draft-ietf-sacm-coswid-17,
              22 February 2021.

   [RFC7168]  Nazar, I., "The Hyper Text Coffee Pot Control Protocol
              for Tea Efflux Appliances (HTCPCP-TEA)", RFC 7168,
              DOI 10.17487/RFC7168, April 2014.

   [I-D.bormann-lwig-7228bis]
              Bormann, C., Ersue, M., Keranen, A., and C. Gomez,
              "Terminology for Constrained-Node Networks", Work in
              Progress, Internet-Draft, draft-bormann-lwig-7228bis-06,
              9 March 2020.

   [I-D.ietf-teep-architecture]
              Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler,
              "Trusted Execution Environment Provisioning (TEEP)
              Architecture", Work in Progress, Internet-Draft,
              draft-ietf-teep-architecture-14, 22 February 2021.

Author's Address

   Michael Richardson
   Sandelman Software Works

   Email: mcr+ietf@sandelman.ca