anima Working Group                                         M. Richardson
Internet-Draft                                   Sandelman Software Works
Intended status: Standards Track                                   W. Pan
Expires: 14 January 2021                              Huawei Technologies
                                                              13 July 2020


 Security and Operational considerations for manufacturer installed keys
                               and anchors
          draft-richardson-secdispatch-idevid-considerations-01

Abstract

   This document provides a nomenclature to describe ways in which
   manufacturers secure private keys and public trust anchors in
   devices.
Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 14 January 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Terminology
   2.  Applicability Model
     2.1.  A reference manufacturing/boot process
   3.  Types of Trust Anchors
     3.1.  Secured First Boot Trust Anchor
     3.2.  Software Update Trust Anchor
     3.3.  Trusted Application Manager anchor
     3.4.  Public WebPKI anchors
     3.5.  DNSSEC root
     3.6.  What else?
   4.  Types of Identities
     4.1.  Manufacturer installed IDevID certificates
       4.1.1.  Operational Considerations for Manufacturer IDevID
               Public Key Infrastructure
       4.1.2.  Key Generation process
   5.  Public Key infrastructure for IDevIDs
   6.  Evaluation Questions
   7.  Privacy Considerations
   8.  Security Considerations
   9.  IANA Considerations
   10. Acknowledgements
   11. Changelog
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Authors' Addresses

1.  Introduction

   An increasing number of protocols derive a significant part of their
   security by using trust anchors that are installed by manufacturers.
   Disclosure of the list of trust anchors does not usually cause a
   problem, but changing them in any way does.
   This includes adding, replacing, or deleting anchors.

   Many protocols also leverage manufacturer-installed identities.
   These identities are usually in the form of [ieee802-1AR] Initial
   Device Identity certificates (IDevID).  The identity has two
   components: a private key that must remain under the strict control
   of a trusted part of the device, and a public part (the
   certificate), which (ignoring, for the moment, personal privacy
   concerns) may be freely disclosed.

   There are also situations where identities are tied up in the
   provision of a symmetric shared secret.  A common example is the SIM
   card ([_3GPP.51.011]), which now comes as a virtual SIM, but which
   is usually not provisioned at the factory.  The provision of an
   initial, per-device default password also falls into the category of
   a symmetric shared secret.

   It is further not unusual for many devices (particularly
   smartphones) to also have one or more group identity keys.  This is
   used, for instance, in [fidotechnote] to make claims about being a
   particular model of phone (see [I-D.richardson-rats-usecases]).  The
   keypair that does this is loaded into large batches of phones for
   privacy reasons.

   The trust anchors are used for a variety of purposes.  The following
   uses are specifically called out:

   *  to validate the signature on a software update (as per
      [I-D.ietf-suit-architecture]).

   *  to verify the end-entity TLS server certificate, such as when
      setting up an HTTPS connection.

   *  to verify the [RFC8366] format voucher that provides proof of an
      ownership change.

   Device identity keys are used when performing enrollment requests
   (in [I-D.ietf-anima-bootstrapping-keyinfra], and in some uses of
   [I-D.ietf-emu-eap-noob]).  The device identity certificate is also
   used to sign Evidence by an Attesting Environment (see
   [I-D.ietf-rats-architecture]).

   These security artifacts are used to anchor other chains of
   information: an EAT Claim as to the version of software/firmware
   running on a device (XXX and [I-D.birkholz-suit-coswid-manifest]),
   an EAT claim about legitimate network activity (via
   [I-D.birkholz-rats-mud], or embedded in the IDevID in [RFC8520]).
   Known software versions lead directly to vendor/distributor-signed
   Software Bills of Materials (SBOM), such as those described by
   [I-D.ietf-sacm-coswid] and the NTIA/CISQ/OMG SBOM work underway
   [ntiasbom].

   In order to manage risks and assess vulnerabilities in a Supply
   Chain, it is necessary to determine a degree of trustworthiness in
   each device.  A device may mislead audit systems as to its
   provenance, about its software load, or even about what kind of
   device it is (see [RFC7168] for a humorous example).  In order to
   properly assess the security of a Supply Chain, it is necessary to
   understand the kinds and severity of the threats which a device has
   been designed to resist.  To do this, it is necessary to understand
   the ways in which the different trust anchors and identities are
   initially provisioned, are protected, and are updated.

   To that end, this document details the different trust anchors (TrA)
   and identities (IDs) found in typical devices.  The privacy and
   integrity of the TrAs and IDs is often provided by a different,
   superior artifact.  This relationship is examined.
   While many might desire to assign numerical values to different
   mitigation techniques in order to be able to rank them, this
   document does not attempt to do so, as there are too many other
   (mostly human) factors that would come into play.  Such an effort is
   more properly in the purview of a formal ISO9001 process such as
   ISO14001.

1.1.  Terminology

   This document is not a standards track document, and it does not
   make use of formal requirements language.

   This section will be expanded to include needed terminology as
   required.

   The words Trust Anchor are contracted to TrA rather than TA, in
   order not to confuse it with [I-D.ietf-teep-architecture]'s "Trusted
   Application".

   This document defines a number of hyphenated terms, and they are
   summarized here:

   device-generated :  a private or symmetric key which is generated on
      the device

   factory-generated :  a private or symmetric key which is generated
      by the factory

   mechanically-installed :  when a key or certificate is programmed
      into flash by an out-of-band mechanism like JTAG

   mechanically-transferred :  when a key or certificate is transferred
      into a system via a private interface, such as a serial console,
      a JTAG-managed mailbox, or another physically private interface

   network-transferred :  when a key or certificate is transferred into
      a system using a network interface which would be available after
      the device has shipped.  This applies even if the network is
      physically attached using a bed-of-nails.

   device/factory-co-generated :  when a private or symmetric key is
      derived from a secret previously synchronized between the silicon
      vendor and the factory using a common algorithm.

2.  Applicability Model

   There is a wide variety of devices to which this analysis can apply
   (see [I-D.bormann-lwig-7228bis]).  This document will use a J-group
   class C13 device as a sample.  This class is sufficiently large to
   experience complex issues among multiple CPUs, packages and
   operating systems, but at the same time, small enough that this
   class is often deployed in single-purpose IoT-like uses.  Devices in
   this class often have Secure Enclaves (such as the "Grapeboard"),
   and can include silicon-manufacturer-controlled processors in the
   boot process (the Raspberry Pi boots under control of the GPU).

   Almost all larger systems (servers, laptops, desktops) include a
   Baseboard Management Controller (BMC), which ranges from an M-Group
   Class 3 MCU to a J-Group Class 10 CPU (see, for instance, [openbmc],
   which uses a Linux kernel and system inside the BMC).  As the BMC
   usually has complete access to the main CPU's memory, I/O hardware
   and disk, the boot path security of such a system needs to be
   understood first as being about the security of the BMC.

2.1.  A reference manufacturing/boot process

   In order to provide for immutability and privacy of the critical
   TrAs and IDs, many CPU manufacturers will provide for some kind of
   private memory area which is only accessible when the CPU is in
   certain privileged states.  See the Terminology section of
   [I-D.ietf-teep-architecture], notably TEE, REE, and TAM, and also
   section 4, Architecture.

   The private memory that is important is usually non-volatile and
   rather small.  It may be located inside the CPU silicon die, or it
   may be located externally.
   If the memory is external, then it is usually encrypted by a
   hardware mechanism on the CPU, with only the key kept inside the
   CPU.

   The entire mechanism may be external to the CPU in the form of a
   hardware TPM module, or it may be entirely internal to the CPU in
   the form of a firmware TPM.  It may use a custom interface to the
   rest of the system, or it may implement the TPM 1.2 or TPM 2.0
   specifications.  Those details are important to performing a full
   evaluation, but do not matter to this model (see initial-enclave-
   location below).

   During the manufacturing process, once the components have been
   soldered to the board, the system is usually put through a system-
   level test.  This is often done as a "bed-of-nails" test
   [BedOfNails], where the board has key points attached mechanically
   to a test system.  A [JTAG] process tests the System Under Test, and
   then initializes some firmware into the still empty flash storage.
   It is now common for a factory test image to be loaded first: this
   image will include code to initialize the private memory key
   described above, and will include a first-stage bootloader and some
   kind of (primitive) Trusted Application Manager (TAM).  Embedded in
   the stage one bootloader will be a Trust Anchor that is able to
   verify the second-stage bootloader image.

   After the system has undergone testing, the factory test image is
   erased, leaving the first-stage bootloader.  One or more second-
   stage bootloader images are installed.  The production image may be
   installed at that time, or if the second-stage bootloader is able to
   install it over the network, it may be done that way instead.

   There are many variations of the above process, and this section is
   not attempting to be prescriptive, but to provide enough
   illustration to motivate subsequent terminology.

   The process may be entirely automated, it may be entirely driven by
   humans working in the factory, or it may be some combination of the
   two.

   These steps may all occur on an access-controlled assembly line, or
   the system boards may be shipped from one place to another (maybe
   another country) before undergoing testing.

   Some systems are intended to be shipped in a tamper-proof state, but
   it is usually undesirable for bed-of-nails testing to remain
   possible without evidence of tampering, so the initialization
   process is usually done prior to rendering the system tamper-proof.

   Quality control testing may be done prior to as well as after the
   application of tamper-proofing, as systems which do not pass
   inspection may be reworked to fix flaws, and this should ideally be
   impossible once the system has been made tamper-proof.

3.  Types of Trust Anchors

   Trust Anchors are fundamentally public keys.  They are used to
   validate other digitally signed artifacts.  Typically, these are
   chains of PKIX certificates leading to an End-Entity (EE)
   certificate.

   The chains are usually presented as part of an externally provided
   object, with the term "externally" to be understood as ranging from
   as close as untrusted flash to as far as objects retrieved over a
   network.

   There is no requirement that there be any chain at all: the trust
   anchor can be used to validate a signature over a target object
   directly.

   The trust anchors are often stored in the form of self-signed
   certificates.
   The self-signature does not offer any cryptographic assurance, but
   it does provide a form of error detection, providing verification
   against non-malicious forms of data corruption.  If storage is at a
   premium (such as inside-CPU non-volatile storage) then only the
   public key itself needs to be stored.  For a 256-bit ECDSA key, this
   is 32 bytes of space.

   When evaluating the degree of trust for each trust anchor there are
   four aspects that need to be determined:

   *  can the trust anchor be replaced or modified?

   *  can additional trust anchors be added?

   *  can trust anchors be removed?

   *  how is the private key associated with the trust anchor stored?

   The first three are device-specific properties of how the integrity
   of the trust anchor is maintained.

   The fourth property has nothing to do with the device, but has to do
   with the reputation and care of the entity that maintains the
   private key.

   Different anchors have different purposes.

   These are:

3.1.  Secured First Boot Trust Anchor

   This anchor is part of the first-stage boot loader, and it is used
   to validate a second-stage bootloader which may be stored in
   external flash.

3.2.  Software Update Trust Anchor

   This anchor is used to validate the main application (or operating
   system) load for the device.

   It can be stored in a number of places.  First, it may be identical
   to the Secured First Boot Trust Anchor.

   Second, it may be stored in the second-stage bootloader, and
   therefore its integrity is protected by the Secured First Boot Trust
   Anchor.

   Third, it may be stored in the application code itself, where the
   application validates updates to the application directly (update in
   place), or via a double-buffer arrangement.  The initial (factory)
   load of the application code initializes the trust arrangement.

   In this situation the application code is not in a secured boot
   situation, as the second-stage bootloader does not validate the
   application/operating system before starting it, but it may still
   provide a measured boot mechanism.

3.3.  Trusted Application Manager anchor

   This anchor is part of a [I-D.ietf-teep-architecture] Trusted
   Application Manager.  Code which is signed by this anchor will be
   given execution privileges as described by the manifest which
   accompanies the code.  This privilege may include updating anchors.

3.4.  Public WebPKI anchors

   These anchors are used to verify HTTPS certificates from web sites.
   These anchors are typically distributed as part of desktop browsers,
   and via desktop operating systems.

   The exact set of these anchors is not precisely defined: it is
   usually determined by the browser vendor (e.g., Mozilla, Google,
   Apple, Microsoft), or the operating system vendor (e.g., Apple,
   Google, Microsoft, Ubuntu).  In most cases these vendors look to the
   CA/Browser Forum ([CABFORUM]) for inclusion criteria.

3.5.  DNSSEC root

   This anchor is part of the DNS Security extensions (DNSSEC).  It
   provides an anchor for securing DNS lookups.  Secure DNS lookups may
   be important in order to get access to software updates.  This
   anchor is now scheduled to change approximately every 3 years, with
   the new key announced several years before it is used, making it
   possible to embed a key that will be valid for up to five years.
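   As an illustration of how this anchor material might be gathered at
   firmware build time, the following sketch fetches the trust anchor
   file that IANA publishes for the root zone and lists the key digests
   it contains, together with their validity bounds.  It is a sketch
   only, not a requirement of any cited specification: it assumes
   Python 3 with only the standard library, and it deliberately leaves
   verification of the detached signature over the fetched file out of
   scope.

      # Sketch only: fetch IANA's published root trust anchor material
      # so that it can be embedded into a firmware image at build time.
      import urllib.request
      import xml.etree.ElementTree as ET

      ANCHOR_URL = "https://data.iana.org/root-anchors/root-anchors.xml"

      with urllib.request.urlopen(ANCHOR_URL) as resp:
          tree = ET.parse(resp)

      for kd in tree.getroot().iter("KeyDigest"):
          # validFrom/validUntil bound how long an embedded copy of this
          # digest can reasonably be considered current.
          print(kd.get("id"), kd.get("validFrom"), kd.get("validUntil"),
                kd.findtext("KeyTag"), kd.findtext("Digest"))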
   This trust anchor is typically part of the application/operating
   system code and is usually updated by the manufacturer when they do
   updates.  However, a system which is connected to the Internet may
   update the DNSSEC anchor itself through the mechanism described in
   [RFC5011].

   There are concerns that there may be a chicken-and-egg situation for
   devices that have remained in a powered-off state (or disconnected
   from the Internet) for some period of years.  Upon being
   reconnected, such a device would be unable to do DNSSEC validation.
   This failure would result in it being unable to obtain operating
   system updates that would in turn include the updates to the DNSSEC
   key.

3.6.  What else?

   TBD?

4.  Types of Identities

   Identities are installed during manufacturing time for a variety of
   purposes.

   Identities require some private component.  Asymmetric identities
   (e.g., RSA, ECDSA, EdDSA systems) require a corresponding public
   component, usually in the form of a certificate signed by a trusted
   third party.

   The process of making this coordinated key pair and then installing
   it into the device is called identity provisioning.

4.1.  Manufacturer installed IDevID certificates

   [ieee802-1AR] defines a category of certificates that are to be
   installed by the manufacturer, which contain, at the least, a
   device-unique serial number.

   A number of protocols depend upon this certificate:

   *  [I-D.ietf-anima-bootstrapping-keyinfra] introduces a mechanism
      for new devices (called pledges) to be onboarded into a network
      without intervention from an expert operator.  A number of
      derived protocols such as {{I-D.

   *  [I-D.ietf-rats-architecture] depends upon a key provisioned into
      the Attesting Environment to sign Evidence.

   *  [I-D.ietf-suit-architecture] may depend upon a key provisioned
      into the device in order to decrypt software updates.

   *  TBD

4.1.1.  Operational Considerations for Manufacturer IDevID Public Key
        Infrastructure

   The manufacturer has the responsibility to provision a keypair into
   each device as part of the manufacturing process.  There are a
   variety of mechanisms to accomplish this, which this document will
   overview.

   There are three fundamental ways to generate IDevID certificates for
   devices:

   1.  generating a private key on the device, creating a Certificate
       Signing Request (or equivalent), and then returning a
       certificate to the device.

   2.  generating a private key outside the device, signing the
       certificate, and then installing both into the device.

   3.  deriving the private key from a previously installed secret
       seed, which is shared with only the manufacturer.

   There is a fourth situation where the IDevID is provided as part of
   a Trusted Platform Module (TPM), in which case the TPM vendor may be
   making the same tradeoffs.

   The document [I-D.moskowitz-ecdsa-pki] provides some practical
   instructions on setting up a reference implementation for ECDSA keys
   using a three-tier mechanism.

   This document recommends the use of ECDSA keys for the root and
   intermediate CAs, but there may be operational reasons why an RSA
   intermediate CA will be required for some legacy TPM equipment.

4.1.2.  Key Generation process

4.1.2.1.  On-device private key generation

   Generating the key on-device has the advantage that the private key
   never leaves the device.
   The disadvantage is that the device may not have a verified random
   number generator.  [factoringrsa] is an example of this scenario.

   There are a number of options for how to get the public key securely
   from the device to the certification authority.

   This transmission must be done in an integrity-protected manner, and
   must be securely associated with the assigned serial number.  The
   serial number goes into the certificate, and the resulting
   certificate needs to be loaded into the manufacturer's asset
   database.  This asset database needs to be shared with the MASA.

   One way to do the transmission is during a bed-of-nails (see
   [BedOfNails]) or boundary-scan test in the factory.  When done via a
   physical connection like this, it is referred to as the
   _device-generated_ / _mechanically-transferred_ method.

   There are other ways that could be used, where a certificate signing
   request is sent over a special network channel when the device is
   powered up in the factory.  This is referred to as the
   _device-generated_ / _network-transferred_ method.

   Regardless of how the certificate signing request is sent from the
   device to the factory, and how the certificate is returned to the
   device, a concern of production-line managers is that the assembly
   line may have to wait for the certification authority to respond
   with the certificate.

   After the key generation, the device needs to set a flag such that
   it no longer generates a new key, nor accepts a new IDevID via the
   factory connection.  This may be a software setting, or could be as
   dramatic as blowing a fuse.

   The risk is that if an attacker with physical access is able to put
   the device back into an unconfigured mode, then the attacker may be
   able to substitute a new certificate into the device.  It is
   difficult to construct a rationale for doing this, unless the
   network initialization also permits an attacker to load or replace
   trust anchors at the same time.

   Because the key is generated inside the device, it is assumed that
   the device can never be convinced to disclose the private key.

4.1.2.2.  Off-device private key generation

   Generating the key off-device has the advantage that the randomness
   of the private key can be better analyzed.  As the private key is
   available to the manufacturing infrastructure, the authenticity of
   the public key is well known ahead of time.

   If the device does not come with a serial number in silicon, then
   one should be assigned and placed into a certificate.  The private
   key and certificate could be programmed into the device along with
   the initial bootloader firmware in a single step.

   Aside from the change of origin for the randomness, a major
   advantage of this mechanism is that it can be done with a single
   write to the flash.  The entire firmware of the device, including
   trust anchors and private keys, can be loaded in a single write
   pass.  Given some pipelining of the generation of the keys and the
   creation of certificates, it may be possible to install unique
   identities without taking any additional time.

   The major downside to generating the private key off-device is that
   it could be seen by the manufacturing infrastructure.  It could be
   compromised by humans in the factory, or the equipment could be
   compromised.  The use of this method increases the value of
   attacking the manufacturing infrastructure.
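   A minimal sketch of such a factory-generated provisioning step is
   shown below.  It assumes Python 3 with the pyca/cryptography
   package; the CA material, the subject common name, and the serial-
   number scheme are illustrative placeholders rather than anything
   defined by this document or by [ieee802-1AR].

      # Sketch only: the provisioning station creates the device private
      # key and its IDevID certificate, then hands both artifacts to the
      # flash-programming step so they can be written in a single pass.
      import datetime

      from cryptography import x509
      from cryptography.hazmat.primitives import hashes, serialization
      from cryptography.hazmat.primitives.asymmetric import ec
      from cryptography.x509.oid import NameOID

      def make_idevid(ca_key, ca_cert, device_serial):
          # Private key generated off-device, on the provisioning station.
          dev_key = ec.generate_private_key(ec.SECP256R1())

          subject = x509.Name([
              x509.NameAttribute(NameOID.COMMON_NAME, u"example device"),
              # The DN serialNumber must be unique across all of the
              # manufacturer's products (see Section 5).
              x509.NameAttribute(NameOID.SERIAL_NUMBER, device_serial),
          ])

          cert = (
              x509.CertificateBuilder()
              .subject_name(subject)
              .issuer_name(ca_cert.subject)
              .public_key(dev_key.public_key())
              .serial_number(x509.random_serial_number())
              .not_valid_before(datetime.datetime.utcnow())
              # IDevIDs conventionally carry a "no well-defined expiry"
              # time of 9999-12-31 (see Section 5 and [ieee802-1AR]).
              .not_valid_after(datetime.datetime(9999, 12, 31, 23, 59, 59))
              .sign(ca_key, hashes.SHA256())
          )

          key_pem = dev_key.private_bytes(
              serialization.Encoding.PEM,
              serialization.PrivateFormat.PKCS8,
              serialization.NoEncryption(),
          )
          return key_pem, cert.public_bytes(serialization.Encoding.PEM)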
   If keys are generated by the manufacturing plant, and are
   immediately installed but never stored, then the window in which an
   attacker can gain access to the private key is immensely reduced.

   As in the previous case, the transfer may be done via physical
   interfaces such as bed-of-nails, giving the _factory-generated_ /
   _mechanically-transferred_ method.

   There is also the possibility of having a _factory-generated_ /
   _network-transferred_ key.  There is support for "server-generated"
   keys in [RFC7030], [I-D.gutmann-scep], and [RFC4210].  All methods
   strongly recommend encrypting the private key for transfer.  This is
   difficult to comply with, as there is not yet any private key
   material in the device, so in many cases it will not be possible to
   encrypt the private key.

4.1.2.3.  Key setup based on 256-bit secret seed

   A hybrid of the previous two methods leverages a symmetric key that
   is often provided by a silicon vendor to OEM manufacturers.

   Each CPU (or a Trusted Execution Environment
   [I-D.ietf-teep-architecture], or a TPM) is provisioned at
   fabrication time with a unique, secret seed, usually at least 256
   bits in size.

   This value is revealed to the OEM board manufacturer only via a
   secure channel.  Upon first boot, the system (probably within a TEE,
   or within a TPM) will generate a key pair using the seed to
   initialize a Pseudo-Random-Number-Generator (PRNG).  The OEM, in a
   separate system, will initialize the same PRNG and generate the same
   key pair.  The OEM then derives the public key part, signs it and
   turns it into a certificate.  The private part is then destroyed,
   ideally never stored or seen by anyone.  The certificate (being
   public information) is placed into a database; in some cases it is
   loaded by the device as its IDevID certificate, in other cases it is
   retrieved during the onboarding process based upon a unique serial
   number asserted by the device.

   This method appears to have all of the downsides of the previous two
   methods: the device must correctly derive its own private key, and
   the OEM has access to the private key, making it also vulnerable.
   The secret seed must be created in a secure way and it must also be
   communicated securely.

   There are some advantages to the OEM, however: the major one is that
   the problem of securely communicating with the device is outsourced
   to the silicon vendor.  The private keys and certificates may be
   calculated by the OEM asynchronously to the manufacturing process,
   either done in batches in advance of actual manufacturing, or on
   demand when an IDevID is demanded.  Doing the processing in this way
   permits the key derivation system to be completely disconnected from
   any network, and requires placing very little trust in the system
   assembly factory.  Operational security of the kind often
   (incorrectly) presented in fictionalized stories, such as a
   "mainframe" system to which only physical access is permitted,
   begins to become realistic.  That trust has been replaced with a
   heightened trust placed in the silicon (integrated circuit)
   fabrication facility.

   The downsides of this method to the OEM are: they must be supplied
   by a trusted silicon fabrication system, which must communicate the
   set of secret seeds to the OEM in batches, and the OEM must store
   and care for these keys very carefully.
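   The derivation described above can be illustrated with a short
   sketch: both the device (within its TEE or TPM) and the OEM's
   offline signing system run the same computation over the shared
   per-chip seed and arrive at the same key pair, so only the public
   part ever needs to leave the OEM system, as a certificate.  The
   sketch assumes Python 3 with the pyca/cryptography package; the use
   of HKDF and Ed25519, the derivation label, and the all-zero
   placeholder seed are illustrative assumptions only.

      # Sketch only: deterministic key derivation from a shared seed
      # (the "device/factory-co-generated" method).
      from cryptography.hazmat.primitives import hashes, serialization
      from cryptography.hazmat.primitives.asymmetric import ed25519
      from cryptography.hazmat.primitives.kdf.hkdf import HKDF

      def derive_idevid_key(seed):
          # Expand the fabrication-time seed into key material.  A purpose
          # label keeps other keys derived from the same seed distinct.
          ikm = HKDF(
              algorithm=hashes.SHA256(),
              length=32,
              salt=None,
              info=b"IDevID key derivation (example)",
          ).derive(seed)
          return ed25519.Ed25519PrivateKey.from_private_bytes(ikm)

      RAW = (serialization.Encoding.Raw, serialization.PublicFormat.Raw)
      seed = bytes(32)                      # placeholder per-chip secret
      device_key = derive_idevid_key(seed)  # computed inside the TEE/TPM
      oem_key = derive_idevid_key(seed)     # computed by the OEM, offline
      assert (device_key.public_key().public_bytes(*RAW)
              == oem_key.public_key().public_bytes(*RAW))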
   There are some operational advantages to keeping the secret seeds
   around in some form, as the same secret seed could be used for other
   things.  There are some significant downsides to keeping that secret
   seed around.

5.  Public Key infrastructure for IDevIDs

   A three-tier PKI infrastructure is appropriate.  This entails having
   a root CA created with the key kept offline, and a number of
   intermediate CAs that have online keys that issue "day-to-day"
   certificates.

   The root private key should be kept offline, quite probably in a
   Hardware Security Module if financially feasible.  If not, then it
   should be secret-split across seven to nine people, with a threshold
   of four to five people.  The split secrets should be kept in
   geographically diverse places if the manufacturer has operations in
   multiple places.  For examples of extreme measures, see
   [kskceremony].  There is, however, a wide spectrum of needs, as
   exemplified in [rootkeyceremony].  The SAS70 audit standard is
   usually used as a basis for the Ceremony, see [keyceremony2].

   Ongoing access to the root CA is important, but not as critical as
   access to the MASA key.

   The root CA is then used to sign a number of intermediate CA
   certificates.  If manufacturing occurs in multiple factories, then
   an intermediate CA for each factory is appropriate.  It is also
   reasonable to use different intermediate CAs for different product
   lines.  It may also be valuable to split IDevID certificates across
   intermediate CAs in a round-robin fashion for products with high
   volumes.

   Cycling the intermediate CAs after a period of a few months or so is
   quite a reasonable strategy.  An intermediate CA's private key may
   be destroyed after it has signed some number of IDevIDs, and a new
   key generated.  The IDevID certificates have very long (ideally
   infinite) validity lifetimes for reasons that [ieee802-1AR]
   explains.  The intermediate CAs will have a private key, likely kept
   online, which is used to sign each generated IDevID.  Once the
   IDevIDs are created, the private key is no longer needed and can
   either be destroyed, or taken offline.  In other PKI deployments,
   the intermediate CA's private key (or another designated key) is
   often needed to sign OCSP responses [RFC6960] or CRLs [RFC5280].  As
   the IDevID process does not in general support revocation, keeping
   such keys online is not necessary.  {EDIT NOTE: REVIEW of this
   NEEDED}

   The intermediate CA certificate SHOULD be signed by the root CA with
   an indefinite (notAfter: 99991231) duration as well.

   In all cases the product DN-serialNumber embedded in the certificate
   must be unique across all products produced by the manufacturer.
   This suggests some amount of structure to the product DN-
   serialNumber, such that different intermediate CAs do not need to
   coordinate when issuing certificates.

6.  Evaluation Questions

   This section recaps the set of questions that may need to be
   answered.  This document does not assign valuation to the answers.

   initial-enclave-location :  Is the location of the initial software
      trust anchor internal to the CPU package?

   initial-enclave-integrity-key :  If the first-stage bootloader is
      external to the CPU, and it is integrity protected, where is the
      key used to check the integrity?

   initial-enclave-privacy-key :  If the first-stage data is external
      to the CPU, is it encrypted?
   first-stage-initialization :  The number of people involved in the
      first-stage initialization.  An entirely automated system would
      have a count of zero.  A factory with three 8-hour shifts might
      have a count that is a multiple of three.  A system with humans
      involved may be subject to bribery attacks, while a system with
      no humans may be subject to attacks on the system which are hard
      to notice.

   first-second-stage-gap :  If a board is initialized with a first-
      stage bootloader in one location (factory), and then shipped to
      another location, there may be situations where the device cannot
      be locked down until the second step.

7.  Privacy Considerations

   Many yet to be detailed.

8.  Security Considerations

   This entire document is about security considerations.

9.  IANA Considerations

   This document makes no IANA requests.

10.  Acknowledgements

   Hello.

11.  Changelog

12.  References

12.1.  Normative References

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017.

   [I-D.moskowitz-ecdsa-pki]
              Moskowitz, R., Birkholz, H., Xia, L., and M. Richardson,
              "Guide for building an ECC pki", Work in Progress,
              Internet-Draft, draft-moskowitz-ecdsa-pki-08, 14 February
              2020.

   [ieee802-1AR]
              IEEE Standard, "IEEE 802.1AR Secure Device Identifier",
              2009.

12.2.  Informative References

   [I-D.richardson-anima-masa-considerations]
              Richardson, M. and W. Pan, "Operational Considerations
              for Voucher infrastructure for BRSKI MASA", Work in
              Progress, Internet-Draft,
              draft-richardson-anima-masa-considerations-04, 9 June
              2020.

   [I-D.ietf-anima-bootstrapping-keyinfra]
              Pritikin, M., Richardson, M., Eckert, T., Behringer, M.,
              and K. Watsen, "Bootstrapping Remote Secure Key
              Infrastructures (BRSKI)", Work in Progress, Internet-
              Draft, draft-ietf-anima-bootstrapping-keyinfra-41, 8
              April 2020.

   [RFC5011]  StJohns, M., "Automated Updates of DNS Security (DNSSEC)
              Trust Anchors", STD 74, RFC 5011, DOI 10.17487/RFC5011,
              September 2007.

   [RFC8366]  Watsen, K., Richardson, M., Pritikin, M., and T. Eckert,
              "A Voucher Artifact for Bootstrapping Protocols",
              RFC 8366, DOI 10.17487/RFC8366, May 2018.

   [RFC7030]  Pritikin, M., Ed., Yee, P., Ed., and D. Harkins, Ed.,
              "Enrollment over Secure Transport", RFC 7030,
              DOI 10.17487/RFC7030, October 2013.

   [I-D.gutmann-scep]
              Gutmann, P., "Simple Certificate Enrolment Protocol",
              Work in Progress, Internet-Draft, draft-gutmann-scep-16,
              27 March 2020.

   [RFC4210]  Adams, C., Farrell, S., Kause, T., and T. Mononen,
              "Internet X.509 Public Key Infrastructure Certificate
              Management Protocol (CMP)", RFC 4210,
              DOI 10.17487/RFC4210, September 2005.

   [_3GPP.51.011]
              3GPP, "Specification of the Subscriber Identity Module -
              Mobile Equipment (SIM-ME) interface", 15 June 2005.

   [BedOfNails]
              Wikipedia, "Bed of nails tester", July 2020.

   [pelionfcu]
              ARM Pelion, "Factory provisioning overview", 28 June
              2020.

   [factoringrsa]
              "Factoring RSA keys from certified smart cards:
              Coppersmith in the wild", 16 September 2013.

   [RambusCryptoManager]
              Qualcomm press release, "Qualcomm Licenses Rambus
              CryptoManager Key and Feature Management Security
              Solution", 2014.
   [kskceremony]
              Verisign, "DNSSEC Practice Statement for the Root Zone
              ZSK Operator", 2017.

   [rootkeyceremony]
              Community, "Root Key Ceremony, Cryptography Wiki", April
              2020.

   [keyceremony2]
              Digi-Sign, "SAS 70 Key Ceremony", April 2020.

   [nistsp800-57]
              NIST, "SP 800-57 Part 1 Rev. 4 Recommendation for Key
              Management, Part 1: General", 1 January 2016.

   [fidotechnote]
              FIDO Alliance, "FIDO TechNotes: The Truth about
              Attestation", July 2018.

   [ntiasbom] CISQ/Object Management Group, "TOOL-TO-TOOL SOFTWARE BILL
              OF MATERIALS EXCHANGE", July 2020.

   [openbmc]  Linux Foundation/OpenBMC Group, "Defining a Standard
              Baseboard Management Controller Firmware Stack", July
              2020.

   [JTAG]     IEEE Standard, "1149.7-2009 - IEEE Standard for Reduced-
              Pin and Enhanced-Functionality Test Access Port and
              Boundary-Scan Architecture", 2009.

   [rootkeyrollover]
              ICANN, "Proposal for Future Root Zone KSK Rollovers",
              2019.

   [CABFORUM] CA/Browser Forum, "CA/Browser Forum Baseline Requirements
              for the Issuance and Management of Publicly-Trusted
              Certificates, v.1.2.2", October 2014.

   [I-D.richardson-rats-usecases]
              Richardson, M., Wallace, C., and W. Pan, "Use cases for
              Remote Attestation common encodings", Work in Progress,
              Internet-Draft, draft-richardson-rats-usecases-07, 9
              March 2020.

   [I-D.ietf-suit-architecture]
              Moran, B., Tschofenig, H., Brown, D., and M. Meriac, "A
              Firmware Update Architecture for Internet of Things",
              Work in Progress, Internet-Draft,
              draft-ietf-suit-architecture-11, 27 May 2020.

   [I-D.ietf-emu-eap-noob]
              Aura, T. and M. Sethi, "Nimble out-of-band authentication
              for EAP (EAP-NOOB)", Work in Progress, Internet-Draft,
              draft-ietf-emu-eap-noob-02, 12 July 2020.

   [I-D.ietf-rats-architecture]
              Birkholz, H., Thaler, D., Richardson, M., Smith, N., and
              W. Pan, "Remote Attestation Procedures Architecture",
              Work in Progress, Internet-Draft,
              draft-ietf-rats-architecture-05, 10 July 2020.

   [I-D.birkholz-suit-coswid-manifest]
              Birkholz, H., "A SUIT Manifest Extension for Concise
              Software Identifiers", Work in Progress, Internet-Draft,
              draft-birkholz-suit-coswid-manifest-00, 17 July 2018.

   [I-D.birkholz-rats-mud]
              Birkholz, H., "MUD-Based RATS Resources Discovery", Work
              in Progress, Internet-Draft, draft-birkholz-rats-mud-00,
              9 March 2020.

   [RFC8520]  Lear, E., Droms, R., and D. Romascanu, "Manufacturer
              Usage Description Specification", RFC 8520,
              DOI 10.17487/RFC8520, March 2019.

   [I-D.ietf-sacm-coswid]
              Birkholz, H., Fitzgerald-McKay, J., Schmidt, C., and D.
              Waltermire, "Concise Software Identification Tags", Work
              in Progress, Internet-Draft, draft-ietf-sacm-coswid-15, 1
              May 2020.

   [RFC7168]  Nazar, I., "The Hyper Text Coffee Pot Control Protocol
              for Tea Efflux Appliances (HTCPCP-TEA)", RFC 7168,
              DOI 10.17487/RFC7168, April 2014.

   [I-D.bormann-lwig-7228bis]
              Bormann, C., Ersue, M., Keranen, A., and C. Gomez,
              "Terminology for Constrained-Node Networks", Work in
              Progress, Internet-Draft, draft-bormann-lwig-7228bis-06,
              9 March 2020.

   [I-D.ietf-teep-architecture]
              Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler,
              "Trusted Execution Environment Provisioning (TEEP)
              Architecture", Work in Progress, Internet-Draft,
              draft-ietf-teep-architecture-11, 2 July 2020.

   [RFC6960]  Santesson, S., Myers, M., Ankney, R., Malpani, A.,
              Galperin, S., and C. Adams, "X.509 Internet Public Key
              Infrastructure Online Certificate Status Protocol -
              OCSP", RFC 6960, DOI 10.17487/RFC6960, June 2013.

   [RFC5280]  Cooper, D., Santesson, S., Farrell, S., Boeyen, S.,
              Housley, R., and W. Polk, "Internet X.509 Public Key
              Infrastructure Certificate and Certificate Revocation
              List (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May
              2008.

Authors' Addresses

   Michael Richardson
   Sandelman Software Works

   Email: mcr+ietf@sandelman.ca

   Wei Pan
   Huawei Technologies

   Email: william.panwei@huawei.com