idnits 2.17.1 draft-ietf-rats-architecture-14.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** The document seems to lack both a reference to RFC 2119 and the recommended RFC 2119 boilerplate, even if it appears to use RFC 2119 keywords. RFC 2119 keyword, line 1677: '...e by the factory SHOULD be generated b...' Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Line 518 has weird spacing: '...tloader v ...' == Line 527 has weird spacing: '... Claims v | ...' -- The document date (9 December 2021) is 869 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Outdated reference: A later version (-07) exists of draft-birkholz-rats-tuda-05 == Outdated reference: A later version (-19) exists of draft-ietf-teep-architecture-15 Summary: 1 error (**), 0 flaws (~~), 5 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 RATS Working Group H. Birkholz 3 Internet-Draft Fraunhofer SIT 4 Intended status: Informational D. Thaler 5 Expires: 12 June 2022 Microsoft 6 M. Richardson 7 Sandelman Software Works 8 N. Smith 9 Intel 10 W. 
Pan 11 Huawei Technologies 12 9 December 2021 14 Remote Attestation Procedures Architecture 15 draft-ietf-rats-architecture-14 17 Abstract 19 In network protocol exchanges it is often useful for one end of a 20 communication to know whether the other end is in an intended 21 operating state. This document provides an architectural overview of 22 the entities involved that make such tests possible through the 23 process of generating, conveying, and evaluating evidentiary claims. 24 An attempt is made to provide for a model that is neutral toward 25 processor architectures, the content of claims, and protocols. 27 Note to Readers 29 Discussion of this document takes place on the RATS Working Group 30 mailing list (rats@ietf.org), which is archived at 31 https://mailarchive.ietf.org/arch/browse/rats/ 32 (https://mailarchive.ietf.org/arch/browse/rats/). 34 Source for this draft and an issue tracker can be found at 35 https://github.com/ietf-rats-wg/architecture (https://github.com/ 36 ietf-rats-wg/architecture). 38 Status of This Memo 40 This Internet-Draft is submitted in full conformance with the 41 provisions of BCP 78 and BCP 79. 43 Internet-Drafts are working documents of the Internet Engineering 44 Task Force (IETF). Note that other groups may also distribute 45 working documents as Internet-Drafts. The list of current Internet- 46 Drafts is at https://datatracker.ietf.org/drafts/current/. 48 Internet-Drafts are draft documents valid for a maximum of six months 49 and may be updated, replaced, or obsoleted by other documents at any 50 time. It is inappropriate to use Internet-Drafts as reference 51 material or to cite them other than as "work in progress." 53 This Internet-Draft will expire on 12 June 2022. 55 Copyright Notice 57 Copyright (c) 2021 IETF Trust and the persons identified as the 58 document authors. All rights reserved. 
60 This document is subject to BCP 78 and the IETF Trust's Legal 61 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 62 license-info) in effect on the date of publication of this document. 63 Please review these documents carefully, as they describe your rights 64 and restrictions with respect to this document. Code Components 65 extracted from this document must include Revised BSD License text as 66 described in Section 4.e of the Trust Legal Provisions and are 67 provided without warranty as described in the Revised BSD License. 69 Table of Contents 71 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 72 2. Reference Use Cases . . . . . . . . . . . . . . . . . . . . . 5 73 2.1. Network Endpoint Assessment . . . . . . . . . . . . . . . 5 74 2.2. Confidential Machine Learning Model Protection . . . . . 5 75 2.3. Confidential Data Protection . . . . . . . . . . . . . . 6 76 2.4. Critical Infrastructure Control . . . . . . . . . . . . . 6 77 2.5. Trusted Execution Environment Provisioning . . . . . . . 7 78 2.6. Hardware Watchdog . . . . . . . . . . . . . . . . . . . . 7 79 2.7. FIDO Biometric Authentication . . . . . . . . . . . . . . 7 80 3. Architectural Overview . . . . . . . . . . . . . . . . . . . 8 81 3.1. Two Types of Environments of an Attester . . . . . . . . 9 82 3.2. Layered Attestation Environments . . . . . . . . . . . . 11 83 3.3. Composite Device . . . . . . . . . . . . . . . . . . . . 13 84 3.4. Implementation Considerations . . . . . . . . . . . . . . 15 85 4. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 16 86 4.1. Roles . . . . . . . . . . . . . . . . . . . . . . . . . . 16 87 4.2. Artifacts . . . . . . . . . . . . . . . . . . . . . . . . 17 88 5. Topological Patterns . . . . . . . . . . . . . . . . . . . . 18 89 5.1. Passport Model . . . . . . . . . . . . . . . . . . . . . 18 90 5.2. Background-Check Model . . . . . . . . . . . . . . . . . 20 91 5.3. Combinations . . . . . . . . . . . . . . . . . . 
. . . . 21 92 6. Roles and Entities . . . . . . . . . . . . . . . . . . . . . 22 93 7. Trust Model . . . . . . . . . . . . . . . . . . . . . . . . . 23 94 7.1. Relying Party . . . . . . . . . . . . . . . . . . . . . . 23 95 7.2. Attester . . . . . . . . . . . . . . . . . . . . . . . . 24 96 7.3. Relying Party Owner . . . . . . . . . . . . . . . . . . . 24 97 7.4. Verifier . . . . . . . . . . . . . . . . . . . . . . . . 25 98 7.5. Endorser, Reference Value Provider, and Verifier Owner . 27 99 8. Conceptual Messages . . . . . . . . . . . . . . . . . . . . . 27 100 8.1. Evidence . . . . . . . . . . . . . . . . . . . . . . . . 27 101 8.2. Endorsements . . . . . . . . . . . . . . . . . . . . . . 28 102 8.3. Reference Values . . . . . . . . . . . . . . . . . . . . 28 103 8.4. Attestation Results . . . . . . . . . . . . . . . . . . . 28 104 8.5. Appraisal Policies . . . . . . . . . . . . . . . . . . . 30 105 9. Claims Encoding Formats . . . . . . . . . . . . . . . . . . . 30 106 10. Freshness . . . . . . . . . . . . . . . . . . . . . . . . . . 32 107 10.1. Explicit Timekeeping using Synchronized Clocks . . . . . 32 108 10.2. Implicit Timekeeping using Nonces . . . . . . . . . . . 33 109 10.3. Implicit Timekeeping using Epoch IDs . . . . . . . . . . 33 110 10.4. Discussion . . . . . . . . . . . . . . . . . . . . . . . 34 111 11. Privacy Considerations . . . . . . . . . . . . . . . . . . . 35 112 12. Security Considerations . . . . . . . . . . . . . . . . . . . 36 113 12.1. Attester and Attestation Key Protection . . . . . . . . 36 114 12.1.1. On-Device Attester and Key Protection . . . . . . . 36 115 12.1.2. Attestation Key Provisioning Processes . . . . . . . 37 116 12.2. Integrity Protection . . . . . . . . . . . . . . . . . . 38 117 12.3. Epoch ID-based Attestation . . . . . . . . . . . . . . . 39 118 12.4. Trust Anchor Protection . . . . . . . . . . . . . . . . 40 119 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 40 120 14. Acknowledgments . . . . . 
. . . . . . . . . . . . . . . . . . 40 121 15. Notable Contributions . . . . . . . . . . . . . . . . . . . . 40 122 16. Appendix A: Time Considerations . . . . . . . . . . . . . . . 41 123 16.1. Example 1: Timestamp-based Passport Model Example . . . 42 124 16.2. Example 2: Nonce-based Passport Model Example . . . . . 44 125 16.3. Example 3: Epoch ID-based Passport Model Example . . . . 45 126 16.4. Example 4: Timestamp-based Background-Check Model 127 Example . . . . . . . . . . . . . . . . . . . . . . . . 46 128 16.5. Example 5: Nonce-based Background-Check Model Example . 47 129 17. References . . . . . . . . . . . . . . . . . . . . . . . . . 48 130 17.1. Normative References . . . . . . . . . . . . . . . . . . 48 131 17.2. Informative References . . . . . . . . . . . . . . . . . 48 132 Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . 50 133 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 52 135 1. Introduction 137 The question of how one system can know that another system can be 138 trusted has found new interest and relevance in a world where trusted 139 computing elements are maturing in processor architectures. 141 Systems that have been attested and verified to be in a good state 142 (for some value of "good") can improve overall system posture. 143 Conversely, systems that cannot be attested and verified to be in a 144 good state can be given reduced access or privileges, taken out of 145 service, or otherwise flagged for repair. 147 For example: 149 * A bank back-end system might refuse to transact with another 150 system that is not known to be in a good state. 152 * A healthcare system might refuse to transmit electronic healthcare 153 records to a system that is not known to be in a good state. 
155 In Remote Attestation Procedures (RATS), one peer (the "Attester") 156 produces believable information about itself - Evidence - to enable a 157 remote peer (the "Relying Party") to decide whether to consider that 158 Attester a trustworthy peer or not. RATS are facilitated by an 159 additional vital party, the Verifier. 161 The Verifier appraises Evidence via appraisal policies and creates 162 the Attestation Results to support Relying Parties in their decision 163 process. This document defines a flexible architecture consisting of 164 attestation roles and their interactions via conceptual messages. 165 Additionally, this document defines a universal set of terms that can 166 be mapped to various existing and emerging Remote Attestation 167 Procedures. Common topological patterns and the sequence of data 168 flows associated with them, such as the "Passport Model" and the 169 "Background-Check Model", are illustrated. The purpose is to define 170 useful terminology for remote attestation and enable readers to map 171 their solution architecture to the canonical attestation architecture 172 provided here. Having a common terminology that provides well- 173 understood meanings for common themes such as roles, device 174 composition, topological patterns, and appraisal procedures is vital 175 for semantic interoperability across solutions and platforms 176 involving multiple vendors and providers. 178 Amongst other things, this document is about trust and 179 trustworthiness. Trust is a choice one makes about another system. 180 Trustworthiness is a quality about the other system that can be used 181 in making one's decision to trust it or not. This is a subtle 182 difference, and being familiar with the difference is crucial for 183 using this document. 
Additionally, the concepts of freshness and 184 trust relationships with respect to RATS are elaborated on to enable 185 implementers to choose appropriate solutions to compose their Remote 186 Attestation Procedures. 188 2. Reference Use Cases 190 This section covers a number of representative and generic use cases 191 for remote attestation, independent of specific solutions. The 192 purpose is to provide motivation for various aspects of the 193 architecture presented in this document. Many other use cases exist, 194 and this document does not intend to have a complete list, only to 195 illustrate a set of use cases that collectively cover all the 196 functionality required in the architecture. 198 Each use case includes a description followed by an additional 199 summary of the Attester and Relying Party roles derived from the use 200 case. 202 2.1. Network Endpoint Assessment 204 Network operators want trustworthy reports that include identity and 205 version information about the hardware and software on the machines 206 attached to their network. Reports are evaluated for various purposes, 207 such as inventory summaries, audit results, and anomaly notifications, 208 and they typically include the maintenance of log records or trend reports. 209 The network operator may also want a policy by which full access is 210 only granted to devices that meet some definition of hygiene, and so 211 wants to get Claims about such information and verify its validity. 212 Remote attestation is desired to prevent vulnerable or compromised 213 devices from getting access to the network and potentially harming 214 others. 216 Typically, solutions start with a specific component (called a root 217 of trust) that is intended to provide trustworthy device identity and 218 protected storage for measurements. 
The system components perform a 219 series of measurements that may be signed via functions provided by a 220 root of trust, considered as Evidence about present system 221 components, such as hardware, firmware, BIOS, software, etc. 223 Attester: A device desiring access to a network. 225 Relying Party: Network equipment such as a router, switch, or access 226 point, responsible for admission of the device into the network. 228 2.2. Confidential Machine Learning Model Protection 230 A device manufacturer wants to protect its intellectual property. 231 The intellectual property's scope primarily encompasses the machine 232 learning (ML) model that is deployed in the devices purchased by its 233 customers. The protection goals include preventing attackers, 234 potentially the customer themselves, from seeing the details of the 235 model. 237 This typically works by having some protected environment in the 238 device go through a remote attestation with some manufacturer service 239 that can assess its trustworthiness. If remote attestation succeeds, 240 then the manufacturer service releases either the model, or a key to 241 decrypt a model already deployed on the Attester in encrypted form, 242 to the requester. 244 Attester: A device desiring to run an ML model. 246 Relying Party: A server or service holding ML models it desires to 247 protect. 249 2.3. Confidential Data Protection 251 This is a generalization of the ML model use case above, where the 252 data can be any highly confidential data, such as health data about 253 customers, payroll data about employees, future business plans, etc. 254 As part of the attestation procedure, an assessment is made against a 255 set of policies to evaluate the state of the system that is 256 requesting the confidential data. Attestation is desired to prevent 257 leaking data via compromised devices. 259 Attester: An entity desiring to retrieve confidential data. 
261 Relying Party: An entity that holds confidential data for release to 262 authorized entities. 264 2.4. Critical Infrastructure Control 266 Potentially harmful physical equipment (e.g., power grid, traffic 267 control, hazardous chemical processing, etc.) is connected to a 268 network in support of critical infrastructure. The organization 269 managing such infrastructure needs to ensure that only authorized 270 code and users can control corresponding critical processes, and that 271 these processes are protected from unauthorized manipulation or other 272 threats. When a protocol operation can affect a critical system 273 component of the infrastructure, devices attached to that critical 274 component require some assurances depending on the security context, 275 including that: a requesting device or application has not been 276 compromised, and the requesters and actors act on applicable 277 policies. As such, remote attestation can be used to only accept 278 commands from requesters that are within policy. 280 Attester: A device or application wishing to control physical 281 equipment. 283 Relying Party: A device or application connected to potentially 284 dangerous physical equipment (hazardous chemical processing, 285 traffic control, power grid, etc.). 287 2.5. Trusted Execution Environment Provisioning 289 A Trusted Application Manager (TAM) server is responsible for 290 managing the applications running in a Trusted Execution Environment 291 (TEE) of a client device, as described in 292 [I-D.ietf-teep-architecture]. To achieve its purpose, the TAM needs 293 to assess the state of a TEE, or of applications in the TEE, of a 294 client device. The TEE conducts Remote Attestation Procedures with 295 the TAM, which can then decide whether the TEE is already in 296 compliance with the TAM's latest policy. If not, the TAM has to 297 uninstall, update, or install approved applications in the TEE to 298 bring it back into compliance with the TAM's policy. 
300 Attester: A device with a TEE capable of running trusted 301 applications that can be updated. 303 Relying Party: A TAM. 305 2.6. Hardware Watchdog 307 There is a class of malware that holds a device hostage and does not 308 allow it to reboot to prevent updates from being applied. This can 309 be a significant problem, because it allows a fleet of devices to be 310 held hostage for ransom. 312 A solution to this problem is a watchdog timer implemented in a 313 protected environment such as a Trusted Platform Module (TPM), as 314 described in [TCGarch] section 43.3. If the watchdog does not 315 receive regular, and fresh, Attestation Results as to the system's 316 health, then it forces a reboot. 318 Attester: The device that should be protected from being held 319 hostage for a long period of time. 321 Relying Party: A watchdog capable of triggering a procedure that 322 resets a device into a known, good operational state. 324 2.7. FIDO Biometric Authentication 326 In the Fast IDentity Online (FIDO) protocol [WebAuthN], [CTAP], the 327 device in the user's hand authenticates the human user, whether by 328 biometrics (such as fingerprints), or by PIN and password. FIDO 329 authentication puts a large amount of trust in the device compared to 330 typical password authentication because it is the device that 331 verifies the biometric, PIN and password inputs from the user, not 332 the server. For the Relying Party to know that the authentication is 333 trustworthy, the Relying Party needs to know that the Authenticator 334 part of the device is trustworthy. The FIDO protocol employs remote 335 attestation for this. 337 The FIDO protocol supports several remote attestation protocols and a 338 mechanism by which new ones can be registered and added. Remote 339 attestation defined by RATS is thus a candidate for use in the FIDO 340 protocol. 342 Attester: FIDO Authenticator. 
344 Relying Party: Any web site, mobile application back-end, or service 345 that relies on authentication data based on biometric information. 347 3. Architectural Overview 349 Figure 1 depicts the data that flows between different roles, 350 independent of protocol or use case. 352 ************ ************* ************ ***************** 353 * Endorser * * Reference * * Verifier * * Relying Party * 354 ************ * Value * * Owner * * Owner * 355 | * Provider * ************ ***************** 356 | ************* | | 357 | | | | 358 |Endorsements |Reference |Appraisal |Appraisal 359 | |Values |Policy |Policy for 360 | | |for |Attestation 361 .-----------. | |Evidence |Results 362 | | | | 363 | | | | 364 v v v | 365 .---------------------------. | 366 .----->| Verifier |------. | 367 | '---------------------------' | | 368 | | | 369 | Attestation| | 370 | Results | | 371 | Evidence | | 372 | | | 373 | v v 374 .----------. .---------------. 375 | Attester | | Relying Party | 376 '----------' '---------------' 378 Figure 1: Conceptual Data Flow 380 The text below summarizes the activities conducted by the roles 381 illustrated in Figure 1. 383 An Attester creates Evidence that is conveyed to a Verifier. 385 A Verifier uses the Evidence, any Reference Values from Reference 386 Value Providers, and any Endorsements from Endorsers, by applying an 387 Appraisal Policy for Evidence to assess the trustworthiness of the 388 Attester. This procedure is called the appraisal of Evidence. 390 Subsequently, the Verifier generates Attestation Results for use by 391 Relying Parties. 393 The Appraisal Policy for Evidence might be obtained from the Verifier 394 Owner via some protocol mechanism, or might be configured into the 395 Verifier by the Verifier Owner, or might be programmed into the 396 Verifier, or might be obtained via some other mechanism. 
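The Verifier activities summarized above can be sketched in code: Evidence is checked against Reference Values under an Appraisal Policy for Evidence, Endorsements vouch for Attester capabilities not directly visible in the Evidence itself, and the outcome is an Attestation Result. The sketch below is a minimal, hypothetical illustration; the function and field names (such as appraise_evidence and required_claims) are invented for this sketch and are not defined by this architecture.

```python
# Hypothetical sketch of a Verifier's appraisal of Evidence.
# RATS does not define concrete data structures; plain dicts stand in here.

def appraise_evidence(evidence, reference_values, endorsements, policy):
    """Apply an Appraisal Policy for Evidence and produce an Attestation Result."""
    failed = []
    for claim, value in evidence.items():
        # A policy rule here is simply "a required Claim must match its
        # Reference Value"; real policies can be far richer.
        if claim in policy["required_claims"]:
            if reference_values.get(claim) != value:
                failed.append(claim)
    # Endorsements vouch for capabilities not visible in the Evidence,
    # e.g., that the Evidence-signing key resides in a hardware root of trust.
    key_ok = endorsements.get("key_protected", False)
    verdict = "affirming" if not failed and key_ok else "contraindicated"
    return {"verdict": verdict, "failed_claims": failed}

evidence = {"bootloader_hash": "abc123", "kernel_hash": "def456"}
reference_values = {"bootloader_hash": "abc123", "kernel_hash": "def456"}
endorsements = {"key_protected": True}
policy = {"required_claims": ["bootloader_hash", "kernel_hash"]}

result = appraise_evidence(evidence, reference_values, endorsements, policy)
print(result["verdict"])  # → affirming
```

The Relying Party would then apply its own Appraisal Policy for Attestation Results to the returned verdict; that second appraisal step is intentionally out of scope for this sketch.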
398 A Relying Party uses Attestation Results by applying its own 399 appraisal policy to make application-specific decisions, such as 400 authorization decisions. This procedure is called the appraisal of 401 Attestation Results. 403 The Appraisal Policy for Attestation Results might be obtained from 404 the Relying Party Owner via some protocol mechanism, or might be 405 configured into the Relying Party by the Relying Party Owner, or 406 might be programmed into the Relying Party, or might be obtained via 407 some other mechanism. 409 See Section 8 for further discussion of the conceptual messages shown 410 in Figure 1. 412 3.1. Two Types of Environments of an Attester 414 As shown in Figure 2, an Attester consists of at least one Attesting 415 Environment and at least one Target Environment. In some 416 implementations, the Attesting and Target Environments might be 417 combined. Other implementations might have multiple Attesting and 418 Target Environments, such as in the examples described in more detail 419 in Section 3.2 and Section 3.3. Other examples may exist. All 420 compositions of Attesting and Target Environments discussed in this 421 architecture can be combined into more complex implementations. 423 .--------------------------------. 424 | | 425 | Verifier | 426 | | 427 '--------------------------------' 428 ^ 429 | 430 .-------------------------|----------. 431 | | | 432 | .----------------. | | 433 | | Target | | | 434 | | Environment | | | 435 | | | | Evidence | 436 | '----------------' | | 437 | | | | 438 | | | | 439 | Collect | | | 440 | Claims | | | 441 | | | | 442 | v | | 443 | .-------------. | 444 | | Attesting | | 445 | | Environment | | 446 | | | | 447 | '-------------' | 448 | Attester | 449 '------------------------------------' 451 Figure 2: Two Types of Environments 453 Claims are collected from Target Environments. 
That is, Attesting 454 Environments collect the values and the information to be represented 455 in Claims, by reading system registers and variables, calling into 456 subsystems, taking measurements on code, memory, or other security- 457 related assets of the Target Environment. Attesting Environments 458 then format the Claims appropriately, and typically use key material 459 and cryptographic functions, such as signing or cipher algorithms, to 460 generate Evidence. There is no limit to or requirement on the types 461 of hardware or software environments that can be used to implement an 462 Attesting Environment, for example: Trusted Execution Environments 463 (TEEs), embedded Secure Elements (eSEs), Trusted Platform Modules 464 (TPMs) [TCGarch], or BIOS firmware. 466 An arbitrary execution environment may not, by default, be capable of 467 Claims collection for a given Target Environment. Execution 468 environments that are designed specifically to be capable of Claims 469 collection are referred to in this document as Attesting 470 Environments. For example, a TPM doesn't actively collect Claims 471 itself; instead, it requires another component to feed various values 472 to the TPM. Thus, an Attesting Environment in such a case would be 473 the combination of the TPM together with whatever component is 474 feeding it the measurements. 
The 487 corresponding Claims can be structured in a nested fashion that 488 reflects the nesting of the Attester's layers. Normally, Claims are 489 not self-asserted; rather, a previous layer acts as the Attesting 490 Environment for the next layer. Claims about a root of trust 491 typically are asserted by an Endorser. 493 The example device illustrated in Figure 3 includes (A) a BIOS stored 494 in read-only memory, (B) a bootloader, and (C) an operating system 495 kernel. 497 .-------------. Endorsement for ROM 498 | Endorser |-----------------------. 499 '-------------' | 500 v 501 .-------------. Reference .----------. 502 | Reference | Values for | | 503 | Value |----------------->| Verifier | 504 | Provider(s) | ROM, bootloader, | | 505 '-------------' and kernel '----------' 506 ^ 507 .------------------------------------. | 508 | | | 509 | .---------------------------. | | 510 | | Kernel | | | 511 | | | | | Layered 512 | | Target | | | Evidence 513 | | Environment | | | for 514 | '---------------------------' | | bootloader 515 | Collect | | | and 516 | Claims | | | kernel 517 | .---------------|-----------. | | 518 | | Bootloader v | | | 519 | | .-----------. | | | 520 | | Target | Attesting | | | | 521 | | Environment |Environment|-----------' 522 | | | | | | 523 | | '-----------' | | 524 | | ^ | | 525 | '-----------------|---------' | 526 | Collect | | Evidence for | 527 | Claims v | bootloader | 528 | .---------------------------. | 529 | | ROM | | 530 | | | | 531 | | Attesting | | 532 | | Environment | | 533 | '---------------------------' | 534 | | 535 '------------------------------------' 537 Figure 3: Layered Attester 539 The first Attesting Environment, the read-only BIOS in this example, 540 has to ensure the integrity of the bootloader (the first Target 541 Environment). There are potentially multiple kernels to boot, and 542 the decision is up to the bootloader. Only a bootloader with intact 543 integrity will make an appropriate decision. 
Therefore, the Claims 544 relating to the integrity of the bootloader have to be measured 545 securely. At this stage of the boot-cycle of the device, the Claims 546 collected typically cannot be composed into Evidence. 548 After the boot sequence is started, the BIOS conducts the most 549 important and defining feature of layered attestation, which is that 550 the successfully measured bootloader now becomes (or contains) an 551 Attesting Environment for the next layer. This procedure in layered 552 attestation is sometimes called "staging". It is important that the 553 bootloader not be able to alter any Claims about itself that were 554 collected by the BIOS. This can be ensured by having those Claims be 555 either signed by the BIOS or stored in a tamper-proof manner by the 556 BIOS. 558 Continuing with this example, the bootloader's Attesting Environment 559 is now in charge of collecting Claims about the next Target 560 Environment, which in this example is the kernel to be booted. The 561 final Evidence thus contains two sets of Claims: one set about the 562 bootloader as measured and signed by the BIOS, plus a set of Claims 563 about the kernel as measured and signed by the bootloader. 565 This example could be extended further by making the kernel become 566 another Attesting Environment for an application as another Target 567 Environment. This would result in a third set of Claims in the 568 Evidence pertaining to that application. 570 The essence of this example is a cascade of staged environments. 571 Each environment has the responsibility of measuring the next 572 environment before the next environment is started. In general, the 573 number of layers may vary by device or implementation, and an 574 Attesting Environment might even have multiple Target Environments 575 that it measures, rather than only one as shown by example in 576 Figure 3. 578 3.3. 
Composite Device 580 A composite device is an entity composed of multiple sub-entities 581 such that its trustworthiness has to be determined by the appraisal 582 of all these sub-entities. 584 Each sub-entity has at least one Attesting Environment collecting the 585 Claims from at least one Target Environment. Then, this sub-entity 586 generates Evidence about its trustworthiness. Therefore, each sub- 587 entity can be called an Attester. Among all the Attesters, there may 588 be only some that have the ability to communicate with the Verifier 589 while others do not. 591 For example, a carrier-grade router consists of a chassis and 592 multiple slots. The trustworthiness of the router depends on all its 593 slots' trustworthiness. Each slot has an Attesting Environment, such 594 as a TEE, collecting the Claims of its boot process, after which it 595 generates Evidence from the Claims. 597 Among these slots, only a "main" slot can communicate with the 598 Verifier while other slots cannot. But other slots can communicate 599 with the main slot via the links between them inside the router. So 600 the main slot collects the Evidence of other slots, produces the 601 final Evidence of the whole router and conveys the final Evidence to 602 the Verifier. Therefore, the router is a composite device, each slot 603 is an Attester, and the main slot is the lead Attester. 605 Another example is a multi-chassis router composed of multiple single 606 carrier-grade routers. Multi-chassis router setups create redundancy 607 groups that provide higher throughput by interconnecting multiple 608 routers in these groups, which can be treated as one logical router 609 for simpler management. A multi-chassis router setup provides a 610 management point that connects to the Verifier. Typically one router 611 in the group is designated as the main router. 
Other routers in the 612 multi-chassis setup are connected to the main router only via 613 physical network links and are therefore managed and appraised via 614 the main router's help. Consequently, a multi-chassis router setup 615 is a composite device, each router is an Attester, and the main 616 router is the lead Attester. 618 Figure 4 depicts the conceptual data flow for a composite device. 620 .-----------------------------. 621 | Verifier | 622 '-----------------------------' 623 ^ 624 | 625 | Evidence of 626 | Composite Device 627 | 628 .----------------------------------|-------------------------------. 629 | .--------------------------------|-----. .------------. | 630 | | Collect .------------. | | | | 631 | | Claims .--------->| Attesting |<--------| Attester B |-. | 632 | | | |Environment | | '------------. | | 633 | | .----------------. | |<----------| Attester C |-. | 634 | | | Target | | | | '------------' | | 635 | | | Environment(s) | | |<------------| ... | | 636 | | | | '------------' | Evidence '------------' | 637 | | '----------------' | of | 638 | | | Attesters | 639 | | lead Attester A | (via Internal Links or | 640 | '--------------------------------------' Network Connections) | 641 | | 642 | Composite Device | 643 '------------------------------------------------------------------' 645 Figure 4: Composite Device 647 In a composite device, each Attester generates its own Evidence by 648 its Attesting Environment(s) collecting the Claims from its Target 649 Environment(s). The lead Attester collects Evidence from other 650 Attesters and conveys it to a Verifier. Collection of Evidence from 651 sub-entities may itself be a form of Claims collection that results 652 in Evidence asserted by the lead Attester. The lead Attester 653 generates Evidence about the layout of the whole composite device, 654 while sub-Attesters generate Evidence about their respective 655 (sub-)modules. 
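The composite device flow above can be sketched similarly: each sub-Attester produces its own Evidence, and the lead Attester bundles that Evidence, together with a Claim about the overall device layout, into one composite Evidence conveyed to the Verifier. The sketch below is hypothetical; the names are invented, and a plain SHA-256 digest stands in for the lead Attester's real signature over the bundle.

```python
# Hypothetical sketch of a lead Attester in a composite device
# (e.g., the "main" slot of a carrier-grade router) aggregating
# Evidence from sub-Attesters.

import hashlib
import json

def generate_slot_evidence(slot_id, boot_measurement):
    # Each sub-Attester (slot) produces its own Evidence from the
    # Claims its Attesting Environment collected during boot.
    return {"slot": slot_id, "claims": {"boot_hash": boot_measurement}}

def lead_attester_aggregate(slot_evidence_list):
    # The lead Attester adds a Claim about the layout of the whole
    # composite device and bundles the sub-Attesters' Evidence into
    # one composite Evidence for conveyance to the Verifier.
    composite = {
        "layout": sorted(e["slot"] for e in slot_evidence_list),
        "sub_evidence": slot_evidence_list,
    }
    # Digest over the bundle; a real Attester would sign this instead.
    blob = json.dumps(composite, sort_keys=True).encode()
    composite["digest"] = hashlib.sha256(blob).hexdigest()
    return composite

slots = [
    generate_slot_evidence("slot-1", "aa11"),
    generate_slot_evidence("slot-2", "bb22"),
]
composite = lead_attester_aggregate(slots)
print(composite["layout"])  # → ['slot-1', 'slot-2']
```

As the architecture notes, this collection step may itself be a form of Claims collection, with the aggregated bundle becoming Evidence asserted by the lead Attester.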
657 In this scenario, the trust model described in Section 7 can also be 658 applied to an inside Verifier. 660 3.4. Implementation Considerations 662 An entity can take on multiple RATS roles (e.g., Attester, Verifier, 663 Relying Party, etc.) at the same time. Multiple entities can 664 cooperate to implement a single RATS role as well. In essence, the 665 combination of roles and entities can be arbitrary. For example, in 666 the composite device scenario, the entity inside the lead Attester 667 can also take on the role of a Verifier, and the outer entity of 668 Verifier can take on the role of a Relying Party. After collecting 669 the Evidence of other Attesters, this inside Verifier uses 670 Endorsements and appraisal policies (obtained the same way as by any 671 other Verifier) as part of the appraisal procedures that generate 672 Attestation Results. The inside Verifier then conveys the 673 Attestation Results of other Attesters to the outside Verifier, 674 whether in the same conveyance protocol as part of the Evidence or 675 not. 677 4. Terminology 679 This document uses the following terms. 681 4.1. Roles 683 Attester: A role performed by an entity (typically a device) whose 684 Evidence must be appraised in order to infer the extent to which 685 the Attester is considered trustworthy, such as when deciding 686 whether it is authorized to perform some operation. 688 Produces: Evidence 690 Relying Party: A role performed by an entity that depends on the 691 validity of information about an Attester, for purposes of 692 reliably applying application specific actions. Compare /relying 693 party/ in [RFC4949]. 695 Consumes: Attestation Results, Appraisal Policy for Attestation 696 Results 698 Verifier: A role performed by an entity that appraises the validity 699 of Evidence about an Attester and produces Attestation Results to 700 be used by a Relying Party. 
702 Consumes: Evidence, Reference Values, Endorsements, Appraisal 703 Policy for Evidence 705 Produces: Attestation Results 707 Relying Party Owner: A role performed by an entity (typically an 708 administrator) that is authorized to configure Appraisal Policy 709 for Attestation Results in a Relying Party. 711 Produces: Appraisal Policy for Attestation Results 713 Verifier Owner: A role performed by an entity (typically an 714 administrator) that is authorized to configure Appraisal Policy 715 for Evidence in a Verifier. 717 Produces: Appraisal Policy for Evidence 719 Endorser: A role performed by an entity (typically a manufacturer) 720 whose Endorsements may help Verifiers appraise the authenticity of 721 Evidence and infer further capabilities of the Attester. 723 Produces: Endorsements 725 Reference Value Provider: A role performed by an entity (typically a 726 manufacturer) whose Reference Values help Verifiers appraise 727 Evidence to determine if acceptable known Claims have been 728 recorded by the Attester. 730 Produces: Reference Values 732 4.2. Artifacts 734 Claim: A piece of asserted information, often in the form of a name/ 735 value pair. Claims make up the usual structure of Evidence and 736 other RATS artifacts. Compare /claim/ in [RFC7519]. 738 Endorsement: A secure statement that an Endorser vouches for the 739 integrity of an Attester's various capabilities such as Claims 740 collection and Evidence signing. 742 Consumed By: Verifier 744 Produced By: Endorser 746 Evidence: A set of Claims generated by an Attester to be appraised 747 by a Verifier. Evidence may include configuration data, 748 measurements, telemetry, or inferences. 750 Consumed By: Verifier 752 Produced By: Attester 754 Attestation Result: The output generated by a Verifier, typically 755 including information about an Attester, where the Verifier 756 vouches for the validity of the results.
758 Consumed By: Relying Party 760 Produced By: Verifier 762 Appraisal Policy for Evidence: A set of rules that informs how a 763 Verifier evaluates the validity of information about an Attester. 764 Compare /security policy/ in [RFC4949]. 766 Consumed By: Verifier 768 Produced By: Verifier Owner 770 Appraisal Policy for Attestation Results: A set of rules that direct 771 how a Relying Party uses the Attestation Results regarding an 772 Attester generated by the Verifiers. Compare /security policy/ in 773 [RFC4949]. 775 Consumed by: Relying Party 777 Produced by: Relying Party Owner 779 Reference Values: A set of values against which values of Claims can 780 be compared as part of applying an Appraisal Policy for Evidence. 781 Reference Values are sometimes referred to in other documents as 782 known-good values, golden measurements, or nominal values, 783 although those terms typically assume comparison for equality, 784 whereas here Reference Values might be more general and be used in 785 any sort of comparison. 787 Consumed By: Verifier 789 Produced By: Reference Value Provider 791 5. Topological Patterns 793 Figure 1 shows a data-flow diagram for communication between an 794 Attester, a Verifier, and a Relying Party. The Attester conveys its 795 Evidence to the Verifier for appraisal, and the Relying Party 796 receives the Attestation Result from the Verifier. This section 797 refines the data-flow diagram by describing two reference models, as 798 well as one example composition thereof. The discussion that follows 799 is for illustrative purposes only and does not constrain the 800 interactions between RATS roles to the presented patterns. 802 5.1. Passport Model 804 The passport model is so named because of its resemblance to how 805 nations issue passports to their citizens. The nature of the 806 Evidence that an individual needs to provide to its local authority 807 is specific to the country involved. 
The citizen retains control of 808 the resulting passport document and presents it to other entities 809 when it needs to assert a citizenship or identity Claim, such as an 810 airport immigration desk. The passport is considered sufficient 811 because it vouches for the citizenship and identity Claims, and it is 812 issued by a trusted authority. Thus, in this immigration desk 813 analogy, the citizen is the Attester, the passport issuing agency is 814 a Verifier, the passport application and identifying information 815 (e.g., birth certificate) is the Evidence, the passport is an 816 Attestation Result, and the immigration desk is a Relying Party. 818 In this model, an Attester conveys Evidence to a Verifier, which 819 compares the Evidence against its appraisal policy. The Verifier 820 then gives back an Attestation Result which the Attester treats as 821 opaque data. The Attester does not consume the Attestation Result, 822 but might cache it. The Attester can then present the Attestation 823 Result (and possibly additional Claims) to a Relying Party, which 824 then compares this information against its own appraisal policy. 826 Three ways in which the process may fail include: 828 * First, the Verifier may not issue a positive Attestation Result 829 due to the Evidence not passing the Appraisal Policy for Evidence. 831 * Second, when the Attestation Result is examined by the 832 Relying Party against the Appraisal Policy for Attestation 833 Results, the result may 834 not pass the policy. 836 * Third, the Verifier may be unreachable or unavailable. 838 As with any other information needed by the Relying Party to make an 839 authorization decision, an Attestation Result can be carried in a 840 resource access protocol between the Attester and Relying Party. In 841 this model, the details of the resource access protocol constrain the 842 serialization format of the Attestation Result.
The format of the 843 Evidence on the other hand is only constrained by the Attester- 844 Verifier remote attestation protocol. This implies that 845 interoperability and standardization are more relevant for Attestation 846 Results than they are for Evidence. 848 +------------+ 849 | | Compare Evidence 850 | Verifier | against appraisal policy 851 | | 852 +------------+ 853 ^ | 854 Evidence | | Attestation 855 | | Result 856 | v 857 +------------+ +-------------+ 858 | |------------->| | Compare Attestation 859 | Attester | Attestation | Relying | Result against 860 | | Result | Party | appraisal policy 861 +------------+ +-------------+ 862 Figure 5: Passport Model 864 5.2. Background-Check Model 866 The background-check model is so named because of its resemblance to 867 how employers and volunteer organizations perform background checks. 868 When a prospective employee provides Claims about education or 869 previous experience, the employer will contact the respective 870 institutions or former employers to validate the Claim. Volunteer 871 organizations often perform police background checks on volunteers in 872 order to determine the volunteer's trustworthiness. Thus, in this 873 analogy, a prospective volunteer is an Attester, the organization is 874 the Relying Party, and the organization that issues a report is a 875 Verifier. 877 In this model, an Attester conveys Evidence to a Relying Party, which 878 treats it as opaque and simply forwards it on to a Verifier. The 879 Verifier compares the Evidence against its appraisal policy, and 880 returns an Attestation Result to the Relying Party. The Relying 881 Party then compares the Attestation Result against its own appraisal 882 policy. 884 The resource access protocol between the Attester and Relying Party 885 includes Evidence rather than an Attestation Result, but that 886 Evidence is not processed by the Relying Party.
Since the Evidence 887 is merely forwarded on to a trusted Verifier, any serialization 888 format can be used for Evidence because the Relying Party does not 889 need a parser for it. The only requirement is that the Evidence can 890 be _encapsulated in_ the format required by the resource access 891 protocol between the Attester and Relying Party. 893 However, like in the Passport model, an Attestation Result is still 894 consumed by the Relying Party. Code footprint and attack surface 895 area can be minimized by using a serialization format for which the 896 Relying Party already needs a parser to support the protocol between 897 the Attester and Relying Party, which may be an existing standard or 898 widely deployed resource access protocol. Such minimization is 899 especially important if the Relying Party is a constrained node. 901 +-------------+ 902 | | Compare Evidence 903 | Verifier | against appraisal 904 | | policy 905 +-------------+ 906 ^ | 907 Evidence | | Attestation 908 | | Result 909 | v 910 +------------+ +-------------+ 911 | |-------------->| | Compare Attestation 912 | Attester | Evidence | Relying | Result against 913 | | | Party | appraisal policy 914 +------------+ +-------------+ 916 Figure 6: Background-Check Model 918 5.3. Combinations 920 One variation of the background-check model is where the Relying 921 Party and the Verifier are on the same machine, performing both 922 functions together. In this case, there is no need for a protocol 923 between the two. 925 It is also worth pointing out that the choice of model depends on the 926 use case, and that different Relying Parties may use different 927 topological patterns. 929 The same device may need to create Evidence for different Relying 930 Parties and/or different use cases. 
For instance, it would use one 931 model to provide Evidence to a network infrastructure device to gain 932 access to the network, and the other model to provide Evidence to a 933 server holding confidential data to gain access to that data. As 934 such, both models may simultaneously be in use by the same device. 936 Figure 7 shows another example of a combination where Relying Party 1 937 uses the passport model, whereas Relying Party 2 uses an extension of 938 the background-check model. Specifically, in addition to the basic 939 functionality shown in Figure 6, Relying Party 2 actually provides 940 the Attestation Result back to the Attester, allowing the Attester to 941 use it with other Relying Parties. This is the model that the 942 Trusted Application Manager plans to support in the TEEP architecture 943 [I-D.ietf-teep-architecture]. 945 +-------------+ 946 | | Compare Evidence 947 | Verifier | against appraisal policy 948 | | 949 +-------------+ 950 ^ | 951 Evidence | | Attestation 952 | | Result 953 | v 954 +-------------+ 955 | | Compare 956 | Relying | Attestation Result 957 | Party 2 | against appraisal policy 958 +-------------+ 959 ^ | 960 Evidence | | Attestation 961 | | Result 962 | v 963 +-------------+ +-------------+ 964 | |-------------->| | Compare Attestation 965 | Attester | Attestation | Relying | Result against 966 | | Result | Party 1 | appraisal policy 967 +-------------+ +-------------+ 969 Figure 7: Example Combination 971 6. Roles and Entities 973 An entity in the RATS architecture includes at least one of the roles 974 defined in this document. 976 An entity can aggregate more than one role into itself, such as being 977 both a Verifier and a Relying Party, or being both a Reference Value 978 Provider and an Endorser. As such, any conceptual messages (see 979 Section 8 for more discussion) originating from such roles might also 980 be combined. 
For example, Reference Values might be conveyed as part 981 of an appraisal policy if the Verifier Owner and Reference Value 982 Provider roles are combined. Similarly, Reference Values might be 983 conveyed as part of an Endorsement if the Endorser and Reference 984 Value Provider roles are combined. 986 Interactions between roles aggregated into the same entity do not 987 necessarily use the Internet Protocol. Such interactions might use a 988 loopback device or other IP-based communication between separate 989 environments, but they do not have to. Alternative channels to 990 convey conceptual messages include function calls, sockets, GPIO 991 interfaces, local busses, or hypervisor calls. This type of 992 conveyance is typically found in composite devices. Most 993 importantly, these conveyance methods are out-of-scope of RATS, but 994 they are presumed to exist in order to convey conceptual messages 995 appropriately between roles. 997 In essence, an entity that combines more than one role creates and 998 consumes the corresponding conceptual messages as defined in this 999 document. 1001 7. Trust Model 1003 7.1. Relying Party 1005 This document covers scenarios for which a Relying Party trusts a 1006 Verifier that can appraise the trustworthiness of information about 1007 an Attester. Such trust is expressed by storing one or more "trust 1008 anchors" in a secure location known as a trust anchor store. 1010 As defined in [RFC6024], "A trust anchor represents an authoritative 1011 entity via a public key and associated data. The public key is used 1012 to verify digital signatures, and the associated data is used to 1013 constrain the types of information for which the trust anchor is 1014 authoritative." The trust anchor may be a certificate or it may be a 1015 raw public key along with additional data if necessary such as its 1016 public key algorithm and parameters. 
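The trust anchor store described above can be illustrated with a minimal sketch (not RFC 6024 itself; all names are hypothetical): each anchor pairs a public key with associated data constraining what the anchor is authoritative for.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class TrustAnchor:
    public_key: str                 # raw public key or certificate
    # "associated data" constraining what the anchor is authoritative for
    constraints: frozenset = field(default_factory=frozenset)


class TrustAnchorStore:
    def __init__(self):
        self._anchors = {}

    def add(self, anchor):
        self._anchors[anchor.public_key] = anchor

    def trusts(self, public_key, purpose):
        # A key is trusted only if it is anchored AND the anchor's
        # associated data covers the requested purpose.
        anchor = self._anchors.get(public_key)
        return anchor is not None and purpose in anchor.constraints


store = TrustAnchorStore()
store.add(TrustAnchor("verifier-pubkey-1",
                      frozenset({"attestation-results"})))
trusted = store.trusts("verifier-pubkey-1", "attestation-results")
```

The point of the `constraints` field is that anchoring a key does not make it authoritative for everything; the associated data scopes the trust, as RFC 6024 requires.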
1018 Thus, trusting a Verifier might be expressed by having the Relying 1019 Party store the Verifier's public key or certificate in its trust 1020 anchor store, or might be expressed by storing the public key or 1021 certificate of an entity (e.g., a Certificate Authority) that is in 1022 the Verifier's certificate path. For example, the Relying Party can 1023 verify that the Verifier is an expected one by out-of-band 1024 establishment of key material, combined with a protocol like TLS to 1025 communicate. There is an assumption that, between the establishment 1026 of the trusted key material and the creation of the Evidence, 1027 the Verifier has not been compromised. 1029 For a stronger level of security, the Relying Party might require 1030 that the Verifier first provide information about itself that the 1031 Relying Party can use to assess the trustworthiness of the Verifier 1032 before accepting its Attestation Results. Such a process would provide 1033 a stronger level of confidence in the correctness of the information 1034 provided, such as a belief that the authentic Verifier has not been 1035 compromised by malware. 1037 For example, one explicit way for a Relying Party "A" to establish 1038 such confidence in the correctness of a Verifier "B" would be for B 1039 to first act as an Attester where A acts as a combined Verifier/ 1040 Relying Party. If A then accepts B as trustworthy, it can choose to 1041 accept B as a Verifier for other Attesters. 1043 Similarly, the Relying Party also needs to trust the Relying Party 1044 Owner for providing its Appraisal Policy for Attestation Results, and 1045 in some scenarios the Relying Party might even require that the 1046 Relying Party Owner go through a remote attestation procedure with it 1047 before the Relying Party will accept an updated policy.
This can be 1048 done similarly to how a Relying Party could establish trust in a 1049 Verifier as discussed above, i.e., verifying credentials against a 1050 trust anchor store and optionally requiring Attestation Results from 1051 the Relying Party Owner. 1053 7.2. Attester 1055 In some scenarios, Evidence might contain sensitive information such 1056 as Personally Identifiable Information (PII) or system identifiable 1057 information. Thus, an Attester must trust entities to which it 1058 conveys Evidence not to reveal sensitive data to unauthorized 1059 parties. The Verifier might share this information with other 1060 authorized parties, according to a governing policy that addresses the 1061 handling of sensitive information (potentially included in Appraisal 1062 Policies for Evidence). In the background-check model, this Evidence 1063 may also be revealed to Relying Party(s). 1065 When Evidence contains sensitive information, an Attester typically 1066 requires that a Verifier authenticate itself (e.g., at TLS session 1067 establishment) and might even request a remote attestation before the 1068 Attester sends the sensitive Evidence. This can be done by having 1069 the Attester first act as a Verifier/Relying Party, and the Verifier 1070 act as its own Attester, as discussed above. 1072 7.3. Relying Party Owner 1074 The Relying Party Owner might also require that the Relying Party 1075 first act as an Attester, providing Evidence that the Owner can 1076 appraise, before the Owner would give the Relying Party an updated 1077 policy that might contain sensitive information. In such a case, 1078 authentication or attestation in both directions might be needed, in 1079 which case typically one side's Evidence must be considered safe to 1080 share with an untrusted entity, in order to bootstrap the sequence. 1081 See Section 11 for more discussion. 1083 7.4.
Verifier 1085 The Verifier trusts (or more specifically, the Verifier's security 1086 policy is written in a way that configures the Verifier to trust) a 1087 manufacturer, or the manufacturer's hardware, so as to be able to 1088 appraise the trustworthiness of that manufacturer's devices. Such 1089 trust is expressed by storing one or more trust anchors in the 1090 Verifier's trust anchor store. 1092 In a typical solution, a Verifier comes to trust an Attester 1093 indirectly by having an Endorser (such as a manufacturer) vouch for 1094 the Attester's ability to securely generate Evidence through 1095 Endorsements (see Section 8.2). Endorsements might describe the ways 1096 in which the Attester resists attack, protects secrets and measures 1097 Target Environments. Consequently, the Endorser's key material is 1098 stored in the Verifier's trust anchor store so that Endorsements can 1099 be authenticated and used in the Verifier's appraisal process. 1101 In some solutions, a Verifier might be configured to directly trust 1102 an Attester by having the Verifier have the Attester's key material 1103 (rather than the Endorser's) in its trust anchor store. 1105 Such direct trust must first be established at the time of trust 1106 anchor store configuration either by checking with an Endorser at 1107 that time, or by conducting a security analysis of the specific 1108 device. Having the Attester directly in the trust anchor store 1109 narrows the Verifier's trust to only specific devices rather than all 1110 devices the Endorser might vouch for, such as all devices 1111 manufactured by the same manufacturer in the case that the Endorser 1112 is a manufacturer. 1114 Such narrowing is often important since physical possession of a 1115 device can also be used to conduct a number of attacks, and so a 1116 device in a physically secure environment (such as one's own 1117 premises) may be considered trusted whereas devices owned by others 1118 would not be. 
This often results in a desire to either have the 1119 owner run their own Endorser that would only endorse devices one 1120 owns, or to use Attesters directly in the trust anchor store. When 1121 an owner has many Attesters, the use of an Endorser enables better 1122 scalability. 1124 That is, a Verifier might appraise the trustworthiness of an 1125 application component, operating system component, or service under 1126 the assumption that information provided about it by the lower-layer 1127 firmware or software is true. A stronger level of assurance of 1128 security comes when information can be vouched for by hardware or by 1129 ROM code, especially if such hardware is physically resistant to 1130 hardware tampering. In most cases, components that have to be 1131 vouched for via Endorsements because no Evidence is generated about 1132 them are referred to as roots of trust. 1134 The manufacturer having arranged for an Attesting Environment to be 1135 provisioned with key material with which to sign Evidence, the 1136 Verifier is then provided with some way of verifying the signature on 1137 the Evidence. This may be in the form of an appropriate trust 1138 anchor, or the Verifier may be provided with a database of public 1139 keys (rather than certificates) or even carefully curated and secured 1140 lists of symmetric keys. 1142 The nature of how the Verifier manages to validate the signatures 1143 produced by the Attester is critical to the secure operation of a 1144 remote attestation system, but is not the subject of standardization 1145 within this architecture. 1147 A conveyance protocol that provides authentication and integrity 1148 protection can be used to convey Evidence that is otherwise 1149 unprotected (e.g., not signed). Appropriate conveyance of 1150 unprotected Evidence (e.g., [I-D.birkholz-rats-uccs]) relies on the 1151 following protection capabilities of the conveyance protocol: 1153 1.
The key material used to authenticate and integrity protect the 1154 conveyance channel is trusted by the Verifier to speak for the 1155 Attesting Environment(s) that collected Claims about the Target 1156 Environment(s). 1158 2. All unprotected Evidence that is conveyed is supplied exclusively 1159 by the Attesting Environment that has the key material that 1160 protects the conveyance channel. 1162 3. The root of trust protects both the conveyance channel key 1163 material and the Attesting Environment with equivalent strength 1164 protections. 1166 As illustrated in [I-D.birkholz-rats-uccs], an entity that receives 1167 unprotected Evidence via a trusted conveyance channel always takes on 1168 the responsibility of vouching for the Evidence's authenticity and 1169 freshness. If protected Evidence is generated, the Attester's 1170 Attesting Environments take on that responsibility. In cases where 1171 unprotected Evidence is processed by a Verifier, Relying Parties have 1172 to trust that the Verifier is capable of handling Evidence in a 1173 manner that preserves the Evidence's authenticity and freshness. 1174 Generating and conveying unprotected Evidence always creates 1175 significant risk and the benefits of that approach have to be 1176 carefully weighed against potential drawbacks. 1178 See Section 12 for discussion on security strength. 1180 7.5. Endorser, Reference Value Provider, and Verifier Owner 1182 In some scenarios, the Endorser, Reference Value Provider, and 1183 Verifier Owner may need to trust the Verifier before giving the 1184 Endorsement, Reference Values, or appraisal policy to it. This can 1185 be done similarly to how a Relying Party might establish trust in a 1186 Verifier. 1188 As discussed in Section 7.3, authentication or attestation in both 1189 directions might be needed, in which case typically one side's 1190 identity or Evidence must be considered safe to share with an 1191 untrusted entity, in order to bootstrap the sequence.
See Section 11 1192 for more discussion. 1194 8. Conceptual Messages 1196 Figure 1 illustrates the flow of conceptual messages between 1197 various roles. This section provides additional elaboration and 1198 implementation considerations. It is the responsibility of protocol 1199 specifications to define the actual data format and semantics of any 1200 relevant conceptual messages. 1202 8.1. Evidence 1204 Evidence is a set of Claims about the target environment that reveal 1205 operational status, health, configuration or construction that have 1206 security relevance. Evidence is appraised by a Verifier to establish 1207 its relevance, compliance, and timeliness. Claims need to be 1208 collected in a manner that is reliable such that a Target Environment 1209 cannot lie to the Attesting Environment about its trustworthiness 1210 properties. Evidence needs to be securely associated with the target 1211 environment so that the Verifier cannot be tricked into accepting 1212 Claims originating from a different environment (that may be more 1213 trustworthy). Evidence also must be protected from man-in-the-middle 1214 attackers who may observe, change or misdirect Evidence as it travels 1215 from Attester to Verifier. The timeliness of Evidence can be 1216 captured using Claims that pinpoint the time or interval when changes 1217 in operational status, health, and so forth occur. 1219 8.2. Endorsements 1221 An Endorsement is a secure statement that some entity (e.g., a 1222 manufacturer) vouches for the integrity of the device's various 1223 capabilities such as claims collection, signing, launching code, 1224 transitioning to other environments, storing secrets, and more. For 1225 example, if the device's signing capability is in hardware, then an 1226 Endorsement might be a manufacturer certificate that signs a public 1227 key whose corresponding private key is only known inside the device's 1228 hardware.
Thus, when Evidence and such an Endorsement are used 1229 together, an appraisal procedure can be conducted based on appraisal 1230 policies that may not be specific to the device instance, but merely 1231 specific to the manufacturer providing the Endorsement. For example, 1232 an appraisal policy might simply check that devices from a given 1233 manufacturer have information matching a set of Reference Values, or 1234 an appraisal policy might have a set of more complex logic on how to 1235 appraise the validity of information. 1237 However, while an appraisal policy that treats all devices from a 1238 given manufacturer the same may be appropriate for some use cases, it 1239 would be inappropriate to use such an appraisal policy as the sole 1240 means of authorization for use cases that wish to constrain _which_ 1241 compliant devices are considered authorized for some purpose. For 1242 example, an enterprise using remote attestation for Network Endpoint 1243 Assessment [RFC5209] may not wish to let every healthy laptop from 1244 the same manufacturer onto the network, but instead only want to let 1245 devices that it legally owns onto the network. Thus, an Endorsement 1246 may be helpful in authenticating information about a 1247 device, but is not necessarily sufficient to authorize access to 1248 resources which may need device-specific information such as a public 1249 key for the device or component or user on the device. 1251 8.3. Reference Values 1253 Reference Values used in appraisal procedures come from a Reference 1254 Value Provider and are then used by the Verifier to compare to 1255 Evidence. Reference Values with matching Evidence produce 1256 acceptable Claims. Additionally, appraisal policy may play a role in 1257 determining the acceptance of Claims. 1259 8.4.
Attestation Results 1261 Attestation Results are the input used by the Relying Party to decide 1262 the extent to which it will trust a particular Attester, and allow it 1263 to access some data or perform some operation. 1265 Attestation Results may carry a boolean value indicating compliance 1266 or non-compliance with a Verifier's appraisal policy, or may carry a 1267 richer set of Claims about the Attester, against which the Relying 1268 Party applies its Appraisal Policy for Attestation Results. 1270 The quality of the Attestation Results depends upon the ability of 1271 the Verifier to evaluate the Attester. Different Attesters have a 1272 different _Strength of Function_ [strengthoffunction], which results 1273 in the Attestation Results being qualitatively different in strength. 1275 An Attestation Result that indicates non-compliance can be used by an 1276 Attester (in the passport model) or a Relying Party (in the 1277 background-check model) to indicate that the Attester should not be 1278 treated as authorized and may be in need of remediation. In some 1279 cases, it may even indicate that the Evidence itself cannot be 1280 authenticated as being correct. 1282 By default, the Relying Party does not believe the Attester to be 1283 compliant. Upon receipt of an authentic Attestation Result and given 1284 the Appraisal Policy for Attestation Results is satisfied, the 1285 Attester is allowed to perform the prescribed actions or access. The 1286 simplest such appraisal policy might authorize granting the Attester 1287 full access or control over the resources guarded by the Relying 1288 Party. A more complex appraisal policy might involve using the 1289 information provided in the Attestation Result to compare against 1290 expected values, or to apply complex analysis of other information 1291 contained in the Attestation Result. 
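The default-deny appraisal just described can be sketched informally as follows (this is purely illustrative; the field and policy names are hypothetical and not defined by this architecture):

```python
def appraise_attestation_result(result, policy):
    # Default: the Relying Party does not believe the Attester to be
    # compliant; absent or unauthenticated results are rejected.
    if result is None or not result.get("authentic", False):
        return False
    # Simplest policy: rely on the Verifier's boolean compliance verdict.
    if policy.get("require_compliant") and not result.get("compliant"):
        return False
    # Richer policy: compare individual Claims in the Attestation
    # Result against expected values.
    claims = result.get("claims", {})
    return all(claims.get(name) == expected
               for name, expected in policy.get("expected", {}).items())


policy = {"require_compliant": True, "expected": {"hw-model": "X-1"}}
granted = appraise_attestation_result(
    {"authentic": True, "compliant": True, "claims": {"hw-model": "X-1"}},
    policy)
```

A real Appraisal Policy for Attestation Results could of course apply far more complex analysis than equality checks, as the text above notes.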
1293 Thus, Attestation Results can contain detailed information about an 1294 Attester, which can include privacy-sensitive information as 1295 discussed in Section 11. Unlike Evidence, which is often 1296 very device- and vendor-specific, Attestation Results can be vendor- 1297 neutral, if the Verifier has a way to generate vendor-agnostic 1298 information based on the appraisal of vendor-specific information in 1299 Evidence. This allows a Relying Party's appraisal policy to be 1300 simpler, potentially based on standard ways of expressing the 1301 information, while still allowing interoperability with heterogeneous 1302 devices. 1304 Finally, whereas Evidence is signed by the device (or indirectly by a 1305 manufacturer, if Endorsements are used), Attestation Results are 1306 signed by a Verifier, allowing a Relying Party to only need a trust 1307 relationship with one entity, rather than a larger set of entities, 1308 for purposes of its appraisal policy. 1310 8.5. Appraisal Policies 1312 The Verifier, when appraising Evidence, or the Relying Party, when 1313 appraising Attestation Results, checks the values of matched Claims 1314 against constraints specified in its appraisal policy. Examples of 1315 such constraint checking include: 1317 * comparison for equality against a Reference Value, or 1319 * a check for being in a range bounded by Reference Values, or 1321 * membership in a set of Reference Values, or 1323 * a check against values in other Claims. 1325 Upon completing all appraisal policy constraints, the remaining 1326 Claims are accepted as input toward determining Attestation Results, 1327 when appraising Evidence, or as input to a Relying Party, when 1328 appraising Attestation Results. 1330 9.
Claims Encoding Formats 1332 The following diagram illustrates a relationship to which remote 1333 attestation is desired to be added: 1335 +-------------+ +------------+ Evaluate 1336 | |-------------->| | request 1337 | Attester | Access some | Relying | against 1338 | | resource | Party | security 1339 +-------------+ +------------+ policy 1341 Figure 8: Typical Resource Access 1343 In this diagram, the protocol between an Attester and a Relying Party 1344 can be any new or existing protocol (e.g., HTTP(S), CoAP(S), ROLIE 1345 [RFC8322], 802.1X, OPC UA [OPCUA], etc.), depending on the use case. 1347 Typically, such protocols already have mechanisms for passing 1348 security information for authentication and authorization purposes. 1349 Common formats include JWTs [RFC7519], CWTs [RFC8392], and X.509 1350 certificates. 1352 Retrofitting already deployed protocols with remote attestation 1353 requires adding RATS conceptual messages to the existing data flows. 1354 This must be done in a way that does not degrade the security 1355 properties of the systems involved and should use native extension 1356 mechanisms provided by the underlying protocol. For example, if a 1357 TLS handshake is to be extended with remote attestation capabilities, 1358 attestation Evidence may be embedded in an ad-hoc X.509 certificate 1359 extension (e.g., [TCG-DICE]), or into a new TLS Certificate Type 1360 (e.g., [I-D.tschofenig-tls-cwt]). 1362 Especially for constrained nodes, there is a desire to minimize the 1363 amount of parsing code needed in a Relying Party, in order to both 1364 minimize footprint and to minimize the attack surface. While it 1365 would be possible to embed a CWT inside a JWT, or a JWT inside an 1366 X.509 extension, etc., there is a desire to encode the information 1367 natively in a format that is already supported by the Relying Party.
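To illustrate how one encoding-agnostic information model can be rendered in multiple claim formats, the following Python sketch maps JWT-style string claim names to CWT-style integer labels. The labels for "iss" (1) and "exp" (4) follow RFC 8392; the nonce label (-70000) is an invented private-use value, not a registered one.

```python
# One information model, two claim encodings (illustrative only).
# JWT [RFC7519] uses string claim names; CWT [RFC8392] uses integer
# labels. Labels 1 (iss) and 4 (exp) are registered; -70000 is a
# made-up private-use label for this sketch.

JWT_TO_CWT = {"iss": 1, "exp": 4, "nonce": -70000}
CWT_TO_JWT = {label: name for name, label in JWT_TO_CWT.items()}

def to_cwt_style(claims: dict) -> dict:
    return {JWT_TO_CWT[name]: value for name, value in claims.items()}

def to_jwt_style(claims: dict) -> dict:
    return {CWT_TO_JWT[label]: value for label, value in claims.items()}

claims = {"iss": "verifier.example", "exp": 1700000000, "nonce": "abc123"}
assert to_jwt_style(to_cwt_style(claims)) == claims  # lossless round trip
```

A Verifier built this way could accept Evidence with Claims in one encoding and issue Attestation Results with the same information in another, as Figure 9 suggests.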
1369 This motivates having a common "information model" that describes the 1370 set of remote attestation related information in an encoding-agnostic 1371 way, and allowing multiple encoding formats (CWT, JWT, X.509, etc.) 1372 that encode the same information into the Claims format needed by the 1373 Relying Party. 1375 The following diagram illustrates that Evidence and Attestation 1376 Results might be expressed via multiple potential encoding formats, 1377 so that they can be conveyed by various existing protocols. It also 1378 motivates why the Verifier might also be responsible for accepting 1379 Evidence that encodes Claims in one format, while issuing Attestation 1380 Results that encode Claims in a different format. 1382 Evidence Attestation Results 1383 .--------------. CWT CWT .-------------------. 1384 | Attester-A |------------. .----------->| Relying Party V | 1385 '--------------' v | `-------------------' 1386 .--------------. JWT .------------. JWT .-------------------. 1387 | Attester-B |-------->| Verifier |-------->| Relying Party W | 1388 '--------------' | | `-------------------' 1389 .--------------. X.509 | | X.509 .-------------------. 1390 | Attester-C |-------->| |-------->| Relying Party X | 1391 '--------------' | | `-------------------' 1392 .--------------. TPM | | TPM .-------------------. 1393 | Attester-D |-------->| |-------->| Relying Party Y | 1394 '--------------' '------------' `-------------------' 1395 .--------------. other ^ | other .-------------------. 1396 | Attester-E |------------' '----------->| Relying Party Z | 1397 '--------------' `-------------------' 1399 Figure 9: Multiple Attesters and Relying Parties with Different 1400 Formats 1402 10. Freshness 1404 A Verifier or Relying Party might need to learn the point in time 1405 (i.e., the "epoch") an Evidence or Attestation Result has been 1406 produced. 
This is essential in deciding whether the included Claims 1407 and their values can be considered fresh, meaning they still reflect 1408 the latest state of the Attester, and that any Attestation Result was 1409 generated using the latest Appraisal Policy for Evidence. 1411 This section provides a number of details. It does not, however, 1412 define any protocol formats; the interactions shown are abstract. 1413 This section is intended for those creating protocols and solutions 1414 to understand the options available to ensure freshness. The way in 1415 which freshness is provisioned in a protocol is an architectural 1416 decision. Provisioning of freshness has an impact on the number of 1417 needed round trips in a protocol, and this decision therefore must be made very 1418 early in the design. Different decisions will have significant 1419 impacts on resulting interoperability, which is why this section goes 1420 into sufficient detail such that choices in freshness will be 1421 compatible across interacting protocols, such as depicted in 1422 Figure 9. 1424 Freshness is assessed based on the Appraisal Policy for Evidence or 1425 Attestation Results that compares the estimated epoch against an 1426 "expiry" threshold defined locally to that policy. There is, 1427 however, always a race condition possible in that the state of the 1428 Attester and the appraisal policies might change immediately after 1429 the Evidence or Attestation Result was generated. The goal is merely 1430 to narrow their recentness to something the Verifier (for Evidence) 1431 or Relying Party (for Attestation Result) is willing to accept. Some 1432 flexibility on the freshness requirement is a key component for 1433 enabling caching and reuse of both Evidence and Attestation Results, 1434 which is especially valuable in cases where their computation uses a 1435 substantial part of the resource budget (e.g., energy in constrained 1436 devices).
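The expiry-threshold comparison just described can be sketched as follows; this is a minimal illustration, and the function name and 60-second threshold are invented, not drawn from this architecture.

```python
# Minimal sketch of freshness appraisal: the appraising entity compares
# the estimated epoch of received Evidence or an Attestation Result
# against an "expiry" threshold from its local appraisal policy.
import time

def is_fresh(produced_at: float, now: float, max_age: float = 60.0) -> bool:
    """Accept only if produced within the policy's expiry window."""
    return (now - produced_at) < max_age

now = time.time()
assert is_fresh(now - 30.0, now)       # recent enough: accepted
assert not is_fresh(now - 120.0, now)  # stale: rejected (or re-requested)
```

Relaxing `max_age` is what enables the caching and reuse of Evidence and Attestation Results mentioned above, at the cost of a wider window in which state changes go unnoticed.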
1438 There are three common approaches for determining the epoch of 1439 Evidence or an Attestation Result. 1441 10.1. Explicit Timekeeping using Synchronized Clocks 1443 The first approach is to rely on synchronized and trustworthy clocks, 1444 and include a signed timestamp (see [I-D.birkholz-rats-tuda]) along 1445 with the Claims in the Evidence or Attestation Result. Timestamps 1446 can also be added on a per-Claim basis to distinguish the time of 1447 generation of Evidence or Attestation Result from the time that a 1448 specific Claim was generated. The clock's trustworthiness can 1449 generally be established via Endorsements and typically requires 1450 additional Claims about the signer's time synchronization mechanism. 1452 In some use cases, however, a trustworthy clock might not be 1453 available. For example, in many Trusted Execution Environments 1454 (TEEs) today, a clock is only available outside the TEE and so cannot 1455 be trusted by the TEE. 1457 10.2. Implicit Timekeeping using Nonces 1459 A second approach places the onus of timekeeping solely on the 1460 Verifier (for Evidence) or the Relying Party (for Attestation 1461 Results), and might be suitable, for example, in cases where the Attester 1462 does not have a trustworthy clock or time synchronization is 1463 otherwise impaired. In this approach, a non-predictable nonce is 1464 sent by the appraising entity, and the nonce is then signed and 1465 included along with the Claims in the Evidence or Attestation Result. 1466 After checking that the sent and received nonces are the same, the 1467 appraising entity knows that the Claims were signed after the nonce 1468 was generated. This allows associating a "rough" epoch to the 1469 Evidence or Attestation Result.
In this case, the epoch is said to be 1470 rough because: 1472 * The epoch applies to the entire Claim set instead of a more 1473 granular association, and 1475 * The time between the creation of Claims and the collection of 1476 Claims is indistinguishable. 1478 10.3. Implicit Timekeeping using Epoch IDs 1480 A third approach relies on having epoch identifiers (or "IDs") 1481 periodically sent to both the sender and receiver of Evidence or 1482 Attestation Results by some "Epoch ID Distributor". 1484 Epoch IDs are different from nonces as they can be used more than 1485 once and can even be used by more than one entity at the same time. 1486 Epoch IDs are different from timestamps as they do not have to convey 1487 information about a point in time, i.e., they are not necessarily 1488 monotonically increasing integers. 1490 Like the nonce approach, this allows associating a "rough" epoch 1491 without requiring a trustworthy clock or time synchronization in 1492 order to generate or appraise the freshness of Evidence or 1493 Attestation Results. Only the Epoch ID Distributor requires access 1494 to a clock so it can periodically send new epoch IDs. 1496 The most recent epoch ID is included in the produced Evidence or 1497 Attestation Results, and the appraising entity can compare the epoch 1498 ID in received Evidence or Attestation Results against the latest 1499 epoch ID it received from the Epoch ID Distributor to determine if it 1500 is within the current epoch. An actual solution also needs to take 1501 into account race conditions when transitioning to a new epoch, such 1502 as by using a counter signed by the Epoch ID Distributor as the epoch 1503 ID, or by including both the current and previous epoch IDs in 1504 messages and/or checks, by requiring retries in case of mismatching 1505 epoch IDs, or by buffering incoming messages that might be associated 1506 with an epoch ID that the receiver has not yet obtained.
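Returning briefly to the nonce approach of Section 10.2, the exchange there can be sketched as follows. This is a simplified illustration: an HMAC over the Claims and nonce with a shared key stands in for the Attester's Evidence signature, whereas real deployments typically use asymmetric attestation keys plus Endorsements; all names are invented.

```python
# Illustrative nonce-based freshness (Section 10.2). HMAC is a
# stand-in for the Attester's signature; KEY is a demo value only.
import hashlib
import hmac
import json
import secrets

KEY = b"demo-attestation-key"

def make_evidence(claims: dict, nonce: str) -> dict:
    """Attester side: sign the Claims together with the received nonce."""
    body = json.dumps({"claims": claims, "nonce": nonce}, sort_keys=True)
    sig = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def appraise(evidence: dict, sent_nonce: str) -> bool:
    """Appraising entity: check signature, then check nonces match."""
    expected = hmac.new(KEY, evidence["body"].encode(),
                        hashlib.sha256).hexdigest()
    ok_sig = hmac.compare_digest(evidence["sig"], expected)
    ok_nonce = json.loads(evidence["body"])["nonce"] == sent_nonce
    return ok_sig and ok_nonce  # Claims were signed after nonce creation

nonce = secrets.token_hex(8)               # non-predictable nonce
ev = make_evidence({"boot": "ok"}, nonce)
assert appraise(ev, nonce)
assert not appraise(ev, secrets.token_hex(8))  # stale/replayed: rejected
```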
1508 More generally, in order to prevent an appraising entity from 1509 generating false negatives (e.g., discarding Evidence that is deemed 1510 stale even if it is not), the appraising entity should keep an "epoch 1511 window" consisting of the most recently received epoch IDs. The 1512 depth of such an epoch window is directly proportional to the maximum 1513 network propagation delay between the first entity to receive the epoch ID 1514 and the last to receive the epoch ID, and it is inversely 1515 proportional to the epoch duration. The appraising entity shall 1516 compare the epoch ID carried in the received Evidence or Attestation 1517 Result with the epoch IDs in its epoch window to find a suitable 1518 match. 1520 Whereas the nonce approach typically requires the appraising entity 1521 to keep state for each nonce generated, the epoch ID approach 1522 minimizes the state kept to be independent of the number of Attesters 1523 or Verifiers from which it expects to receive Evidence or Attestation 1524 Results, as long as all use the same Epoch ID Distributor. 1526 10.4. Discussion 1528 Implicit and explicit timekeeping can be combined into hybrid 1529 mechanisms. For example, if clocks exist and are considered 1530 trustworthy but are not synchronized, a nonce-based exchange may be 1531 used to determine the (relative) time offset between the involved 1532 peers, followed by any number of timestamp-based exchanges. 1534 It is important to note that the actual values in Claims might have 1535 been generated long before the Claims are signed. If so, it is the 1536 signer's responsibility to ensure that the values are still correct 1537 when they are signed. For example, values generated at boot time 1538 might have been saved to secure storage until network connectivity is 1539 established to the remote Verifier and a nonce is obtained. 1541 A more detailed discussion with examples appears in Section 16.
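The epoch-window matching described in Section 10.3 can be sketched as follows; the window depth of 2 and the epoch ID values are arbitrary choices for illustration.

```python
# Illustrative "epoch window": keep the most recently received epoch
# IDs and accept a message only if its epoch ID is still in the window.
from collections import deque

class EpochWindow:
    def __init__(self, depth: int = 2):
        self.window = deque(maxlen=depth)  # oldest IDs fall out

    def on_epoch_id(self, epoch_id: str) -> None:
        """Record a new epoch ID from the Epoch ID Distributor."""
        self.window.append(epoch_id)

    def accepts(self, epoch_id: str) -> bool:
        """Match a received message's epoch ID against the window."""
        return epoch_id in self.window

w = EpochWindow(depth=2)
w.on_epoch_id("e1")
w.on_epoch_id("e2")
assert w.accepts("e2") and w.accepts("e1")  # current and previous epoch
w.on_epoch_id("e3")
assert not w.accepts("e1")                  # too old: treated as stale
```

Note that the state kept is bounded by the window depth, not by the number of Attesters or Verifiers, which is the scaling advantage over per-nonce state noted above.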
1543 For a discussion of the security of epoch IDs, see Section 12.3. 1545 11. Privacy Considerations 1547 The conveyance of Evidence and the resulting Attestation Results 1548 reveal a great deal of information about the internal state of a 1549 device as well as potentially any users of the device. 1551 In many cases, the whole point of attestation procedures is to 1552 provide reliable information about the type of the device and the 1553 firmware/software that the device is running. 1555 This information might be particularly interesting to many attackers. 1556 For example, knowing that a device is running a weak version of 1557 firmware makes it easier to mount targeted attacks. 1559 In some circumstances, if an attacker can become aware of 1560 Endorsements, Reference Values, or appraisal policies, this knowledge could 1561 potentially provide the attacker with insight into defensive 1562 mitigations. It is recommended that attention be paid to 1563 confidentiality of such information. 1565 Additionally, many Claims in Evidence, many Claims in Attestation 1566 Results, and appraisal policies potentially contain Personally 1567 Identifying Information (PII) depending on the end-to-end use case of 1568 the remote attestation procedure. Remote attestation that includes 1569 containers and applications, e.g., a blood pressure monitor, may 1570 further reveal details about specific systems or users. 1572 In some cases, an attacker may be able to make inferences about the 1573 contents of Evidence from the resulting effects or timing of the 1574 processing. For example, an attacker might be able to infer the 1575 value of specific Claims if it knew that only certain values were 1576 accepted by the Relying Party. 1578 Conceptual messages (see Section 8) carrying sensitive or 1579 confidential information are expected to be integrity protected 1580 (i.e., either via signing or a secure channel) and optionally might 1581 be confidentiality protected via encryption.
If the conceptual messages themselves are not 1582 confidentiality protected, the 1583 underlying conveyance protocol should provide that protection. 1585 As Evidence might contain sensitive or confidential information, 1586 Attesters are responsible for only sending such Evidence to trusted 1587 Verifiers. Some Attesters might want a stronger level of assurance 1588 of the trustworthiness of a Verifier before sending Evidence to it. 1589 In such cases, an Attester can first act as a Relying Party and ask 1590 for the Verifier's own Attestation Result, and appraise it just as 1591 a Relying Party would appraise an Attestation Result for any other 1592 purpose. 1594 Another approach to deal with Evidence is to remove PII from the 1595 Evidence while still being able to verify that the Attester is one of 1596 a large set. This approach is often called "Direct Anonymous 1597 Attestation". See Section 6.2 of [CCC-DeepDive] for more discussion. 1599 12. Security Considerations 1601 This document provides an architecture for doing remote attestation. 1602 No specific wire protocol is documented here. Without a specific 1603 proposal to compare against, it is impossible to know if the security 1604 threats listed below have been mitigated well. 1606 The security considerations below should be read as being essentially 1607 requirements against realizations of the RATS Architecture. Some 1608 threats apply to protocols, some are against implementations (code), 1609 and some threats are against physical infrastructure (such as 1610 factories). 1612 The fundamental purpose of the RATS architecture is to allow a 1613 Relying Party to establish a basis for trusting the Attester. 1615 12.1. Attester and Attestation Key Protection 1617 Implementers need to pay close attention to the protection of the 1618 Attester and the manufacturing processes for provisioning attestation 1619 key material.
If either of these is compromised, intended levels of 1620 assurance for RATS are compromised because attackers can forge 1621 Evidence or manipulate the Attesting Environment. For example, a 1622 Target Environment should not be able to tamper with the Attesting 1623 Environment that measures it; this can be prevented by isolating the two environments from 1624 each other in some way. 1626 Remote attestation applies to use cases with a range of security 1627 requirements, so the protections discussed here range from low to 1628 high security, where low security may be limited to application or 1629 process isolation by the device's operating system, and high security 1630 may involve specialized hardware to defend against physical attacks 1631 on a chip. 1633 12.1.1. On-Device Attester and Key Protection 1635 It is assumed that an Attesting Environment is sufficiently isolated 1636 from the Target Environment it collects Claims about and that it 1637 signs the resulting Claims set with an attestation key, so that the 1638 Target Environment cannot forge Evidence about itself. Such an 1639 isolated environment might be provided by a process, a dedicated 1640 chip, a TEE, a virtual machine, or another secure mode of operation. 1641 The Attesting Environment must be protected from unauthorized 1642 modification to ensure it behaves correctly. Confidentiality 1643 protection of the Attesting Environment's signing key is vital so it 1644 cannot be misused to forge Evidence. 1646 In many cases, the user or owner of a device that includes the role of 1647 Attester must not be able to modify or extract keys from the 1648 Attesting Environments, to prevent creating forged Evidence. Some 1649 common examples include the user of a mobile phone or FIDO 1650 authenticator. 1652 Measures for a minimally protected system might include process or 1653 application isolation provided by a high-level operating system, and 1654 restricted access to root or system privileges.
In contrast, For 1655 really simple single-use devices that don't use a protected mode 1656 operating system, like a Bluetooth speaker, the only factual 1657 isolation might be the sturdy housing of the device. 1659 Measures for a moderately protected system could include a special 1660 restricted operating environment, such as a TEE. In this case, only 1661 security-oriented software has access to the Attester and key 1662 material. 1664 Measures for a highly protected system could include specialized 1665 hardware that is used to provide protection against chip decapping 1666 attacks, power supply and clock glitching, faulting injection and RF 1667 and power side channel attacks. 1669 12.1.2. Attestation Key Provisioning Processes 1671 Attestation key provisioning is the process that occurs in the 1672 factory or elsewhere to establish signing key material on the device 1673 and the validation key material off the device. Sometimes this 1674 procedure is referred to as personalization or customization. 1676 The keys generated in the factory, whether generated in the device or 1677 off-device by the factory SHOULD be generated by a Cryptographically 1678 Strong Sequence ([RFC4086], Section 6.2). 1680 12.1.2.1. Off-Device Key Generation 1682 One way to provision key material is to first generate it external to 1683 the device and then copy the key onto the device. In this case, 1684 confidentiality protection of the generator, as well as for the path 1685 over which the key is provisioned, is necessary. The manufacturer 1686 needs to take care to protect corresponding key material with 1687 measures appropriate for its value. 1689 The degree of protection afforded to this key material can vary by 1690 the intended function of the device and the specific practices of the 1691 device manufacturer or integrator. 
The confidentiality protection is 1692 fundamentally based upon some amount of physical protection: while 1693 encryption is often used to provide confidentiality while a key is 1694 conveyed across a factory, the key must be available in unencrypted form 1695 at the point where the attestation key is created or applied. The physical 1696 protection can therefore vary from situations where the key is 1697 unencrypted only within carefully controlled secure enclaves within 1698 silicon, to situations where an entire facility is considered secure, 1699 by the simple means of locked doors and limited access. 1701 The cryptography that is used to enable confidentiality protection of 1702 the attestation key comes with its own requirements to be secured. 1703 This results in recursive problems, as the key material used to 1704 provision attestation keys must again somehow have been provisioned 1705 securely beforehand (requiring an additional level of protection, and 1706 so on). 1708 This is why, in general, a combination of some physical security 1709 measures and some cryptographic measures is used to establish 1710 confidentiality protection. 1712 12.1.2.2. On-Device Key Generation 1714 When key material is generated within a device and the secret part of 1715 it never leaves the device, then the problem is lessened. For public- 1716 key cryptography, it is, by definition, not necessary to maintain 1717 confidentiality of the public key; however, integrity of the chain of 1718 custody of the public key is necessary in order to avoid attacks 1719 where an attacker is able to get a key they control endorsed. 1721 To summarize: attestation key provisioning must ensure that only 1722 valid attestation key material is established in Attesters. 1724 12.2.
Integrity Protection 1726 Any solution that conveys information in any conceptual message (see 1727 Section 8) must support end-to-end integrity protection and replay 1728 attack prevention, and often also needs to support additional 1729 security properties, including: 1731 * end-to-end encryption, 1733 * denial-of-service protection, 1735 * authentication, 1736 * auditing, 1738 * fine-grained access controls, and 1740 * logging. 1742 Section 10 discusses ways in which freshness can be used in this 1743 architecture to protect against replay attacks. 1745 To assess the security provided by a particular appraisal policy, it 1746 is important to understand the strength of the root of trust, e.g., 1747 whether it is mutable software, or firmware that is read-only after 1748 boot, or immutable hardware/ROM. 1750 It is also important that the appraisal policy was itself obtained 1751 securely. If an attacker can configure or modify appraisal policies, 1752 Endorsements, or Reference Values for a Relying Party or for a 1753 Verifier, then integrity of the process is compromised. 1755 Security protections in RATS may be applied at different layers, 1756 whether by a conveyance protocol, or an information encoding format. 1757 This architecture expects conceptual messages to be end-to-end 1758 protected based on the role interaction context. For example, if an 1759 Attester produces Evidence that is relayed through some other entity 1760 that doesn't implement the Attester or the intended Verifier roles, 1761 then the relaying entity should not expect to have access to the 1762 Evidence. 1764 12.3. Epoch ID-based Attestation 1766 Epoch IDs, described in Section 10.3, can be tampered with, replayed, 1767 dropped, delayed, and reordered by an attacker. 1769 An attacker could be either external or belong to the distribution 1770 group, for example, if one of the Attester entities has been 1771 compromised.
1773 An attacker who is able to tamper with epoch IDs can potentially lock 1774 all the participants in an epoch of the attacker's choice forever, 1775 effectively freezing time. This is problematic since it destroys the 1776 ability to ascertain freshness of Evidence and Attestation Results. 1778 To mitigate this threat, the transport should be at least integrity 1779 protected and provide origin authentication. 1781 Selective dropping of epoch IDs is equivalent to pinning the victim 1782 node to a past epoch. An attacker could drop epoch IDs to only some 1783 entities and not others, which will typically result in a denial of 1784 service due to the permanent staleness of the Attestation Result or 1785 Evidence. 1787 Delaying or reordering epoch IDs is equivalent to manipulating the 1788 victim's timeline at will. This ability could be used by a malicious 1789 actor (e.g., a compromised router) to mount a confusion attack where, 1790 for example, a Verifier is tricked into accepting Evidence coming 1791 from a past epoch as fresh, while in the meantime the Attester has 1792 been compromised. 1794 Reordering and dropping attacks are mitigated if the transport 1795 provides the ability to detect reordering and dropping. However, the 1796 delay attack described above can't be thwarted in this manner. 1798 12.4. Trust Anchor Protection 1800 As noted in Section 7, Verifiers and Relying Parties have trust 1801 anchor stores that must be secured. [RFC6024] contains more 1802 discussion of trust anchor store requirements. Specifically, a trust 1803 anchor store must be protected against unauthorized insertion, 1804 deletion, and modification. 1806 If certificates are used as trust anchors, Verifiers and Relying 1807 Parties are also responsible for validating the entire certificate 1808 path up to the trust anchor, which includes checking for certificate 1809 revocation. See Section 6 of [RFC5280] for details. 1811 13.
IANA Considerations 1813 This document does not require any actions by IANA. 1815 14. Acknowledgments 1817 Special thanks go to Joerg Borchert, Nancy Cam-Winget, Jessica 1818 Fitzgerald-McKay, Diego Lopez, Laurence Lundblade, Paul Rowe, Hannes 1819 Tschofenig, Frank Xia, and David Wooten. 1821 15. Notable Contributions 1823 Thomas Hardjono created initial versions of the terminology section 1824 in collaboration with Ned Smith. Eric Voit provided the conceptual 1825 separation between Attestation Provision Flows and Attestation 1826 Evidence Flows. Monty Wisemen created the content structure of the 1827 first three architecture drafts. Carsten Bormann provided many of 1828 the motivational building blocks with respect to the Internet Threat 1829 Model. 1831 16. Appendix A: Time Considerations 1833 Section 10 discussed various issues and requirements around freshness 1834 of evidence, and summarized three approaches that might be used by 1835 different solutions to address them. This appendix provides more 1836 details with examples to help illustrate potential approaches, to 1837 inform those creating specific solutions. 1839 The table below defines a number of relevant events, with an ID that 1840 is used in subsequent diagrams. The times of said events might be 1841 defined in terms of an absolute clock time, such as the Coordinated 1842 Universal Time timescale, or might be defined relative to some other 1843 timestamp or timeticks counter, such as a clock resetting its epoch 1844 each time it is powered on. 1846 +====+============+=================================================+ 1847 | ID | Event | Explanation of event | 1848 +====+============+=================================================+ 1849 | VG | Value | A value to appear in a Claim was created. 
| 1850 | | generated | In some cases, a value may have technically | 1851 | | | existed before an Attester became aware of | 1852 | | | it but the Attester might have no idea how | 1853 | | | long it has had that value. In such a | 1854 | | | case, the Value created time is the time at | 1855 | | | which the Claim containing the copy of the | 1856 | | | value was created. | 1857 +----+------------+-------------------------------------------------+ 1858 | NS | Nonce sent | A nonce not predictable to an Attester | 1859 | | | (recentness & uniqueness) is sent to an | 1860 | | | Attester. | 1861 +----+------------+-------------------------------------------------+ 1862 | NR | Nonce | A nonce is relayed to an Attester by | 1863 | | relayed | another entity. | 1864 +----+------------+-------------------------------------------------+ 1865 | IR | Epoch ID | An epoch ID is successfully received and | 1866 | | received | processed by an entity. | 1867 +----+------------+-------------------------------------------------+ 1868 | EG | Evidence | An Attester creates Evidence from collected | 1869 | | generation | Claims. | 1870 +----+------------+-------------------------------------------------+ 1871 | ER | Evidence | A Relying Party relays Evidence to a | 1872 | | relayed | Verifier. | 1873 +----+------------+-------------------------------------------------+ 1874 | RG | Result | A Verifier appraises Evidence and generates | 1875 | | generation | an Attestation Result. | 1876 +----+------------+-------------------------------------------------+ 1877 | RR | Result | An Attester relays an Attestation | 1878 | | relayed | Result to a Relying Party. | 1879 +----+------------+-------------------------------------------------+ 1880 | RA | Result | The Relying Party appraises Attestation | 1881 | | appraised | Results.
| 1882 +----+------------+-------------------------------------------------+ 1883 | OP | Operation | The Relying Party performs some operation | 1884 | | performed | requested by the Attester via a resource | 1885 | | | access protocol as depicted in Figure 8, | 1886 | | | e.g., across a session created earlier at | 1887 | | | time(RA). | 1888 +----+------------+-------------------------------------------------+ 1889 | RX | Result | An Attestation Result should no longer be | 1890 | | expiry | accepted, according to the Verifier that | 1891 | | | generated it. | 1892 +----+------------+-------------------------------------------------+ 1894 Table 1 1896 Using the table above, a number of hypothetical examples of how a 1897 solution might be built are illustrated below. This list is not 1898 intended to be complete, but is just representative enough to 1899 highlight various timing considerations. 1901 All times are relative to the local clocks, indicated by an "_a" 1902 (Attester), "_v" (Verifier), or "_r" (Relying Party) suffix. 1904 Times with an appended Prime (') indicate a second instance of the 1905 same event. 1907 How and if clocks are synchronized depends upon the model. 1909 In the figures below, curly braces indicate containment. For 1910 example, the notation Evidence{foo} indicates that 'foo' is contained 1911 in the Evidence and is thus covered by its signature. 1913 16.1. Example 1: Timestamp-based Passport Model Example 1915 The following example illustrates a hypothetical Passport Model 1916 solution that uses timestamps and requires roughly synchronized 1917 clocks between the Attester, Verifier, and Relying Party, which 1918 depends on using a secure clock synchronization mechanism. As a 1919 result, the receiver of a conceptual message containing a timestamp 1920 can directly compare it to its own clock and timestamps. 1922 .----------. .----------. .---------------. 
1923 | Attester | | Verifier | | Relying Party | 1924 '----------' '----------' '---------------' 1925 time(VG_a) | | 1926 | | | 1927 ~ ~ ~ 1928 | | | 1929 time(EG_a) | | 1930 |------Evidence{time(EG_a)}------>| | 1931 | time(RG_v) | 1932 |<-----Attestation Result---------| | 1933 | {time(RG_v),time(RX_v)} | | 1934 ~ ~ 1935 | | 1936 |----Attestation Result{time(RG_v),time(RX_v)}-->time(RA_r) 1937 | | 1938 ~ ~ 1939 | | 1940 | time(OP_r) 1942 The Verifier can check whether the Evidence is fresh when appraising 1943 it at time(RG_v) by checking time(RG_v) - time(EG_a) < Threshold, 1944 where the Verifier's threshold is large enough to account for the 1945 maximum permitted clock skew between the Verifier and the Attester. 1947 If time(VG_a) is also included in the Evidence along with the Claim 1948 value generated at that time, and the Verifier decides that it can 1949 trust the time(VG_a) value, the Verifier can also determine whether 1950 the Claim value is recent by checking time(RG_v) - time(VG_a) < 1951 Threshold. The threshold is decided by the Appraisal Policy for 1952 Evidence, and again needs to take into account the maximum permitted 1953 clock skew between the Verifier and the Attester. 1955 The Relying Party can check whether the Attestation Result is fresh 1956 when appraising it at time(RA_r) by checking time(RA_r) - time(RG_v) 1957 < Threshold, where the Relying Party's threshold is large enough to 1958 account for the maximum permitted clock skew between the Relying 1959 Party and the Verifier. The result might then be used for some time 1960 (e.g., throughout the lifetime of a connection established at 1961 time(RA_r)). The Relying Party must be careful, however, to not 1962 allow continued use beyond the period for which it deems the 1963 Attestation Result to remain fresh enough. Thus, it might allow use 1964 (at time(OP_r)) as long as time(OP_r) - time(RG_v) < Threshold. 
1965   However, if the Attestation Result contains an expiry time
1966   time(RX_v), then it could explicitly check time(OP_r) < time(RX_v).

1968   16.2.  Example 2: Nonce-based Passport Model Example

1970   The following example illustrates a hypothetical Passport Model
1971   solution that uses nonces instead of timestamps.  Compared to the
1972   timestamp-based example, it requires an extra round trip to retrieve
1973   a nonce, and requires that the Verifier and Relying Party track state
1974   to remember the nonce for some period of time.

1976   The advantage is that it does not require that any clocks be
1977   synchronized.  As a result, the receiver of a conceptual message
1978   containing a timestamp cannot directly compare it to its own clock or
1979   timestamps.  Thus, we use a suffix ("a" for Attester, "v" for
1980   Verifier, and "r" for Relying Party) on the IDs below to indicate
1981   which clock generated them, since times from different clocks cannot
1982   be compared.  Only the delta between two events from the sender can
1983   be used by the receiver.

1985       .----------.                     .----------.   .---------------.
1986       | Attester |                     | Verifier |   | Relying Party |
1987       '----------'                     '----------'   '---------------'
1988       time(VG_a)                            |                  |
1989           |                                 |                  |
1990           ~                                 ~                  ~
1991           |                                 |                  |
1992           |<--Nonce1---------------------time(NS_v)            |
1993       time(EG_a)                            |                  |
1994           |---Evidence--------------------->|                  |
1995           |  {Nonce1, time(EG_a)-time(VG_a)}|                  |
1996           |                            time(RG_v)              |
1997           |<--Attestation Result------------|                  |
1998           |  {time(RX_v)-time(RG_v)}        |                  |
1999           ~                                                    ~
2000           |                                                    |
2001           |<--Nonce2-------------------------------------time(NS_r)
2002       time(RR_a)                                               |
2003           |--[Attestation Result{time(RX_v)-time(RG_v)}, -->time(RA_r)
2004           |   Nonce2, time(RR_a)-time(EG_a)]                   |
2005           ~                                                    ~
2006           |                                                    |
2007           |                                               time(OP_r)

2009   In this example solution, the Verifier can check whether the Evidence
2010   is fresh at time(RG_v) by verifying that time(RG_v)-time(NS_v) <
2011   Threshold.
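The timestamp-based threshold comparisons of Example 1 above can be sketched in Python as follows. This is a hypothetical illustration: the function names and threshold values are assumptions, not part of this architecture, and a real Verifier or Relying Party would derive its thresholds from its Appraisal Policy and the maximum permitted clock skew.

```python
# Hypothetical sketch of the freshness checks in Example 1.  The
# threshold constants below are illustrative assumptions; per the text,
# each threshold must be large enough to account for the maximum
# permitted clock skew between the two parties being compared.
from typing import Optional

VERIFIER_THRESHOLD = 300.0       # seconds; covers Verifier/Attester skew
RELYING_PARTY_THRESHOLD = 600.0  # seconds; covers Relying Party/Verifier skew

def evidence_is_fresh(time_rg_v: float, time_eg_a: float) -> bool:
    """Verifier check at time(RG_v): time(RG_v) - time(EG_a) < Threshold."""
    return time_rg_v - time_eg_a < VERIFIER_THRESHOLD

def claim_is_recent(time_rg_v: float, time_vg_a: float) -> bool:
    """Verifier check, assuming the time(VG_a) value carried in the
    Evidence is trusted: time(RG_v) - time(VG_a) < Threshold."""
    return time_rg_v - time_vg_a < VERIFIER_THRESHOLD

def result_is_usable(time_op_r: float, time_rg_v: float,
                     time_rx_v: Optional[float] = None) -> bool:
    """Relying Party check at time(OP_r).  If the Attestation Result
    carries an explicit expiry time(RX_v), that is checked instead of
    the threshold on time(RG_v)."""
    if time_rx_v is not None:
        return time_op_r < time_rx_v
    return time_op_r - time_rg_v < RELYING_PARTY_THRESHOLD
```

Each function compares two timestamps from (roughly synchronized) clocks, which is exactly what the secure clock synchronization assumption of these examples permits.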
2013   The Verifier cannot, however, simply rely on a Nonce to determine
2014   whether the value of a Claim is recent, since the Claim value might
2015   have been generated long before the nonce was sent by the Verifier.

2017   However, if the Verifier decides that the Attester can be trusted to
2018   correctly provide the delta time(EG_a)-time(VG_a), then it can
2019   determine recency by checking time(RG_v)-time(NS_v) + time(EG_a)-
2020   time(VG_a) < Threshold.

2022   Similarly, if, based on an Attestation Result from a Verifier it
2023   trusts, the Relying Party decides that the Attester can be trusted to
2024   correctly provide time deltas, then it can determine whether the
2025   Attestation Result is fresh by checking time(OP_r)-time(NS_r) +
2026   time(RR_a)-time(EG_a) < Threshold.  Although the Nonce2 and
2027   time(RR_a)-time(EG_a) values cannot be inside the Attestation Result,
2028   they might be signed by the Attester such that the Attestation Result
2029   vouches for the Attester's signing capability.

2031   The Relying Party must still be careful, however, to not allow
2032   continued use beyond the period for which it deems the Attestation
2033   Result to remain valid.  Thus, if the Attestation Result conveys a
2034   validity lifetime in terms of time(RX_v)-time(RG_v), then the Relying
2035   Party can check time(OP_r)-time(NS_r) < time(RX_v)-time(RG_v).

2037   16.3.  Example 3: Epoch ID-based Passport Model Example

2039   The example in Figure 10 illustrates a hypothetical Passport Model
2040   solution that uses epoch IDs instead of nonces or timestamps.

2042   The Epoch ID Distributor broadcasts epoch ID I, which starts a new
2043   epoch E for a protocol participant upon reception at time(IR).

2045   The Attester generates Evidence incorporating epoch ID I and conveys
2046   it to the Verifier.
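The epoch ID bookkeeping that a receiver performs in this example, per the epoch window discussed in Section 10.3, might be sketched as follows. This is a hypothetical illustration; the class and method names are assumptions, not part of this architecture.

```python
# Hypothetical receiver-side tracking of epoch IDs (cf. Section 10.3).
# A message is treated as fresh if it carries the current epoch ID, or
# the previous epoch ID while the epoch window is still open.
from typing import Optional

class EpochTracker:
    def __init__(self, window_seconds: float) -> None:
        self.window = window_seconds
        self.current_id: Optional[str] = None
        self.previous_id: Optional[str] = None
        self.transition_time: Optional[float] = None

    def on_epoch_id(self, epoch_id: str, now: float) -> None:
        """Called at time(IR), when a newly broadcast epoch ID arrives;
        the prior epoch ID is remembered for the epoch window."""
        self.previous_id = self.current_id
        self.current_id = epoch_id
        self.transition_time = now

    def is_fresh(self, epoch_id: str, now: float) -> bool:
        """Accept the current epoch ID, or the previous one within the
        epoch window; reject anything else as stale."""
        if epoch_id == self.current_id:
            return True
        return (epoch_id == self.previous_id
                and self.transition_time is not None
                and now - self.transition_time < self.window)
```

A Relying Party using such a tracker would accept the relayed Attestation Result in the scenario below, since it arrives during the window following the transition from epoch ID I to I'.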
2048   The Verifier checks that the received epoch ID I is "fresh"
2049   according to the definition provided in Section 10.3, whereby retries
2050   are required in the case of mismatching epoch IDs, and generates an
2051   Attestation Result.  The Attestation Result is conveyed to the
2052   Attester.

2054   After the transmission of epoch ID I', a new epoch E' is established
2055   when I' is received by each protocol participant.  The Attester
2056   relays the Attestation Result obtained during epoch E (associated
2057   with epoch ID I) to the Relying Party using the epoch ID for the
2058   current epoch, I'.  If the Relying Party had not yet received I',
2059   then the Attestation Result would be rejected, but in this example,
2060   it is received.

2062   In the illustrated scenario, the epoch ID for relaying an Attestation
2063   Result to the Relying Party is current, while a previous epoch ID was
2064   used to generate the Verifier-evaluated Evidence.  This indicates
2065   that at least one epoch transition has occurred, and the Attestation
2066   Result may only be as fresh as the previous epoch.  If the Relying
2067   Party remembers the previous epoch ID I during an epoch window as
2068   discussed in Section 10.3, and the message is received during that
2069   window, the Attestation Result is accepted as fresh; otherwise, it is
2070   rejected as stale.

2072                      .-------------.
2073       .----------.   |  Epoch ID   |   .----------.   .---------------.
2074       | Attester |   | Distributor |   | Verifier |   | Relying Party |
2075       '----------'   '-------------'   '----------'   '---------------'
2076       time(VG_a)          |                 |                 |
2077           |               |                 |                 |
2078           ~               ~                 ~                 ~
2079           |               |                 |                 |
2080       time(IR_a)<------I--+--I--------time(IR_v)----->time(IR_r)
2081           |               |                 |                 |
2082       time(EG_a)          |                 |                 |
2083           |---Evidence--------------------->|                 |
2084           |  {I,time(EG_a)-time(VG_a)}      |                 |
2085           |               |                 |                 |
2086           |               |            time(RG_v)             |
2087           |<--Attestation Result------------|                 |
2088           |  {I,time(RX_v)-time(RG_v)}      |                 |
2089           |               |                 |                 |
2090       time(IR'_a)<-----I'-+--I'-------time(IR'_v)---->time(IR'_r)
2091           |               |                 |                 |
2092           |---[Attestation Result--------------------->time(RA_r)
2093           |    {I,time(RX_v)-time(RG_v)},I']|                 |
2094           |               |                 |                 |
2095           ~               ~                 ~                 ~
2096           |               |                 |                 |
2097           |               |                 |            time(OP_r)

2099               Figure 10: Epoch ID-based Passport Model

2101   16.4.  Example 4: Timestamp-based Background-Check Model Example

2103   The following example illustrates a hypothetical Background-Check
2104   Model solution that uses timestamps and requires roughly synchronized
2105   clocks between the Attester, Verifier, and Relying Party.

2107       .----------.     .---------------.               .----------.
2108       | Attester |     | Relying Party |               | Verifier |
2109       '----------'     '---------------'               '----------'
2110       time(VG_a)               |                            |
2111           |                    |                            |
2112           ~                    ~                            ~
2113           |                    |                            |
2114       time(EG_a)               |                            |
2115           |----Evidence------->|                            |
2116           | {time(EG_a)}   time(ER_r)--Evidence{time(EG_a)}->|
2117           |                    |                       time(RG_v)
2118           |               time(RA_r)<-Attestation Result----|
2119           |                    |      {time(RX_v)}          |
2120           ~                    ~                            ~
2121           |                    |                            |
2122           |               time(OP_r)                        |

2124   The time considerations in this example are equivalent to those
2125   discussed under Example 1 above.

2127   16.5.  Example 5: Nonce-based Background-Check Model Example

2129   The following example illustrates a hypothetical Background-Check
2130   Model solution that uses nonces and thus does not require that any
2131   clocks be synchronized.
In this example solution, a nonce is
2132   generated by a Verifier at the request of a Relying Party, when the
2133   Relying Party needs to send one to an Attester.

2135       .----------.         .---------------.             .----------.
2136       | Attester |         | Relying Party |             | Verifier |
2137       '----------'         '---------------'             '----------'
2138       time(VG_a)                   |                          |
2139           |                        |                          |
2140           ~                        ~                          ~
2141           |                        |                          |
2142           |                        |<-------Nonce-----------time(NS_v)
2143           |<---Nonce-----------time(NR_r)                     |
2144       time(EG_a)                   |                          |
2145           |----Evidence{Nonce}--->|                           |
2146           |                  time(ER_r)--Evidence{Nonce}------>|
2147           |                        |                     time(RG_v)
2148           |                   time(RA_r)<-Attestation Result--|
2149           |                        |  {time(RX_v)-time(RG_v)} |
2150           ~                        ~                          ~
2151           |                        |                          |
2152           |                   time(OP_r)                      |

2154   The Verifier can check whether the Evidence is fresh, and whether a
2155   Claim value is recent, the same as in Example 2 above.

2157   However, unlike in Example 2, the Relying Party can use the Nonce to
2158   determine whether the Attestation Result is fresh by verifying that
2159   time(OP_r)-time(NR_r) < Threshold.

2161   The Relying Party must still be careful, however, to not allow
2162   continued use beyond the period for which it deems the Attestation
2163   Result to remain valid.  Thus, if the Attestation Result conveys a
2164   validity lifetime in terms of time(RX_v)-time(RG_v), then the Relying
2165   Party can check time(OP_r)-time(ER_r) < time(RX_v)-time(RG_v).

2167   17.  References

2169   17.1.  Normative References

2171   [RFC5280]  Cooper, D., Santesson, S., Farrell, S., Boeyen, S.,
2172              Housley, R., and W. Polk, "Internet X.509 Public Key
2173              Infrastructure Certificate and Certificate Revocation List
2174              (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May 2008,
2175              <https://www.rfc-editor.org/info/rfc5280>.

2177   [RFC7519]  Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token
2178              (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015,
2179              <https://www.rfc-editor.org/info/rfc7519>.

2181   [RFC8392]  Jones, M., Wahlstroem, E., Erdtman, S., and H. Tschofenig,
2182              "CBOR Web Token (CWT)", RFC 8392, DOI 10.17487/RFC8392,
2183              May 2018, <https://www.rfc-editor.org/info/rfc8392>.

2185   17.2.  Informative References

2187   [CCC-DeepDive]
2188              Confidential Computing Consortium, "Confidential Computing
2189              Deep Dive", n.d., .

2192   [CTAP]     FIDO Alliance, "Client to Authenticator Protocol", n.d.,
2193              .

2197   [I-D.birkholz-rats-tuda]
2198              Fuchs, A., Birkholz, H., McDonald, I. E., and C. Bormann,
2199              "Time-Based Uni-Directional Attestation", Work in
2200              Progress, Internet-Draft, draft-birkholz-rats-tuda-05, 12
2201              July 2021, <https://datatracker.ietf.org/doc/html/draft-
2202              birkholz-rats-tuda-05>.

2204   [I-D.birkholz-rats-uccs]
2205              Birkholz, H., O'Donoghue, J., Cam-Winget, N., and C.
2206              Bormann, "A CBOR Tag for Unprotected CWT Claims Sets",
2207              Work in Progress, Internet-Draft, draft-birkholz-rats-
2208              uccs-03, 8 March 2021, <https://datatracker.ietf.org/doc/
2209              html/draft-birkholz-rats-uccs-03>.

2211   [I-D.ietf-teep-architecture]
2212              Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler,
2213              "Trusted Execution Environment Provisioning (TEEP)
2214              Architecture", Work in Progress, Internet-Draft, draft-
2215              ietf-teep-architecture-15, 12 July 2021,
2216              <https://datatracker.ietf.org/doc/html/draft-ietf-teep-
2217              architecture-15>.

2219   [I-D.tschofenig-tls-cwt]
2220              Tschofenig, H. and M. Brossard, "Using CBOR Web Tokens
2221              (CWTs) in Transport Layer Security (TLS) and Datagram
2222              Transport Layer Security (DTLS)", Work in Progress,
2223              Internet-Draft, draft-tschofenig-tls-cwt-02, 13 July 2020,
2224              <https://datatracker.ietf.org/doc/html/draft-tschofenig-
2225              tls-cwt-02>.

2227   [OPCUA]    OPC Foundation, "OPC Unified Architecture Specification,
2228              Part 2: Security Model, Release 1.03", OPC 10000-2, 25
2229              November 2015, .

2233   [RFC4086]  Eastlake 3rd, D., Schiller, J., and S. Crocker,
2234              "Randomness Requirements for Security", BCP 106, RFC 4086,
2235              DOI 10.17487/RFC4086, June 2005,
2236              <https://www.rfc-editor.org/info/rfc4086>.

2238   [RFC4949]  Shirey, R., "Internet Security Glossary, Version 2",
2239              FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007,
2240              <https://www.rfc-editor.org/info/rfc4949>.

2242   [RFC5209]  Sangster, P., Khosravi, H., Mani, M., Narayan, K., and J.
2243              Tardo, "Network Endpoint Assessment (NEA): Overview and
2244              Requirements", RFC 5209, DOI 10.17487/RFC5209, June 2008,
2245              <https://www.rfc-editor.org/info/rfc5209>.

2247   [RFC6024]  Reddy, R. and C. Wallace, "Trust Anchor Management
2248              Requirements", RFC 6024, DOI 10.17487/RFC6024, October
2249              2010, <https://www.rfc-editor.org/info/rfc6024>.

2251   [RFC8322]  Field, J., Banghart, S., and D. Waltermire, "Resource-
2252              Oriented Lightweight Information Exchange (ROLIE)",
2253              RFC 8322, DOI 10.17487/RFC8322, February 2018,
2254              <https://www.rfc-editor.org/info/rfc8322>.

2256   [strengthoffunction]
2257              NISC, "Strength of Function", n.d., .

2261   [TCG-DICE] Trusted Computing Group, "DICE Certificate Profiles",
2262              n.d., .

2266   [TCGarch]  Trusted Computing Group, "Trusted Platform Module Library
2267              - Part 1: Architecture", 8 November 2019, .

2271   [WebAuthN] W3C, "Web Authentication: An API for accessing Public Key
2272              Credentials", n.d., .

2274   Contributors

2276   Monty Wiseman

2278   Email: montywiseman32@gmail.com

2280   Liang Xia

2282   Email: frank.xialiang@huawei.com

2284   Laurence Lundblade

2286   Email: lgl@island-resort.com

2287   Eliot Lear

2289   Email: elear@cisco.com

2291   Jessica Fitzgerald-McKay

2293   Sarah C. Helbe

2295   Andrew Guinn

2297   Peter Loscocco

2299   Email: pete.loscocco@gmail.com

2301   Eric Voit

2303   Thomas Fossati

2305   Email: thomas.fossati@arm.com

2307   Paul Rowe

2309   Carsten Bormann

2311   Email: cabo@tzi.org

2313   Giri Mandyam

2315   Email: mandyam@qti.qualcomm.com

2317   Kathleen Moriarty

2319   Email: kathleen.moriarty.ietf@gmail.com

2321   Guy Fedorkow

2323   Email: gfedorkow@juniper.net

2324   Simon Frost

2326   Email: Simon.Frost@arm.com

2328   Authors' Addresses

2330   Henk Birkholz
2331   Fraunhofer SIT
2332   Rheinstrasse 75
2333   64295 Darmstadt
2334   Germany

2336   Email: henk.birkholz@sit.fraunhofer.de

2338   Dave Thaler
2339   Microsoft
2340   United States of America

2342   Email: dthaler@microsoft.com

2344   Michael Richardson
2345   Sandelman Software Works
2346   Canada

2348   Email: mcr+ietf@sandelman.ca

2350   Ned Smith
2351   Intel Corporation
2352   United States of America

2354   Email: ned.smith@intel.com

2356   Wei Pan
2357   Huawei Technologies

2359   Email: william.panwei@huawei.com