2 RATS Working Group H. Birkholz 3 Internet-Draft Fraunhofer SIT 4 Intended status: Informational D. Thaler 5 Expires: 1 October 2021 Microsoft 6 M. Richardson 7 Sandelman Software Works 8 N. Smith 9 Intel 10 W.
Pan 11 Huawei Technologies 12 30 March 2021 14 Remote Attestation Procedures Architecture 15 draft-ietf-rats-architecture-11 17 Abstract 19 In network protocol exchanges it is often useful for one end of a 20 communication to know whether the other end is in an intended 21 operating state. This document provides an architectural overview of 22 the entities involved that make such tests possible through the 23 process of generating, conveying, and evaluating evidentiary claims. 24 An attempt is made to provide for a model that is neutral toward 25 processor architectures, the content of claims, and protocols. 27 Note to Readers 29 Discussion of this document takes place on the RATS Working Group 30 mailing list (rats@ietf.org), which is archived at 31 https://mailarchive.ietf.org/arch/browse/rats/ 32 (https://mailarchive.ietf.org/arch/browse/rats/). 34 Source for this draft and an issue tracker can be found at 35 https://github.com/ietf-rats-wg/architecture (https://github.com/ 36 ietf-rats-wg/architecture). 38 Status of This Memo 40 This Internet-Draft is submitted in full conformance with the 41 provisions of BCP 78 and BCP 79. 43 Internet-Drafts are working documents of the Internet Engineering 44 Task Force (IETF). Note that other groups may also distribute 45 working documents as Internet-Drafts. The list of current Internet- 46 Drafts is at https://datatracker.ietf.org/drafts/current/. 48 Internet-Drafts are draft documents valid for a maximum of six months 49 and may be updated, replaced, or obsoleted by other documents at any 50 time. It is inappropriate to use Internet-Drafts as reference 51 material or to cite them other than as "work in progress." 53 This Internet-Draft will expire on 1 October 2021. 55 Copyright Notice 57 Copyright (c) 2021 IETF Trust and the persons identified as the 58 document authors. All rights reserved. 
60 This document is subject to BCP 78 and the IETF Trust's Legal 61 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 62 license-info) in effect on the date of publication of this document. 63 Please review these documents carefully, as they describe your rights 64 and restrictions with respect to this document. Code Components 65 extracted from this document must include Simplified BSD License text 66 as described in Section 4.e of the Trust Legal Provisions and are 67 provided without warranty as described in the Simplified BSD License. 69 Table of Contents 71 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 72 2. Reference Use Cases . . . . . . . . . . . . . . . . . . . . . 5 73 2.1. Network Endpoint Assessment . . . . . . . . . . . . . . . 5 74 2.2. Confidential Machine Learning Model Protection . . . . . 5 75 2.3. Confidential Data Protection . . . . . . . . . . . . . . 6 76 2.4. Critical Infrastructure Control . . . . . . . . . . . . . 6 77 2.5. Trusted Execution Environment Provisioning . . . . . . . 7 78 2.6. Hardware Watchdog . . . . . . . . . . . . . . . . . . . . 7 79 2.7. FIDO Biometric Authentication . . . . . . . . . . . . . . 7 80 3. Architectural Overview . . . . . . . . . . . . . . . . . . . 8 81 3.1. Appraisal Policies . . . . . . . . . . . . . . . . . . . 9 82 3.2. Reference Values . . . . . . . . . . . . . . . . . . . . 9 83 3.3. Two Types of Environments of an Attester . . . . . . . . 10 84 3.4. Layered Attestation Environments . . . . . . . . . . . . 11 85 3.5. Composite Device . . . . . . . . . . . . . . . . . . . . 13 86 3.6. Implementation Considerations . . . . . . . . . . . . . . 15 87 4. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 15 88 4.1. Roles . . . . . . . . . . . . . . . . . . . . . . . . . . 15 89 4.2. Artifacts . . . . . . . . . . . . . . . . . . . . . . . . 16 90 5. Topological Patterns . . . . . . . . . . . . . . . . . . . . 18 91 5.1. Passport Model . . . . . . . . . . . . . . 
. . . . . . . 18 92 5.2. Background-Check Model . . . . . . . . . . . . . . . . . 19 93 5.3. Combinations . . . . . . . . . . . . . . . . . . . . . . 20 94 6. Roles and Entities . . . . . . . . . . . . . . . . . . . . . 21 95 7. Trust Model . . . . . . . . . . . . . . . . . . . . . . . . . 22 96 7.1. Relying Party . . . . . . . . . . . . . . . . . . . . . . 22 97 7.2. Attester . . . . . . . . . . . . . . . . . . . . . . . . 23 98 7.3. Relying Party Owner . . . . . . . . . . . . . . . . . . . 24 99 7.4. Verifier . . . . . . . . . . . . . . . . . . . . . . . . 24 100 7.5. Endorser, Reference Value Provider, and Verifier Owner . 25 101 8. Conceptual Messages . . . . . . . . . . . . . . . . . . . . . 26 102 8.1. Evidence . . . . . . . . . . . . . . . . . . . . . . . . 26 103 8.2. Endorsements . . . . . . . . . . . . . . . . . . . . . . 26 104 8.3. Attestation Results . . . . . . . . . . . . . . . . . . . 27 105 9. Claims Encoding Formats . . . . . . . . . . . . . . . . . . . 28 106 10. Freshness . . . . . . . . . . . . . . . . . . . . . . . . . . 29 107 10.1. Explicit Timekeeping using Synchronized Clocks . . . . . 30 108 10.2. Implicit Timekeeping using Nonces . . . . . . . . . . . 30 109 10.3. Implicit Timekeeping using Epoch IDs . . . . . . . . . . 31 110 10.4. Discussion . . . . . . . . . . . . . . . . . . . . . . . 32 111 11. Privacy Considerations . . . . . . . . . . . . . . . . . . . 32 112 12. Security Considerations . . . . . . . . . . . . . . . . . . . 33 113 12.1. Attester and Attestation Key Protection . . . . . . . . 33 114 12.1.1. On-Device Attester and Key Protection . . . . . . . 34 115 12.1.2. Attestation Key Provisioning Processes . . . . . . . 34 116 12.2. Integrity Protection . . . . . . . . . . . . . . . . . . 35 117 12.3. Epoch ID-based Attestation . . . . . . . . . . . . . . . 36 118 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 37 119 14. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 37 120 15. 
Notable Contributions . . . . . . . . . . . . . . . . . . . . 37 121 16. Appendix A: Time Considerations . . . . . . . . . . . . . . . 37 122 16.1. Example 1: Timestamp-based Passport Model Example . . . 39 123 16.2. Example 2: Nonce-based Passport Model Example . . . . . 40 124 16.3. Example 3: Epoch ID-based Passport Model Example . . . . 42 125 16.4. Example 4: Timestamp-based Background-Check Model 126 Example . . . . . . . . . . . . . . . . . . . . . . . . 43 127 16.5. Example 5: Nonce-based Background-Check Model Example . 44 128 17. References . . . . . . . . . . . . . . . . . . . . . . . . . 45 129 17.1. Normative References . . . . . . . . . . . . . . . . . . 45 130 17.2. Informative References . . . . . . . . . . . . . . . . . 45 131 Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . 47 132 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 48 134 1. Introduction 136 The question of how one system can know that another system can be 137 trusted has found new interest and relevance in a world where trusted 138 computing elements are maturing in processor architectures. 140 Systems that have been attested and verified to be in a good state 141 (for some value of "good") can improve overall system posture. 142 Conversely, systems that cannot be attested and verified to be in a 143 good state can be taken out of service, or otherwise flagged for 144 repair. 146 For example: 148 * A bank back-end system might refuse to transact with another 149 system that is not known to be in a good state. 151 * A healthcare system might refuse to transmit electronic healthcare 152 records to a system that is not known to be in a good state. 154 In Remote Attestation Procedures (RATS), one peer (the "Attester") 155 produces believable information about itself - Evidence - to enable a 156 remote peer (the "Relying Party") to decide whether to consider that 157 Attester a trustworthy peer or not. 
RATS are facilitated by an 158 additional vital party, the Verifier. 160 The Verifier appraises Evidence via appraisal policies and creates 161 the Attestation Results to support Relying Parties in their decision 162 process. This document defines a flexible architecture consisting of 163 attestation roles and their interactions via conceptual messages. 164 Additionally, this document defines a universal set of terms that can 165 be mapped to various existing and emerging Remote Attestation 166 Procedures. Common topological patterns and the sequence of data 167 flows associated with them, such as the "Passport Model" and the 168 "Background-Check Model", are illustrated. The purpose is to define 169 useful terminology for remote attestation and enable readers to map 170 their solution architecture to the canonical attestation architecture 171 provided here. Having a common terminology that provides well- 172 understood meanings for common themes such as roles, device 173 composition, topological patterns, and appraisal procedures is vital 174 for semantic interoperability across solutions and platforms 175 involving multiple vendors and providers. 177 Amongst other things, this document is about trust and 178 trustworthiness. Trust is a choice one makes about another system. 179 Trustworthiness is a quality about the other system that can be used 180 in making one's decision to trust it or not. This is a subtle 181 difference, and being familiar with the difference is crucial for 182 using this document. Additionally, the concepts of freshness and 183 trust relationships with respect to RATS are elaborated on to enable 184 implementers to choose appropriate solutions to compose their Remote 185 Attestation Procedures. 187 2. Reference Use Cases 189 This section covers a number of representative and generic use cases 190 for remote attestation, independent of specific solutions.
The 191 purpose is to provide motivation for various aspects of the 192 architecture presented in this document. Many other use cases exist, 193 and this document does not intend to have a complete list, only to 194 illustrate a set of use cases that collectively cover all the 195 functionality required in the architecture. 197 Each use case includes a description followed by an additional 198 summary of the Attester and Relying Party roles derived from the use 199 case. 201 2.1. Network Endpoint Assessment 203 Network operators want a trustworthy report that includes identity 204 and version information about the hardware and software on the 205 machines attached to their network, for purposes such as inventory, 206 audit, anomaly detection, record maintenance and/or trending reports 207 (logging). The network operator may also want a policy by which full 208 access is only granted to devices that meet some definition of 209 hygiene, and so wants to get Claims about such information and verify 210 its validity. Remote attestation is desired to prevent vulnerable or 211 compromised devices from getting access to the network and 212 potentially harming others. 214 Typically, solutions start with a specific component (called a root 215 of trust) that is intended to provide trustworthy device identity and 216 protected storage for measurements. The system components perform a 217 series of measurements that may be signed via functions provided by a 218 root of trust, considered as Evidence about present system 219 components, such as hardware, firmware, BIOS, software, etc. 221 Attester: A device desiring access to a network. 223 Relying Party: Network equipment such as a router, switch, or access 224 point, responsible for admission of the device into the network. 226 2.2. Confidential Machine Learning Model Protection 228 A device manufacturer wants to protect its intellectual property. 
229 The intellectual property's scope primarily encompasses the machine 230 learning (ML) model that is deployed in the devices purchased by its 231 customers. The protection goals include preventing attackers, 232 potentially the customer themselves, from seeing the details of the 233 model. 235 This typically works by having some protected environment in the 236 device go through a remote attestation with some manufacturer service 237 that can assess its trustworthiness. If remote attestation succeeds, 238 then the manufacturer service releases either the model, or a key to 239 decrypt a model already deployed on the Attester in encrypted form, 240 to the requester. 242 Attester: A device desiring to run an ML model. 244 Relying Party: A server or service holding ML models it desires to 245 protect. 247 2.3. Confidential Data Protection 249 This is a generalization of the ML model use case above, where the 250 data can be any highly confidential data, such as health data about 251 customers, payroll data about employees, future business plans, etc. 252 As part of the attestation procedure, an assessment is made against a 253 set of policies to evaluate the state of the system that is 254 requesting the confidential data. Attestation is desired to prevent 255 leaking data via compromised devices. 257 Attester: An entity desiring to retrieve confidential data. 259 Relying Party: An entity that holds confidential data for release to 260 authorized entities. 262 2.4. Critical Infrastructure Control 264 Potentially harmful physical equipment (e.g., power grid, traffic 265 control, hazardous chemical processing, etc.) is connected to a 266 network in support of critical infrastructure. The organization 267 managing such infrastructure needs to ensure that only authorized 268 code and users can control corresponding critical processes, and that 269 these processes are protected from unauthorized manipulation or other 270 threats. 
When a protocol operation can affect a critical system 271 component of the infrastructure, devices attached to that critical 272 component require some assurances depending on the security context, 273 including that: a requesting device or application has not been 274 compromised, and the requesters and actors act on applicable 275 policies. As such, remote attestation can be used to only accept 276 commands from requesters that are within policy. 278 Attester: A device or application wishing to control physical 279 equipment. 281 Relying Party: A device or application connected to potentially 282 dangerous physical equipment (hazardous chemical processing, 283 traffic control, power grid, etc.). 285 2.5. Trusted Execution Environment Provisioning 287 A Trusted Application Manager (TAM) server is responsible for 288 managing the applications running in a Trusted Execution Environment 289 (TEE) of a client device. To achieve its purpose, the TAM needs to 290 assess the state of a TEE, or of applications in the TEE, of a client 291 device. The TEE conducts Remote Attestation Procedures with the TAM, 292 which can then decide whether the TEE is already in compliance with 293 the TAM's latest policy. If not, the TAM has to uninstall, update, 294 or install approved applications in the TEE to bring it back into 295 compliance with the TAM's policy. 297 Attester: A device with a TEE capable of running trusted 298 applications that can be updated. 300 Relying Party: A TAM. 302 2.6. Hardware Watchdog 304 There is a class of malware that holds a device hostage and does not 305 allow it to reboot to prevent updates from being applied. This can 306 be a significant problem, because it allows a fleet of devices to be 307 held hostage for ransom. 309 A solution to this problem is a watchdog timer implemented in a 310 protected environment such as a Trusted Platform Module (TPM), as 311 described in [TCGarch] section 43.3. 
If the watchdog does not 312 receive regular, and fresh, Attestation Results as to the system's 313 health, then it forces a reboot. 315 Attester: The device that should be protected from being held 316 hostage for a long period of time. 318 Relying Party: A watchdog capable of triggering a procedure that 319 resets a device into a known, good operational state. 321 2.7. FIDO Biometric Authentication 323 In the Fast IDentity Online (FIDO) protocol [WebAuthN], [CTAP], the 324 device in the user's hand authenticates the human user, whether by 325 biometrics (such as fingerprints), or by PIN and password. FIDO 326 authentication puts a large amount of trust in the device compared to 327 typical password authentication because it is the device that 328 verifies the biometric, PIN and password inputs from the user, not 329 the server. For the Relying Party to know that the authentication is 330 trustworthy, the Relying Party needs to know that the Authenticator 331 part of the device is trustworthy. The FIDO protocol employs remote 332 attestation for this. 334 The FIDO protocol supports several remote attestation protocols and a 335 mechanism by which new ones can be registered and added. Remote 336 attestation defined by RATS is thus a candidate for use in the FIDO 337 protocol. 339 Other biometric authentication protocols such as the Chinese IFAA 340 standard and WeChat Pay as well as Google Pay make use of remote 341 attestation in one form or another. 343 Attester: Every FIDO Authenticator contains an Attester. 345 Relying Party: Any web site, mobile application back-end, or service 346 that relies on authentication data based on biometric information. 348 3. Architectural Overview 350 Figure 1 depicts the data that flows between different roles, 351 independent of protocol or use case. 
353 ************ ************* ************ ***************** 354 * Endorser * * Reference * * Verifier * * Relying Party * 355 ************ * Value * * Owner * * Owner * 356 | * Provider * ************ ***************** 357 | ************* | | 358 | | | | 359 |Endorsements |Reference |Appraisal |Appraisal 360 | |Values |Policy |Policy for 361 | | |for |Attestation 362 .-----------. | |Evidence |Results 363 | | | | 364 | | | | 365 v v v | 366 .---------------------------. | 367 .----->| Verifier |------. | 368 | '---------------------------' | | 369 | | | 370 | Attestation| | 371 | Results | | 372 | Evidence | | 373 | | | 374 | v v 375 .----------. .---------------. 376 | Attester | | Relying Party | 377 '----------' '---------------' 378 Figure 1: Conceptual Data Flow 380 The text below summarizes the activities conducted by the roles 381 illustrated in Figure 1. 383 An Attester creates Evidence that is conveyed to a Verifier. 385 A Verifier uses the Evidence, any Reference Values from Reference 386 Value Providers, and any Endorsements from Endorsers, by applying an 387 Appraisal Policy for Evidence to assess the trustworthiness of the 388 Attester. This procedure is called the appraisal of Evidence. 390 Subsequently, the Verifier generates Attestation Results for use by 391 Relying Parties. The Appraisal Policy for Evidence might be obtained 392 from an Endorser along with the Endorsements, and/or might be 393 obtained via some other mechanism, such as being configured in the 394 Verifier by the Verifier Owner. 396 A Relying Party uses Attestation Results by applying its own 397 appraisal policy to make application-specific decisions, such as 398 authorization decisions. The Appraisal Policy for Attestation 399 Results is configured in the Relying Party by the Relying Party 400 Owner, and/or is programmed into the Relying Party. This procedure 401 is called the appraisal of Attestation Results. 403 3.1.
Appraisal Policies 405 The Verifier, when appraising Evidence, or the Relying Party, when 406 appraising Attestation Results, checks the values of some Claims 407 against constraints specified in its appraisal policy. Examples of 408 such constraints checking include: 410 * comparison for equality against a Reference Value, or 412 * a check for being in a range bounded by Reference Values, or 414 * membership in a set of Reference Values, or 416 * a check against values in other Claims. 418 The actual data format and semantics of any Appraisal Policy is 419 implementation specific. 421 3.2. Reference Values 423 Reference Values used in appraisal procedures come from a Reference 424 Value Provider and are then used by the appraisal policy. 426 The actual data format and semantics of any Reference Values are 427 specific to Claims and implementations. This architecture document 428 does not define any general purpose format for Reference Values or 429 general means for comparison. 431 3.3. Two Types of Environments of an Attester 433 As shown in Figure 2, an Attester consists of at least one Attesting 434 Environment and at least one Target Environment. In some 435 implementations, the Attesting and Target Environments might be 436 combined. Other implementations might have multiple Attesting and 437 Target Environments, such as in the examples described in more detail 438 in Section 3.4 and Section 3.5. Other examples may exist. All 439 compositions of Attesting and Target Environments discussed in this 440 architecture can be combined into more complex implementations. 442 .--------------------------------. 443 | | 444 | Verifier | 445 | | 446 '--------------------------------' 447 ^ 448 | 449 .-------------------------|----------. 450 | | | 451 | .----------------. 
| | 452 | | Target | | | 453 | | Environment | | | 454 | | | | Evidence | 455 | '----------------' | | 456 | | | | 457 | | | | 458 | Collect | | | 459 | Claims | | | 460 | | | | 461 | v | | 462 | .-------------. | 463 | | Attesting | | 464 | | Environment | | 465 | | | | 466 | '-------------' | 467 | Attester | 468 '------------------------------------' 470 Figure 2: Two Types of Environments 472 Claims are collected from Target Environments. That is, Attesting 473 Environments collect the values and the information to be represented 474 in Claims, by reading system registers and variables, calling into 475 subsystems, taking measurements on code, memory, or other security 476 related assets of the Target Environment. Attesting Environments 477 then format the Claims appropriately, and typically use key material 478 and cryptographic functions, such as signing or cipher algorithms, to 479 generate Evidence. There is no limit to or requirement on the types 480 of hardware or software environments that can be used to implement an 481 Attesting Environment, for example: Trusted Execution Environments 482 (TEEs), embedded Secure Elements (eSEs), Trusted Platform Modules 483 (TPMs), or BIOS firmware. 485 An arbitrary execution environment may not, by default, be capable of 486 Claims collection for a given Target Environment. Execution 487 environments that are designed specifically to be capable of Claims 488 collection are referred to in this document as Attesting 489 Environments. For example, a TPM doesn't actively collect Claims 490 itself; instead, it requires another component to feed various values 491 to the TPM. Thus, an Attesting Environment in such a case would be 492 the combination of the TPM together with whatever component is 493 feeding it the measurements. 495 3.4. Layered Attestation Environments 497 By definition, the Attester role generates Evidence. An Attester may 498 consist of one or more nested environments (layers).
The root layer 499 of an Attester includes at least one root of trust. In order to 500 appraise Evidence generated by an Attester, the Verifier needs to 501 trust the Attester's root of trust. Trust in the Attester's root of 502 trust can be established either directly (e.g., the Verifier puts the 503 root of trust's public key into its trust anchor store) or 504 transitively via an Endorser (e.g., the Verifier puts the Endorser's 505 public key into its trust anchor store). In layered attestation, a 506 root of trust is the initial Attesting Environment. Claims can be 507 collected from or about each layer. The corresponding Claims can be 508 structured in a nested fashion that reflects the nesting of the 509 Attester's layers. Normally, Claims are not self-asserted, rather a 510 previous layer acts as the Attesting Environment for the next layer. 511 Claims about a root of trust typically are asserted by an Endorser. 513 The device illustrated in Figure 3 includes (A) a BIOS stored in 514 read-only memory, (B) an operating system kernel, and (C) an 515 application or workload. 517 .-------------. Endorsement for A 518 | Endorser |-----------------------. 519 '-------------' | 520 v 521 .-------------. Reference .----------. 522 | Reference | Values | | 523 | Value |---------------->| Verifier | 524 | Provider(s) | for A, B, | | 525 '-------------' and C '----------' 526 ^ 527 .------------------------------------. | 528 | | | 529 | .---------------------------. | | 530 | | Target | | | Layered 531 | | Environment | | | Evidence 532 | | C | | | for 533 | '---------------------------' | | B and C 534 | Collect | | | 535 | Claims | | | 536 | .---------------|-----------. | | 537 | | Target v | | | 538 | | Environment .-----------. | | | 539 | | B | Attesting | | | | 540 | | |Environment|-----------' 541 | | | B | | | 542 | | '-----------' | | 543 | | ^ | | 544 | '---------------------|-----' | 545 | Collect | | Evidence | 546 | Claims v | for B | 547 | .-----------. 
| 548 | | Attesting | | 549 | |Environment| | 550 | | A | | 551 | '-----------' | 552 | | 553 '------------------------------------' 555 Figure 3: Layered Attester 557 Attesting Environment A, the read-only BIOS in this example, has to 558 ensure the integrity of the bootloader (Target Environment B). There 559 are potentially multiple kernels to boot, and the decision is up to 560 the bootloader. Only a bootloader with intact integrity will make an 561 appropriate decision. Therefore, the Claims relating to the 562 integrity of the bootloader have to be measured securely. At this 563 stage of the boot-cycle of the device, the Claims collected typically 564 cannot be composed into Evidence. 566 After the boot sequence is started, the BIOS conducts the most 567 important and defining feature of layered attestation, which is that 568 the successfully measured Target Environment B now becomes (or 569 contains) an Attesting Environment for the next layer. This 570 procedure in layered attestation is sometimes called "staging". It 571 is important that the new Attesting Environment B not be able to 572 alter any Claims about its own Target Environment B. This can be 573 ensured by having those Claims be either signed by Attesting Environment 574 A or stored in an untamperable manner by Attesting Environment A. 576 Continuing with this example, the bootloader's Attesting Environment 577 B is now in charge of collecting Claims about Target Environment C, 578 which in this example is the kernel to be booted. The final Evidence 579 thus contains two sets of Claims: one set about the bootloader as 580 measured and signed by the BIOS, plus a set of Claims about the 581 kernel as measured and signed by the bootloader. 583 This example could be extended further by making the kernel become 584 another Attesting Environment for an application as another Target 585 Environment. This would result in a third set of Claims in the 586 Evidence pertaining to that application.
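The staging sequence above can be sketched in Python. This is an illustrative sketch only: the key names, the HMAC-based stand-in for Evidence signing, and the dictionary layout are assumptions made for exposition, not anything this architecture prescribes.

```python
import hashlib
import hmac
import json

def measure(component: bytes) -> str:
    # A measurement is modeled here simply as a hash of the component's code.
    return hashlib.sha256(component).hexdigest()

def sign(key: bytes, claims: dict) -> dict:
    # HMAC stands in for the Attesting Environment's real signature scheme.
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims,
            "signature": hmac.new(key, payload, hashlib.sha256).hexdigest()}

# Hypothetical per-layer keys; a real Attester would derive and protect
# these via its root of trust.
KEY_A = b"attesting-environment-A"
KEY_B = b"attesting-environment-B"

# Staging: Attesting Environment A (the BIOS) measures and signs Claims
# about Target Environment B (the bootloader) before starting it ...
claims_about_b = sign(KEY_A, {"target": "bootloader",
                              "digest": measure(b"bootloader image")})

# ... then B, now acting as an Attesting Environment, measures and signs
# Claims about Target Environment C (the kernel).
claims_about_c = sign(KEY_B, {"target": "kernel",
                              "digest": measure(b"kernel image")})

# The final Evidence nests both Claim sets, mirroring the Attester's layers.
evidence = {"layer_A": claims_about_b, "layer_B": claims_about_c}
```

Because layer A signs the Claims about B before B runs, B cannot later alter the Claims about itself, which is the property the text above requires of staging.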
588 The essence of this example is a cascade of staged environments. 589 Each environment has the responsibility of measuring the next 590 environment before the next environment is started. In general, the 591 number of layers may vary by device or implementation, and an 592 Attesting Environment might even have multiple Target Environments 593 that it measures, rather than only one as shown in Figure 3. 595 3.5. Composite Device 597 A composite device is an entity composed of multiple sub-entities 598 such that its trustworthiness has to be determined by the appraisal 599 of all these sub-entities. 601 Each sub-entity has at least one Attesting Environment collecting the 602 Claims from at least one Target Environment; then this sub-entity 603 generates Evidence about its trustworthiness. Therefore, each sub- 604 entity can be called an Attester. Among all the Attesters, there may 605 be only some which have the ability to communicate with the Verifier 606 while others do not. 608 For example, a carrier-grade router consists of a chassis and 609 multiple slots. The trustworthiness of the router depends on all its 610 slots' trustworthiness. Each slot has an Attesting Environment, such 611 as a TEE, collecting the Claims of its boot process, after which it 612 generates Evidence from the Claims. 614 Among these slots, only a "main" slot can communicate with the 615 Verifier while other slots cannot. But other slots can communicate 616 with the main slot by the links between them inside the router. So 617 the main slot collects the Evidence of other slots, produces the 618 final Evidence of the whole router and conveys the final Evidence to 619 the Verifier. Therefore, the router is a composite device, each slot 620 is an Attester, and the main slot is the lead Attester. 622 Another example is a multi-chassis router composed of multiple single 623 carrier-grade routers.
Multi-chassis router setups create redundancy 624 groups that provide higher throughput by interconnecting multiple 625 routers in these groups, which can be treated as one logical router 626 for simpler management. A multi-chassis router setup provides a 627 management point that connects to the Verifier. Typically one router 628 in the group is designated as the main router. Other routers in the 629 multi-chassis setup are connected to the main router only via 630 physical network links and are therefore managed and appraised via 631 the main router's help. In consequence, a multi-chassis router setup 632 is a composite device, each router is an Attester, and the main 633 router is the lead Attester. 635 Figure 4 depicts the conceptual data flow for a composite device. 637 .-----------------------------. 638 | Verifier | 639 '-----------------------------' 640 ^ 641 | 642 | Evidence of 643 | Composite Device 644 | 645 .----------------------------------|-------------------------------. 646 | .--------------------------------|-----. .------------. | 647 | | Collect .------------. | | | | 648 | | Claims .--------->| Attesting |<--------| Attester B |-. | 649 | | | |Environment | | '------------. | | 650 | | .----------------. | |<----------| Attester C |-. | 651 | | | Target | | | | '------------' | | 652 | | | Environment(s) | | |<------------| ... | | 653 | | | | '------------' | Evidence '------------' | 654 | | '----------------' | of | 655 | | | Attesters | 656 | | lead Attester A | (via Internal Links or | 657 | '--------------------------------------' Network Connections) | 658 | | 659 | Composite Device | 660 '------------------------------------------------------------------' 661 Figure 4: Composite Device 663 In a composite device, each Attester generates its own Evidence by 664 its Attesting Environment(s) collecting the Claims from its Target 665 Environment(s). The lead Attester collects Evidence from other 666 Attesters and conveys it to a Verifier. 
Collection of Evidence from 667 sub-entities may itself be a form of Claims collection that results 668 in Evidence asserted by the lead Attester. The lead Attester 669 generates Evidence about the layout of the whole composite device, 670 while sub-Attesters generate Evidence about their respective 671 (sub-)modules. 673 In this scenario, the trust model described in Section 7 can also be 674 applied to an inside Verifier. 676 3.6. Implementation Considerations 678 An entity can take on multiple RATS roles (e.g., Attester, Verifier, 679 Relying Party, etc.) at the same time. Multiple entities can 680 cooperate to implement a single RATS role as well. In essence, the 681 combination of roles and entities can be arbitrary. For example, in 682 the composite device scenario, the entity inside the lead Attester 683 can also take on the role of a Verifier, and the outer entity of 684 Verifier can take on the role of a Relying Party. After collecting 685 the Evidence of other Attesters, this inside Verifier uses 686 Endorsements and appraisal policies (obtained the same way as by any 687 other Verifier) as part of the appraisal procedures that generate 688 Attestation Results. The inside Verifier then conveys the 689 Attestation Results of other Attesters to the outside Verifier, 690 whether in the same conveyance protocol as part of the Evidence or 691 not. 693 4. Terminology 695 This document uses the following terms. 697 4.1. Roles 699 Attester: A role performed by an entity (typically a device) whose 700 Evidence must be appraised in order to infer the extent to which 701 the Attester is considered trustworthy, such as when deciding 702 whether it is authorized to perform some operation. 704 Produces: Evidence 706 Relying Party: A role performed by an entity that depends on the 707 validity of information about an Attester, for purposes of 708 reliably applying application specific actions. Compare /relying 709 party/ in [RFC4949]. 
711 Consumes: Attestation Results 713 Verifier: A role performed by an entity that appraises the validity 714 of Evidence about an Attester and produces Attestation Results to 715 be used by a Relying Party. 717 Consumes: Evidence, Reference Values, Endorsements, Appraisal 718 Policy for Evidence 720 Produces: Attestation Results 722 Relying Party Owner: A role performed by an entity (typically an 723 administrator) that is authorized to configure Appraisal Policy 724 for Attestation Results in a Relying Party. 726 Produces: Appraisal Policy for Attestation Results 728 Verifier Owner: A role performed by an entity (typically an 729 administrator) that is authorized to configure Appraisal Policy 730 for Evidence in a Verifier. 732 Produces: Appraisal Policy for Evidence 734 Endorser: A role performed by an entity (typically a manufacturer) 735 whose Endorsements help Verifiers appraise the authenticity of 736 Evidence. 738 Produces: Endorsements 740 Reference Value Provider: A role performed by an entity (typically a 741 manufacturer) whose Reference Values help Verifiers appraise 742 Evidence to determine if acceptable known Claims have been 743 recorded by the Attester. 745 Produces: Reference Values 747 4.2. Artifacts 749 Claim: A piece of asserted information, often in the form of a name/ 750 value pair. Claims make up the usual structure of Evidence and 751 other RATS artifacts. Compare /claim/ in [RFC7519]. 753 Endorsement: A secure statement that an Endorser vouches for the 754 integrity of an Attester's various capabilities such as Claims 755 collection and Evidence signing. 757 Consumed By: Verifier 759 Produced By: Endorser 761 Evidence: A set of Claims generated by an Attester to be appraised 762 by a Verifier. Evidence may include configuration data, 763 measurements, telemetry, or inferences.
765 Consumed By: Verifier 767 Produced By: Attester 769 Attestation Result: The output generated by a Verifier, typically 770 including information about an Attester, where the Verifier 771 vouches for the validity of the results. 773 Consumed By: Relying Party 775 Produced By: Verifier 777 Appraisal Policy for Evidence: A set of rules that informs how a 778 Verifier evaluates the validity of information about an Attester. 779 Compare /security policy/ in [RFC4949]. 781 Consumed By: Verifier 783 Produced By: Verifier Owner 785 Appraisal Policy for Attestation Results: A set of rules that direct 786 how a Relying Party uses the Attestation Results regarding an 787 Attester generated by the Verifiers. Compare /security policy/ in 788 [RFC4949]. 790 Consumed by: Relying Party 792 Produced by: Relying Party Owner 794 Reference Values: A set of values against which values of Claims can 795 be compared as part of applying an Appraisal Policy for Evidence. 796 Reference Values are sometimes referred to in other documents as 797 known-good values, golden measurements, or nominal values, 798 although those terms typically assume comparison for equality, 799 whereas here Reference Values might be more general and be used in 800 any sort of comparison. 802 Consumed By: Verifier 804 Produced By: Reference Value Provider 806 5. Topological Patterns 808 Figure 1 shows a data-flow diagram for communication between an 809 Attester, a Verifier, and a Relying Party. The Attester conveys its 810 Evidence to the Verifier for appraisal, and the Relying Party 811 receives the Attestation Result from the Verifier. This section 812 refines the data-flow diagram by describing two reference models, as 813 well as one example composition thereof. The discussion that follows 814 is for illustrative purposes only and does not constrain the 815 interactions between RATS roles to the presented patterns. 817 5.1. 
Passport Model 819 The passport model is so named because of its resemblance to how 820 nations issue passports to their citizens. The nature of the 821 Evidence that an individual needs to provide to its local authority 822 is specific to the country involved. The citizen retains control of 823 the resulting passport document and presents it to other entities 824 when it needs to assert a citizenship or identity Claim, such as an 825 airport immigration desk. The passport is considered sufficient 826 because it vouches for the citizenship and identity Claims, and it is 827 issued by a trusted authority. Thus, in this immigration desk 828 analogy, the passport issuing agency is a Verifier, the passport is 829 an Attestation Result, and the immigration desk is a Relying Party. 831 In this model, an Attester conveys Evidence to a Verifier, which 832 compares the Evidence against its appraisal policy. The Verifier 833 then gives back an Attestation Result. If the Attestation Result is 834 a successful one, the Attester can then present the Attestation 835 Result (and possibly additional Claims) to a Relying Party, which 836 then compares this information against its own appraisal policy. 838 The process may fail in three ways: 840 * The Verifier may not issue a positive Attestation Result because 841 the Evidence does not pass the Appraisal Policy for Evidence. 843 * The Relying Party may examine the Attestation Result and find 844 that, based upon the Appraisal Policy for Attestation Results, 845 the result does not pass the policy. 848 * The Verifier may be unreachable or unavailable. 850 Since the resource access protocol between the Attester and Relying 851 Party includes an Attestation Result, in this model the details of 852 that protocol constrain the serialization format of the Attestation 853 Result.
The format of the Evidence on the other hand is only 854 constrained by the Attester-Verifier remote attestation protocol. 855 This implies that interoperability and standardization are more 856 relevant for Attestation Results than they are for Evidence. 858 +------------+ 859 | | Compare Evidence 860 | Verifier | against appraisal policy 861 | | 862 +------------+ 863 ^ | 864 Evidence | | Attestation 865 | | Result 866 | v 867 +------------+ +-------------+ 868 | |------------->| | Compare Attestation 869 | Attester | Attestation | Relying | Result against 870 | | Result | Party | appraisal policy 871 +------------+ +-------------+ 873 Figure 5: Passport Model 875 5.2. Background-Check Model 877 The background-check model is so named because of its resemblance to 878 how employers and volunteer organizations perform background checks. 879 When a prospective employee provides Claims about education or 880 previous experience, the employer will contact the respective 881 institutions or former employers to validate the Claim. Volunteer 882 organizations often perform police background checks on volunteers in 883 order to determine the volunteer's trustworthiness. Thus, in this 884 analogy, a prospective volunteer is an Attester, the organization is 885 the Relying Party, and the organization that issues a report is a 886 Verifier. 888 In this model, an Attester conveys Evidence to a Relying Party, which 889 simply passes it on to a Verifier. The Verifier then compares the 890 Evidence against its appraisal policy, and returns an Attestation 891 Result to the Relying Party. The Relying Party then compares the 892 Attestation Result against its own appraisal policy. 894 The resource access protocol between the Attester and Relying Party 895 includes Evidence rather than an Attestation Result, but that 896 Evidence is not processed by the Relying Party.
Since the Evidence 897 is merely forwarded on to a trusted Verifier, any serialization 898 format can be used for Evidence because the Relying Party does not 899 need a parser for it. The only requirement is that the Evidence can 900 be _encapsulated in_ the format required by the resource access 901 protocol between the Attester and Relying Party. 903 However, like in the Passport model, an Attestation Result is still 904 consumed by the Relying Party. Code footprint and attack surface 905 area can be minimized by using a serialization format for which the 906 Relying Party already needs a parser to support the protocol between 907 the Attester and Relying Party, which may be an existing standard or 908 widely deployed resource access protocol. Such minimization is 909 especially important if the Relying Party is a constrained node. 911 +-------------+ 912 | | Compare Evidence 913 | Verifier | against appraisal 914 | | policy 915 +-------------+ 916 ^ | 917 Evidence | | Attestation 918 | | Result 919 | v 920 +------------+ +-------------+ 921 | |-------------->| | Compare Attestation 922 | Attester | Evidence | Relying | Result against 923 | | | Party | appraisal policy 924 +------------+ +-------------+ 926 Figure 6: Background-Check Model 928 5.3. Combinations 930 One variation of the background-check model is where the Relying 931 Party and the Verifier are on the same machine, performing both 932 functions together. In this case, there is no need for a protocol 933 between the two. 935 It is also worth pointing out that the choice of model depends on the 936 use case, and that different Relying Parties may use different 937 topological patterns. 939 The same device may need to create Evidence for different Relying 940 Parties and/or different use cases. 
For instance, it would use one 941 model to provide Evidence to a network infrastructure device to gain 942 access to the network, and the other model to provide Evidence to a 943 server holding confidential data to gain access to that data. As 944 such, both models may simultaneously be in use by the same device. 946 Figure 7 shows another example of a combination where Relying Party 1 947 uses the passport model, whereas Relying Party 2 uses an extension of 948 the background-check model. Specifically, in addition to the basic 949 functionality shown in Figure 6, Relying Party 2 actually provides 950 the Attestation Result back to the Attester, allowing the Attester to 951 use it with other Relying Parties. This is the model that the 952 Trusted Application Manager plans to support in the TEEP architecture 953 [I-D.ietf-teep-architecture]. 955 +-------------+ 956 | | Compare Evidence 957 | Verifier | against appraisal policy 958 | | 959 +-------------+ 960 ^ | 961 Evidence | | Attestation 962 | | Result 963 | v 964 +-------------+ 965 | | Compare 966 | Relying | Attestation Result 967 | Party 2 | against appraisal policy 968 +-------------+ 969 ^ | 970 Evidence | | Attestation 971 | | Result 972 | v 973 +-------------+ +-------------+ 974 | |-------------->| | Compare Attestation 975 | Attester | Attestation | Relying | Result against 976 | | Result | Party 1 | appraisal policy 977 +-------------+ +-------------+ 979 Figure 7: Example Combination 981 6. Roles and Entities 983 An entity in the RATS architecture includes at least one of the roles 984 defined in this document. 986 An entity can aggregate more than one role into itself, such as being 987 both a Verifier and a Relying Party, or being both a Reference Value 988 Provider and an Endorser. As such, any conceptual messages (see 989 Section 8 for more discussion) originating from such roles might also 990 be combined. 
For example, Reference Values might be conveyed as part 991 of an appraisal policy if the Verifier Owner and Reference Value 992 Provider roles are combined. Similarly, Reference Values might be 993 conveyed as part of an Endorsement if the Endorser and Reference 994 Value Provider roles are combined. 996 Interactions between roles aggregated into the same entity do not 997 necessarily use the Internet Protocol. Such interactions might use a 998 loopback device or other IP-based communication between separate 999 environments, but they do not have to. Alternative channels to 1000 convey conceptual messages include function calls, sockets, GPIO 1001 interfaces, local busses, or hypervisor calls. This type of 1002 conveyance is typically found in composite devices. Most 1003 importantly, these conveyance methods are out-of-scope of RATS, but 1004 they are presumed to exist in order to convey conceptual messages 1005 appropriately between roles. 1007 For example, an entity that both connects to a wide-area network and 1008 to a system bus is taking on both the Attester and Verifier roles. 1009 As a system bus-connected entity, a Verifier consumes Evidence from 1010 other devices connected to the system bus that implement Attester 1011 roles. As a wide-area network connected entity, it may implement an 1012 Attester role. 1014 In essence, an entity that combines more than one role creates and 1015 consumes the corresponding conceptual messages as defined in this 1016 document. 1018 7. Trust Model 1020 7.1. Relying Party 1022 This document covers scenarios for which a Relying Party trusts a 1023 Verifier that can appraise the trustworthiness of information about 1024 an Attester. Such trust might come by the Relying Party trusting the 1025 Verifier (or its public key) directly, or might come by trusting an 1026 entity (e.g., a Certificate Authority) that is in the Verifier's 1027 certificate chain. 
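The two trust paths just described, direct trust in the Verifier's key or trust via an entity (such as a Certificate Authority) in its certificate chain, can be sketched as follows. This is a hedged, illustrative toy model, not real PKI; every name and data structure below is a hypothetical assumption, not part of this architecture.

```python
# Toy sketch (not real PKI) of the two trust paths described above: a
# Relying Party either trusts the Verifier's public key directly, or
# trusts an entity (e.g., a CA) appearing in the Verifier's certificate
# chain.  All names and structures here are hypothetical.

def verifier_is_trusted(verifier_key, cert_chain, trusted_keys, trusted_cas):
    """cert_chain: list of (subject_key, issuer_name) pairs, leaf first."""
    if verifier_key in trusted_keys:        # direct trust in the key itself
        return True
    for subject_key, issuer_name in cert_chain:
        # trust inherited from a trusted issuer found in the chain
        if subject_key == verifier_key and issuer_name in trusted_cas:
            return True
    return False
```

A real implementation would of course validate signatures, expiry, and revocation along the chain; the sketch only shows where the two trust anchors enter the decision.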
1029 The Relying Party might implicitly trust a Verifier, such as in a 1030 Verifier/Relying Party combination where the Verifier and Relying 1031 Party roles are combined. Or, for a stronger level of security, the 1032 Relying Party might require that the Verifier first provide 1033 information about itself that the Relying Party can use to assess the 1034 trustworthiness of the Verifier before accepting its Attestation 1035 Results. 1037 For example, one explicit way for a Relying Party "A" to establish 1038 such trust in a Verifier "B" would be for B to first act as an 1039 Attester where A acts as a combined Verifier/Relying Party. If A 1040 then accepts B as trustworthy, it can choose to accept B as a 1041 Verifier for other Attesters. 1043 As another example, the Relying Party can establish trust in the 1044 Verifier by out-of-band establishment of key material, combined with 1045 a protocol like TLS to communicate. There is an assumption that, 1046 between the establishment of the trusted key material and the 1047 creation of the Evidence, the Verifier has not been compromised. 1049 Similarly, the Relying Party also needs to trust the Relying Party 1050 Owner for providing its Appraisal Policy for Attestation Results, and 1051 in some scenarios the Relying Party might even require that the 1052 Relying Party Owner go through a remote attestation procedure with it 1053 before the Relying Party will accept an updated policy. This can be 1054 done similarly to how a Relying Party could establish trust in a 1055 Verifier as discussed above. 1057 7.2. Attester 1059 In some scenarios, Evidence might contain sensitive information such 1060 as Personally Identifiable Information (PII) or system identifiable 1061 information. Thus, an Attester must trust the entities to which it 1062 conveys Evidence not to reveal sensitive data to unauthorized 1063 parties.
The Verifier might share this information with other 1064 authorized parties, according to a governing policy that addresses the 1065 handling of sensitive information (potentially included in Appraisal 1066 Policies for Evidence). In the background-check model, this Evidence 1067 may also be revealed to Relying Parties. 1069 When Evidence contains sensitive information, an Attester typically 1070 requires that a Verifier authenticate itself (e.g., at TLS session 1071 establishment) and might even request a remote attestation before the 1072 Attester sends the sensitive Evidence. This can be done by having 1073 the Attester first act as a Verifier/Relying Party, and the Verifier 1074 act as its own Attester, as discussed above. 1076 7.3. Relying Party Owner 1078 The Relying Party Owner might also require that the Relying Party 1079 first act as an Attester, providing Evidence that the Owner can 1080 appraise, before the Owner would give the Relying Party an updated 1081 policy that might contain sensitive information. In such cases, 1082 authentication or attestation in both directions might be needed; 1083 typically, one side's Evidence must then be considered safe to 1084 share with an untrusted entity in order to bootstrap the sequence. 1085 See Section 11 for more discussion. 1087 7.4. Verifier 1089 The Verifier trusts (or more specifically, the Verifier's security 1090 policy is written in a way that configures the Verifier to trust) a 1091 manufacturer, or the manufacturer's hardware, so as to be able to 1092 appraise the trustworthiness of that manufacturer's devices. In a 1093 typical solution, a Verifier comes to trust an Attester indirectly by 1094 having an Endorser (such as a manufacturer) vouch for the Attester's 1095 ability to securely generate Evidence.
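The indirect trust path just described can be sketched as a simple check: the Verifier accepts an Evidence signing key only if an Endorser it trusts vouches for that key. The sketch below is purely illustrative; the function, the endorsement mapping, and all names are assumptions introduced here, not defined by this architecture.

```python
# Hypothetical sketch of the indirect trust path described above: the
# Verifier trusts an Endorser (e.g., a manufacturer) and therefore
# accepts Evidence signing keys that the Endorser vouches for.

def evidence_key_is_acceptable(signing_key, endorsements, trusted_endorsers):
    """endorsements: mapping of endorser name -> set of endorsed keys."""
    return any(
        endorser in trusted_endorsers and signing_key in keys
        for endorser, keys in endorsements.items()
    )
```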
1097 In some solutions, a Verifier might be configured to directly trust 1098 an Attester by having the Attester's key material 1099 (rather than the Endorser's) in its trust anchor store. 1101 Such direct trust must first be established at the time of trust 1102 anchor store configuration either by checking with an Endorser at 1103 that time, or by conducting a security analysis of the specific 1104 device. Having the Attester directly in the trust anchor store 1105 narrows the Verifier's trust to only specific devices rather than all 1106 devices the Endorser might vouch for, such as all devices 1107 manufactured by the same manufacturer in the case that the Endorser 1108 is a manufacturer. 1110 Such narrowing is often important since physical possession of a 1111 device can also be used to conduct a number of attacks, and so a 1112 device in a physically secure environment (such as one's own 1113 premises) may be considered trusted whereas devices owned by others 1114 would not be. This often results in a desire to either have the 1115 owner run their own Endorser that would only endorse devices one 1116 owns, or to use Attesters directly in the trust anchor store. When 1117 an owner has many Attesters, the use of an Endorser enables better 1118 scalability. 1120 A Verifier might appraise the trustworthiness of an 1121 application component, operating system component, or service under 1122 the assumption that information provided about it by the lower-layer 1123 firmware or software is true. A stronger level of assurance of 1124 security comes when information can be vouched for by hardware or by 1125 ROM code, especially if such hardware is physically resistant to 1126 hardware tampering. In most cases, components that have to be 1127 vouched for via Endorsements because no Evidence is generated about 1128 them are referred to as roots of trust.
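The layered appraisal described above can be sketched as follows: the root of trust itself produces no Evidence and is covered by an Endorsement, while the measurement of each higher layer is checked against Reference Values. This is a hedged illustration; the layer names, the use of SHA-256, and the data shapes are assumptions made for the example, not requirements of this architecture.

```python
import hashlib

def appraise_layers(measurements, reference_values):
    """Illustrative layered appraisal sketch (names are hypothetical).

    measurements: list of (layer_name, measured_bytes), lowest layer
    first; reference_values: mapping of layer_name -> expected hex
    digest.  The root of trust produces no Evidence about itself (it is
    vouched for by an Endorsement), so it does not appear here."""
    for name, blob in measurements:
        if hashlib.sha256(blob).hexdigest() != reference_values.get(name):
            return (False, name)    # first layer failing appraisal
    return (True, None)
```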
1130 Once the manufacturer has arranged for an Attesting Environment to be 1131 provisioned with key material with which to sign Evidence, the 1132 Verifier must be provided with some way of verifying the signature on 1133 the Evidence. This may be in the form of an appropriate trust 1134 anchor, or the Verifier may be provided with a database of public 1135 keys (rather than certificates) or even carefully curated and secured 1136 lists of symmetric keys. 1138 How the Verifier validates the signatures 1139 produced by the Attester is critical to the secure operation of a 1140 remote attestation system, but it is not the subject of standardization 1141 within this architecture. 1143 A conveyance protocol that provides authentication and integrity 1144 protection can be used to convey Evidence that is otherwise 1145 unprotected (e.g., not signed). Appropriate conveyance of 1146 unprotected Evidence (e.g., [I-D.birkholz-rats-uccs]) relies on the 1147 conveyance protocol providing the following protection capabilities: 1149 1. The key material used to authenticate and integrity protect the 1150 conveyance channel is trusted by the Verifier to speak for the 1151 Attesting Environment(s) that collected Claims about the Target 1152 Environment(s). 1154 2. All unprotected Evidence that is conveyed is supplied exclusively 1155 by the Attesting Environment that has the key material that 1156 protects the conveyance channel. 1158 3. The root of trust protects both the conveyance channel key 1159 material and the Attesting Environment with equivalent strength 1160 protections. 1162 See Section 12 for discussion on security strength. 1164 7.5. Endorser, Reference Value Provider, and Verifier Owner 1166 In some scenarios, the Endorser, Reference Value Provider, and 1167 Verifier Owner may need to trust the Verifier before giving the 1168 Endorsement, Reference Values, or appraisal policy to it.
This can 1169 be done similarly to how a Relying Party might establish trust in a 1170 Verifier. 1172 As discussed in Section 7.3, authentication or attestation in both 1173 directions might be needed, in which case typically one side's 1174 identity or Evidence must be considered safe to share with an 1175 untrusted entity, in order to bootstrap the sequence. See Section 11 1176 for more discussion. 1178 8. Conceptual Messages 1180 8.1. Evidence 1182 Evidence is a set of Claims about the target environment that reveal 1183 operational status, health, configuration or construction that have 1184 security relevance. Evidence is appraised by a Verifier to establish 1185 its relevance, compliance, and timeliness. Claims need to be 1186 collected in a manner that is reliable. Evidence needs to be 1187 securely associated with the target environment so that the Verifier 1188 cannot be tricked into accepting Claims originating from a different 1189 environment (that may be more trustworthy). Evidence also must be 1190 protected from man-in-the-middle attackers who may observe, change or 1191 misdirect Evidence as it travels from Attester to Verifier. The 1192 timeliness of Evidence can be captured using Claims that pinpoint the 1193 time or interval when changes in operational status, health, and so 1194 forth occur. 1196 8.2. Endorsements 1198 An Endorsement is a secure statement that some entity (e.g., a 1199 manufacturer) vouches for the integrity of the device's signing 1200 capability. For example, if the signing capability is in hardware, 1201 then an Endorsement might be a manufacturer certificate that signs a 1202 public key whose corresponding private key is only known inside the 1203 device's hardware. Thus, when Evidence and such an Endorsement are 1204 used together, an appraisal procedure can be conducted based on 1205 appraisal policies that may not be specific to the device instance, 1206 but merely specific to the manufacturer providing the Endorsement. 
1207 For example, an appraisal policy might simply check that devices from 1208 a given manufacturer have information matching a set of Reference 1209 Values, or an appraisal policy might apply more complex logic 1210 when appraising the validity of information. 1212 However, while an appraisal policy that treats all devices from a 1213 given manufacturer the same may be appropriate for some use cases, it 1214 would be inappropriate to use such an appraisal policy as the sole 1215 means of authorization for use cases that wish to constrain _which_ 1216 compliant devices are considered authorized for some purpose. For 1217 example, an enterprise using remote attestation for Network Endpoint 1218 Assessment [RFC5209] may not wish to let every healthy laptop from 1219 the same manufacturer onto the network, but instead may want to let 1220 only devices that it legally owns onto the network. Thus, an Endorsement 1221 may be helpful in authenticating information about a 1222 device, but it is not necessarily sufficient to authorize access to 1223 resources, which may need device-specific information such as a public 1224 key for the device, component, or user on the device. 1226 8.3. Attestation Results 1228 Attestation Results are the input used by the Relying Party to decide 1229 the extent to which it will trust a particular Attester and allow it 1230 to access some data or perform some operation. 1232 Attestation Results may carry a boolean value indicating compliance 1233 or non-compliance with a Verifier's appraisal policy, or may carry a 1234 richer set of Claims about the Attester, against which the Relying 1235 Party applies its Appraisal Policy for Attestation Results. 1237 The quality of the Attestation Results depends upon the ability of 1238 the Verifier to evaluate the Attester.
Different Attesters have a 1239 different _Strength of Function_ [strengthoffunction], which results 1240 in the Attestation Results being qualitatively different in strength. 1242 An Attestation Result that indicates non-compliance can be used by an 1243 Attester (in the passport model) or a Relying Party (in the 1244 background-check model) to indicate that the Attester should not be 1245 treated as authorized and may be in need of remediation. In some 1246 cases, it may even indicate that the Evidence itself cannot be 1247 authenticated as being correct. 1249 By default, the Relying Party does not believe the Attester to be 1250 compliant. Upon receipt of an authentic Attestation Result, and 1251 provided that the Appraisal Policy for Attestation Results is satisfied, the 1252 Attester is allowed the prescribed actions or access. The 1253 simplest such appraisal policy might authorize granting the Attester 1254 full access or control over the resources guarded by the Relying 1255 Party. A more complex appraisal policy might involve using the 1256 information provided in the Attestation Result to compare against 1257 expected values, or to apply complex analysis of other information 1258 contained in the Attestation Result. 1260 Thus, Attestation Results often need to include detailed information 1261 about the Attester, for use by Relying Parties, much like physical 1262 passports and driver's licenses include personal information such as 1263 name and date of birth. Unlike Evidence, which is often very device- 1264 and vendor-specific, Attestation Results can be vendor-neutral, if 1265 the Verifier has a way to generate vendor-agnostic information based 1266 on the appraisal of vendor-specific information in Evidence. This 1267 allows a Relying Party's appraisal policy to be simpler, potentially 1268 based on standard ways of expressing the information, while still 1269 allowing interoperability with heterogeneous devices.
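A Relying Party's Appraisal Policy for Attestation Results over such a richer Claims set can be sketched as a collection of predicates, one per Claim. The claim names and predicate choices below are purely illustrative assumptions, not claims defined by this document.

```python
# Illustrative sketch of a Relying Party applying an Appraisal Policy
# for Attestation Results to a rich set of Claims rather than a single
# boolean.  Claim names and predicates here are hypothetical.

def apply_policy(attestation_result, policy):
    """attestation_result: mapping of claim name -> value;
    policy: mapping of claim name -> predicate that must hold."""
    return all(check(attestation_result.get(name))
               for name, check in policy.items())

example_policy = {
    "hw-model": lambda v: v == "example-model",   # expected-value comparison
    "secure-boot": lambda v: v is True,           # boolean compliance claim
    "debug-status": lambda v: v in ("disabled", "disabled-permanently"),
}
```

Expressing policy as per-Claim predicates mirrors the text above: the simplest policy is a single boolean check, while richer policies compare individual Claims against expected values.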
1271 Finally, whereas Evidence is signed by the device (or indirectly by a 1272 manufacturer, if Endorsements are used), Attestation Results are 1273 signed by a Verifier, allowing a Relying Party to only need a trust 1274 relationship with one entity, rather than a larger set of entities, 1275 for purposes of its appraisal policy. 1277 9. Claims Encoding Formats 1279 The following diagram illustrates a relationship to which remote 1280 attestation is desired to be added: 1282 +-------------+ +------------+ Evaluate 1283 | |-------------->| | request 1284 | Attester | Access some | Relying | against 1285 | | resource | Party | security 1286 +-------------+ +------------+ policy 1288 Figure 8: Typical Resource Access 1290 In this diagram, the protocol between Attester and a Relying Party 1291 can be any new or existing protocol (e.g., HTTP(S), COAP(S), ROLIE 1292 [RFC8322], 802.1x, OPC UA [OPCUA], etc.), depending on the use case. 1294 Typically, such protocols already have mechanisms for passing 1295 security information for authentication and authorization purposes. 1296 Common formats include JWTs [RFC7519], CWTs [RFC8392], and X.509 1297 certificates. 1299 Retrofitting already deployed protocols with remote attestation 1300 requires adding RATS conceptual messages to the existing data flows. 1301 This must be done in a way that does not degrade the security 1302 properties of the systems involved and should use native extension 1303 mechanisms provided by the underlying protocol. For example, if a 1304 TLS handshake is to be extended with remote attestation capabilities, 1305 attestation Evidence may be embedded in an ad-hoc X.509 certificate 1306 extension (e.g., [TCG-DICE]), or into a new TLS Certificate Type 1307 (e.g., [I-D.tschofenig-tls-cwt]). 1309 Especially for constrained nodes there is a desire to minimize the 1310 amount of parsing code needed in a Relying Party, in order to both 1311 minimize footprint and to minimize the attack surface. 
While it 1312 would be possible to embed a CWT inside a JWT, or a JWT inside an 1313 X.509 extension, etc., there is a desire to encode the information 1314 natively in a format that is already supported by the Relying Party. 1316 This motivates having a common "information model" that describes the 1317 set of remote attestation related information in an encoding-agnostic 1318 way, and allowing multiple encoding formats (CWT, JWT, X.509, etc.) 1319 that encode the same information into the Claims format needed by the 1320 Relying Party. 1322 The following diagram illustrates that Evidence and Attestation 1323 Results might be expressed via multiple potential encoding formats, 1324 so that they can be conveyed by various existing protocols. It also 1325 motivates why the Verifier might also be responsible for accepting 1326 Evidence that encodes Claims in one format, while issuing Attestation 1327 Results that encode Claims in a different format. 1329 Evidence Attestation Results 1330 .--------------. CWT CWT .-------------------. 1331 | Attester-A |------------. .----------->| Relying Party V | 1332 '--------------' v | `-------------------' 1333 .--------------. JWT .------------. JWT .-------------------. 1334 | Attester-B |-------->| Verifier |-------->| Relying Party W | 1335 '--------------' | | `-------------------' 1336 .--------------. X.509 | | X.509 .-------------------. 1337 | Attester-C |-------->| |-------->| Relying Party X | 1338 '--------------' | | `-------------------' 1339 .--------------. TPM | | TPM .-------------------. 1340 | Attester-D |-------->| |-------->| Relying Party Y | 1341 '--------------' '------------' `-------------------' 1342 .--------------. other ^ | other .-------------------. 1343 | Attester-E |------------' '----------->| Relying Party Z | 1344 '--------------' `-------------------' 1346 Figure 9: Multiple Attesters and Relying Parties with Different 1347 Formats 1349 10. 
Freshness 1351 A Verifier or Relying Party might need to learn the point in time 1352 (i.e., the "epoch") at which Evidence or an Attestation Result was 1353 produced. This is essential in deciding whether the included Claims 1354 and their values can be considered fresh, meaning they still reflect 1355 the latest state of the Attester, and that any Attestation Result was 1356 generated using the latest Appraisal Policy for Evidence. 1358 Freshness is assessed based on the Appraisal Policy for Evidence or 1359 Attestation Results that compares the estimated epoch against an 1360 "expiry" threshold defined locally to that policy. There is, 1361 however, always a possible race condition in that the state of the 1362 Attester and the appraisal policies might change immediately after 1363 the Evidence or Attestation Result was generated. The goal is merely 1364 to narrow their recentness to something the Verifier (for Evidence) 1365 or Relying Party (for Attestation Results) is willing to accept. Some 1366 flexibility on the freshness requirement is a key component for 1367 enabling caching and reuse of both Evidence and Attestation Results, 1368 which is especially valuable in cases where their computation uses a 1369 substantial part of the resource budget (e.g., energy in constrained 1370 devices). 1372 There are three common approaches for determining the epoch of 1373 Evidence or an Attestation Result. 1375 10.1. Explicit Timekeeping using Synchronized Clocks 1377 The first approach is to rely on synchronized and trustworthy clocks, 1378 and include a signed timestamp (see [I-D.birkholz-rats-tuda]) along 1379 with the Claims in the Evidence or Attestation Result. Timestamps 1380 can also be added on a per-Claim basis to distinguish the time of 1381 generation of the Evidence or Attestation Result from the time that a 1382 specific Claim was generated.
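The comparison performed by the appraising entity in this approach can be sketched as follows; the function name and the five-minute threshold are hypothetical, and the threshold must also absorb the maximum permitted clock skew between the parties (Section 16 gives worked examples):

```python
import time

def is_fresh(signed_time: float, appraisal_time: float,
             threshold_s: float = 300.0) -> bool:
    """Accept Evidence or an Attestation Result only if the signed
    timestamp it carries is within threshold_s seconds of the
    appraiser's own clock.  The threshold is a local policy choice."""
    return (appraisal_time - signed_time) < threshold_s

now = time.time()
assert is_fresh(now - 60, now)        # produced one minute ago: fresh
assert not is_fresh(now - 3600, now)  # an hour old: stale under this policy
```

The same comparison applies to Attestation Results, with the Relying Party's clock and threshold in place of the Verifier's.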
The clock's trustworthiness can 1383 generally be established via Endorsements and typically requires 1384 additional Claims about the signer's time synchronization mechanism. 1386 In some use cases, however, a trustworthy clock might not be 1387 available. For example, in many Trusted Execution Environments 1388 (TEEs) today, a clock is only available outside the TEE and so cannot 1389 be trusted by the TEE. 1391 10.2. Implicit Timekeeping using Nonces 1393 A second approach places the onus of timekeeping solely on the 1394 Verifier (for Evidence) or the Relying Party (for Attestation 1395 Results), and might be suitable, for example, in cases where the Attester 1396 does not have a trustworthy clock or time synchronization is 1397 otherwise impaired. In this approach, a non-predictable nonce is 1398 sent by the appraising entity, and the nonce is then signed and 1399 included along with the Claims in the Evidence or Attestation Result. 1400 After checking that the sent and received nonces are the same, the 1401 appraising entity knows that the Claims were signed after the nonce 1402 was generated. This allows associating a "rough" epoch with the 1403 Evidence or Attestation Result. In this case the epoch is said to be 1404 rough because: 1406 * The epoch applies to the entire Claim set instead of a more 1407 granular association, and 1409 * The time between the creation of Claims and the collection of 1410 Claims is indistinguishable. 1412 10.3. Implicit Timekeeping using Epoch IDs 1414 A third approach relies on having epoch identifiers (or "IDs") 1415 periodically sent to both the sender and receiver of Evidence or 1416 Attestation Results by some "Epoch ID Distributor". 1418 Epoch IDs are different from nonces as they can be used more than 1419 once and can even be used by more than one entity at the same time.
1420 Epoch IDs are different from timestamps as they do not have to convey 1421 information about a point in time, i.e., they are not necessarily 1422 monotonically increasing integers. 1424 Like the nonce approach, this allows associating a "rough" epoch 1425 without requiring a trustworthy clock or time synchronization in 1426 order to generate or appraise the freshness of Evidence or 1427 Attestation Results. Only the Epoch ID Distributor requires access 1428 to a clock so it can periodically send new epoch IDs. 1430 The most recent epoch ID is included in the produced Evidence or 1431 Attestation Results, and the appraising entity can compare the epoch 1432 ID in received Evidence or Attestation Results against the latest 1433 epoch ID it received from the Epoch ID Distributor to determine if it 1434 is within the current epoch. An actual solution also needs to take 1435 into account race conditions when transitioning to a new epoch, such 1436 as by using a counter signed by the Epoch ID Distributor as the epoch 1437 ID, by including both the current and previous epoch IDs in 1438 messages and/or checks, by requiring retries in case of mismatching 1439 epoch IDs, or by buffering incoming messages that might be associated 1440 with an epoch ID that the receiver has not yet obtained. 1442 More generally, in order to prevent an appraising entity from 1443 generating false negatives (e.g., discarding Evidence that is deemed 1444 stale even if it is not), the appraising entity should keep an "epoch 1445 window" consisting of the most recently received epoch IDs. The 1446 depth of such an epoch window is directly proportional to the maximum 1447 network propagation delay between the first to receive the epoch ID 1448 and the last to receive the epoch ID, and inversely 1449 proportional to the epoch duration.
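Such an epoch window might be maintained as a small bounded buffer of the most recently distributed IDs; the following is a minimal sketch, with all names hypothetical:

```python
from collections import deque

class EpochWindow:
    """Retains the last `depth` epoch IDs received from the Epoch ID
    Distributor, so that messages produced just before an epoch
    transition are not falsely rejected as stale."""
    def __init__(self, depth: int = 2):
        self._ids = deque(maxlen=depth)  # oldest IDs fall off automatically

    def observe(self, epoch_id: bytes) -> None:
        # Called whenever a new epoch ID arrives from the Distributor.
        self._ids.append(epoch_id)

    def is_acceptable(self, epoch_id: bytes) -> bool:
        # A message is acceptable if its epoch ID matches any ID in
        # the window, not only the most recent one.
        return epoch_id in self._ids

w = EpochWindow(depth=2)
w.observe(b"epoch-41")
w.observe(b"epoch-42")
assert w.is_acceptable(b"epoch-42")      # current epoch
assert w.is_acceptable(b"epoch-41")      # previous epoch, still in window
assert not w.is_acceptable(b"epoch-40")  # fell out of the window: stale
```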
The appraising entity shall 1450 compare the epoch ID carried in the received Evidence or Attestation 1451 Result with the epoch IDs in its epoch window to find a suitable 1452 match. 1454 Whereas the nonce approach typically requires the appraising entity 1455 to keep state for each nonce generated, the epoch ID approach 1456 minimizes the state kept to be independent of the number of Attesters 1457 or Verifiers from which it expects to receive Evidence or Attestation 1458 Results, as long as all use the same Epoch ID Distributor. 1460 10.4. Discussion 1462 Implicit and explicit timekeeping can be combined into hybrid 1463 mechanisms. For example, if clocks exist and are considered 1464 trustworthy but are not synchronized, a nonce-based exchange may be 1465 used to determine the (relative) time offset between the involved 1466 peers, followed by any number of timestamp based exchanges. 1468 It is important to note that the actual values in Claims might have 1469 been generated long before the Claims are signed. If so, it is the 1470 signer's responsibility to ensure that the values are still correct 1471 when they are signed. For example, values generated at boot time 1472 might have been saved to secure storage until network connectivity is 1473 established to the remote Verifier and a nonce is obtained. 1475 A more detailed discussion with examples appears in Section 16. 1477 For a discussion on the security of epoch IDs see Section 12.3. 1479 11. Privacy Considerations 1481 The conveyance of Evidence and the resulting Attestation Results 1482 reveal a great deal of information about the internal state of a 1483 device as well as potentially any users of the device. In many 1484 cases, the whole point of attestation procedures is to provide 1485 reliable information about the type of the device and the firmware/ 1486 software that the device is running. This information might be 1487 particularly interesting to many attackers. 
For example, knowing 1488 that a device is running a weak version of firmware provides a way to 1489 aim attacks better. 1491 Many Claims in Evidence and Attestation Results are potentially 1492 Personally Identifying Information (PII), depending on the end-to-end 1493 use case of the remote attestation procedure. Remote attestation 1494 that extends up into containers and applications, e.g., on a blood 1495 pressure monitor, may further reveal details about specific systems 1496 or users. 1498 In some cases, an attacker may be able to make inferences about the 1499 contents of Evidence from the resulting effects or timing of the 1500 processing. For example, an attacker might be able to infer the 1501 value of specific Claims if it knew that only certain values were 1502 accepted by the Relying Party. 1504 Evidence and Attestation Results are expected to be integrity 1505 protected (i.e., either via signing or a secure channel) and 1506 optionally might be confidentiality protected via encryption. If 1507 confidentiality protection via encryption of the conceptual messages 1508 is omitted or unavailable, the protocols that convey Evidence 1509 or Attestation Results are responsible for detailing what kinds of 1510 information are disclosed, and to whom they are exposed. 1512 As Evidence might contain sensitive or confidential information, 1513 Attesters are responsible for only sending such Evidence to trusted 1514 Verifiers. Some Attesters might want a stronger level of assurance 1515 of the trustworthiness of a Verifier before sending Evidence to it. 1516 In such cases, an Attester can first act as a Relying Party, ask 1517 for the Verifier's own Attestation Result, and appraise it just as 1518 a Relying Party would appraise an Attestation Result for any other 1519 purpose. 1521 Another approach to deal with Evidence is to remove PII from the 1522 Evidence while still being able to verify that the Attester is one of 1523 a large set.
This approach is often called "Direct Anonymous 1524 Attestation". See [CCC-DeepDive] section 6.2 for more discussion. 1526 12. Security Considerations 1528 12.1. Attester and Attestation Key Protection 1530 Implementers need to pay close attention to the protection of the 1531 Attester and the manufacturing processes for provisioning attestation 1532 key material. If either of these is compromised, intended levels of 1533 assurance for RATS are compromised because attackers can forge 1534 Evidence or manipulate the Attesting Environment. For example, the 1535 two environments should be isolated from each other in some way so 1536 that a Target Environment cannot tamper with the Attesting 1537 Environment that measures it. 1539 Remote attestation applies to use cases with a range of security 1540 requirements, so the protections discussed here range from low to 1541 high security: low security may be limited to application or 1542 process isolation by the device's operating system, while high security 1543 may involve specialized hardware to defend against physical attacks 1544 on a chip. 1546 12.1.1. On-Device Attester and Key Protection 1548 It is assumed that an Attesting Environment is sufficiently isolated 1549 from the Target Environment it collects Claims about and that it 1550 signs the resulting Claims set with an attestation key, so that the 1551 Target Environment cannot forge Evidence about itself. Such an 1552 isolated environment might be provided by a process, a dedicated 1553 chip, a TEE, a virtual machine, or another secure mode of operation. 1554 The Attesting Environment must be protected from unauthorized 1555 modification to ensure it behaves correctly. Confidentiality 1556 protection of the Attesting Environment's signing key is vital so it 1557 cannot be misused to forge Evidence.
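The signing relationship described above can be illustrated with a short sketch. This is hypothetical: an HMAC stands in for the asymmetric attestation key a real Attesting Environment would typically use, and all names are invented.

```python
import hashlib
import hmac
import json

# Held only by the Attesting Environment; the Target Environment must
# never be able to read or use this key.
ATTESTATION_KEY = b"provisioned-at-manufacture"

def produce_evidence(claims: dict) -> dict:
    """The Attesting Environment signs the Claims set it collected
    about the Target Environment."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": tag}

def verify_evidence(evidence: dict) -> bool:
    # With an HMAC the verifier shares the key; with a real asymmetric
    # attestation key, the Verifier would hold only the public half.
    payload = json.dumps(evidence["claims"], sort_keys=True).encode()
    expected = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, evidence["signature"])

ev = produce_evidence({"fw_version": "1.2.3", "secure_boot": True})
assert verify_evidence(ev)
ev["claims"]["secure_boot"] = False  # the Target Environment cannot forge Evidence
assert not verify_evidence(ev)
```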
1559 In many cases the user or owner of a device that takes on the role of 1560 Attester must not be able to modify or extract keys from its 1561 Attesting Environments. For example, the owner or user of a mobile 1562 phone or FIDO authenticator might not be trusted to use the keys to 1563 report Evidence about the environment that protects the keys. An 1564 essential value-add provided by RATS is for the Relying Party to be 1565 able to trust the Attester even if the user or owner is not trusted. 1567 Measures for a minimally protected system might include process or 1568 application isolation provided by a high-level operating system, and 1569 restricted access to root or system privileges. In contrast, for 1570 very simple single-use devices that don't use a protected-mode 1571 operating system, like a Bluetooth speaker, the only effective 1572 isolation might be the sturdy housing of the device. 1574 Measures for a moderately protected system could include a special 1575 restricted operating environment, such as a TEE. In this case, only 1576 security-oriented software has access to the Attester and key 1577 material. 1579 Measures for a highly protected system could include specialized 1580 hardware that is used to provide protection against chip decapping 1581 attacks, power supply and clock glitching, fault injection, and RF 1582 and power side-channel attacks. 1584 12.1.2. Attestation Key Provisioning Processes 1586 Attestation key provisioning is the process that occurs in the 1587 factory or elsewhere to establish signing key material on the device 1588 and the validation key material off the device. Sometimes this 1589 procedure is referred to as personalization or customization. 1591 One way to provision key material is to first generate it external to 1592 the device and then copy the key onto the device. In this case, 1593 confidentiality protection of the generator, as well as of the path 1594 over which the key is provisioned, is necessary.
The manufacturer 1595 needs to take care to protect corresponding key material with 1596 measures appropriate for its value. 1598 Confidentiality protection can be realized via physical provisioning 1599 facility security involving no encryption at all. For low-security 1600 use cases, this might be simply locking doors and limiting personnel 1601 that can enter the facility. For high-security use cases, this might 1602 involve a special area of the facility accessible only to select 1603 security-trained personnel. 1605 Typically, cryptography is used to enable confidentiality protection. 1606 This can result in recursive problems, as the key material used to 1607 provision attestation keys must again somehow have been provisioned 1608 securely beforehand (requiring an additional level of protection, and 1609 so on). 1611 In general, a combination of some physical security measures and some 1612 cryptographic measures is used to establish confidentiality 1613 protection. 1615 Another way to provision key material is to generate it on the device 1616 and export the validation key. If public-key cryptography is being 1617 used, then only integrity is necessary. Confidentiality of public 1618 keys is not necessary. 1620 In all cases, attestation key provisioning must ensure that only 1621 attestation key material that is generated by a valid Endorser is 1622 established in Attesters. For many use cases, this will involve 1623 physical security at the facility, to prevent unauthorized devices 1624 from being manufactured that may be counterfeit or incorrectly 1625 configured. 1627 12.2. 
Integrity Protection 1629 Any solution that conveys information used for security purposes, 1630 whether such information is in the form of Evidence, Attestation 1631 Results, Endorsements, or appraisal policy, must support end-to-end 1632 integrity protection and replay attack prevention, and often also 1633 needs to support additional security properties, including: 1635 * end-to-end encryption, 1637 * denial of service protection, 1639 * authentication, 1641 * auditing, 1642 * fine-grained access controls, and 1644 * logging. 1646 Section 10 discusses ways in which freshness can be used in this 1647 architecture to protect against replay attacks. 1649 To assess the security provided by a particular appraisal policy, it 1650 is important to understand the strength of the root of trust, e.g., 1651 whether it is mutable software, or firmware that is read-only after 1652 boot, or immutable hardware/ROM. 1654 It is also important that the appraisal policy was itself obtained 1655 securely. If an attacker can configure appraisal policies for a 1656 Relying Party or for a Verifier, then the integrity of the process is 1657 compromised. 1659 Security protections in RATS may be applied at different layers, 1660 whether by a conveyance protocol or an information encoding format. 1661 This architecture expects conceptual messages (see Section 8) to be 1662 end-to-end protected based on the role interaction context. For 1663 example, if an Attester produces Evidence that is relayed through 1664 some other entity that doesn't implement the Attester or the intended 1665 Verifier roles, then the relaying entity should not expect to have 1666 access to the Evidence. 1668 12.3. Epoch ID-based Attestation 1670 Epoch IDs, described in Section 10.3, can be tampered with, replayed, 1671 dropped, delayed, and reordered by an attacker.
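One countermeasure mentioned in Section 10.3, using a counter signed by the Epoch ID Distributor as the epoch ID, can be sketched as follows; this is hypothetical, a MAC stands in for a real digital signature, and all names are invented:

```python
import hashlib
import hmac
import struct

DISTRIBUTOR_KEY = b"epoch-id-distributor-key"  # hypothetical signing key

def issue_epoch_id(counter: int) -> bytes:
    """The Epoch ID Distributor authenticates a monotonically
    increasing counter, so receivers can detect tampering and
    reordering of epoch IDs."""
    body = struct.pack(">Q", counter)
    return body + hmac.new(DISTRIBUTOR_KEY, body, hashlib.sha256).digest()

def accept_epoch_id(epoch_id: bytes, last_counter: int) -> int:
    """A receiver verifies origin authentication and monotonicity
    before adopting a new epoch; returns the new counter value."""
    body, tag = epoch_id[:8], epoch_id[8:]
    expected = hmac.new(DISTRIBUTOR_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("epoch ID failed origin authentication")
    counter = struct.unpack(">Q", body)[0]
    if counter <= last_counter:
        raise ValueError("replayed or reordered epoch ID")
    return counter

assert accept_epoch_id(issue_epoch_id(7), last_counter=6) == 7
```

A deployment would also need the transport-level protections discussed below, since origin authentication alone cannot detect that epoch IDs are being dropped or delayed.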
1673 An attacker could be either external or belong to the distribution 1674 group, for example, if one of the Attester entities has been 1675 compromised. 1677 An attacker who is able to tamper with epoch IDs can potentially lock 1678 all the participants into a certain epoch of its choice forever, 1679 effectively freezing time. This is problematic since it destroys the 1680 ability to ascertain freshness of Evidence and Attestation Results. 1682 To mitigate this threat, the transport should be at least integrity 1683 protected and provide origin authentication. 1685 Selective dropping of epoch IDs is equivalent to pinning the victim 1686 node to a past epoch. An attacker could drop epoch IDs to only some 1687 entities and not others, which will typically result in a denial of 1688 service due to the permanent staleness of the Attestation Result or 1689 Evidence. 1691 Delaying or reordering epoch IDs is equivalent to manipulating the 1692 victim's timeline at will. This ability could be used by a malicious 1693 actor (e.g., a compromised router) to mount a confusion attack where, 1694 for example, a Verifier is tricked into accepting Evidence coming 1695 from a past epoch as fresh, while in the meantime the Attester has 1696 been compromised. 1698 Reordering and dropping attacks are mitigated if the transport 1699 provides the ability to detect reordering and drops. However, the 1700 delay attack described above cannot be thwarted in this manner. 1702 13. IANA Considerations 1704 This document does not require any actions by IANA. 1706 14. Acknowledgments 1708 Special thanks go to Joerg Borchert, Nancy Cam-Winget, Jessica 1709 Fitzgerald-McKay, Diego Lopez, Laurence Lundblade, Paul Rowe, Hannes 1710 Tschofenig, Frank Xia, and David Wooten. 1712 15. Notable Contributions 1714 Thomas Hardjono created initial versions of the terminology section 1715 in collaboration with Ned Smith.
Eric Voit provided the conceptual 1716 separation between Attestation Provision Flows and Attestation 1717 Evidence Flows. Monty Wiseman created the content structure of the 1718 first three architecture drafts. Carsten Bormann provided many of 1719 the motivational building blocks with respect to the Internet Threat 1720 Model. 1722 16. Appendix A: Time Considerations 1724 The table below defines a number of relevant events, with an ID that 1725 is used in subsequent diagrams. The times of said events might be 1726 defined in terms of an absolute clock time, such as the Coordinated 1727 Universal Time timescale, or might be defined relative to some other 1728 timestamp or timeticks counter, such as a clock resetting its epoch 1729 each time it is powered on. 1731 +====+============+=================================================+ 1732 | ID | Event | Explanation of event | 1733 +====+============+=================================================+ 1734 | VG | Value | A value to appear in a Claim was created. | 1735 | | generated | In some cases, a value may have technically | 1736 | | | existed before an Attester became aware of | 1737 | | | it but the Attester might have no idea how | 1738 | | | long it has had that value. In such a | 1739 | | | case, the Value created time is the time at | 1740 | | | which the Claim containing the copy of the | 1741 | | | value was created. | 1742 +----+------------+-------------------------------------------------+ 1743 | NS | Nonce sent | A nonce not predictable to an Attester | 1744 | | | (recentness & uniqueness) is sent to an | 1745 | | | Attester. | 1746 +----+------------+-------------------------------------------------+ 1747 | NR | Nonce | A nonce is relayed to an Attester by | 1748 | | relayed | another entity. | 1749 +----+------------+-------------------------------------------------+ 1750 | IR | Epoch ID | An epoch ID is successfully received and | 1751 | | received | processed by an entity.
| 1752 +----+------------+-------------------------------------------------+ 1753 | EG | Evidence | An Attester creates Evidence from collected | 1754 | | generation | Claims. | 1755 +----+------------+-------------------------------------------------+ 1756 | ER | Evidence | A Relying Party relays Evidence to a | 1757 | | relayed | Verifier. | 1758 +----+------------+-------------------------------------------------+ 1759 | RG | Result | A Verifier appraises Evidence and generates | 1760 | | generation | an Attestation Result. | 1761 +----+------------+-------------------------------------------------+ 1762 | RR | Result | An Attestation Result is relayed to a | 1763 | | relayed | Relying Party. | 1764 +----+------------+-------------------------------------------------+ 1765 | RA | Result | The Relying Party appraises Attestation | 1766 | | appraised | Results. | 1767 +----+------------+-------------------------------------------------+ 1768 | OP | Operation | The Relying Party performs some operation | 1769 | | performed | requested by the Attester via a resource | 1770 | | | access protocol as depicted in Figure 8, | 1771 | | | e.g., across a session created earlier at | 1772 | | | time(RA). | 1773 +----+------------+-------------------------------------------------+ 1774 | RX | Result | An Attestation Result should no longer be | 1775 | | expiry | accepted, according to the Verifier that | 1776 | | | generated it. | 1777 +----+------------+-------------------------------------------------+ 1779 Table 1 1781 Using the table above, a number of hypothetical examples of how a 1782 solution might be built are illustrated below. This list is not 1783 intended to be complete, but is just representative enough to 1784 highlight various timing considerations. 1786 All times are relative to the local clocks, indicated by an "_a" 1787 (Attester), "_v" (Verifier), or "_r" (Relying Party) suffix.
1789 Times with an appended Prime (') indicate a second instance of the 1790 same event. 1792 How and if clocks are synchronized depends upon the model. 1794 In the figures below, curly braces indicate containment. For 1795 example, the notation Evidence{foo} indicates that 'foo' is contained 1796 in the Evidence and is thus covered by its signature. 1798 16.1. Example 1: Timestamp-based Passport Model Example 1800 The following example illustrates a hypothetical Passport Model 1801 solution that uses timestamps and requires roughly synchronized 1802 clocks between the Attester, Verifier, and Relying Party, which 1803 depends on using a secure clock synchronization mechanism. As a 1804 result, the receiver of a conceptual message containing a timestamp 1805 can directly compare it to its own clock and timestamps. 1807 .----------. .----------. .---------------. 1808 | Attester | | Verifier | | Relying Party | 1809 '----------' '----------' '---------------' 1810 time(VG_a) | | 1811 | | | 1812 ~ ~ ~ 1813 | | | 1814 time(EG_a) | | 1815 |------Evidence{time(EG_a)}------>| | 1816 | time(RG_v) | 1817 |<-----Attestation Result---------| | 1818 | {time(RG_v),time(RX_v)} | | 1819 ~ ~ 1820 | | 1821 |----Attestation Result{time(RG_v),time(RX_v)}-->time(RA_r) 1822 | | 1823 ~ ~ 1824 | | 1825 | time(OP_r) 1827 The Verifier can check whether the Evidence is fresh when appraising 1828 it at time(RG_v) by checking "time(RG_v) - time(EG_a) < Threshold", 1829 where the Verifier's threshold is large enough to account for the 1830 maximum permitted clock skew between the Verifier and the Attester. 1832 If time(VG_a) is also included in the Evidence along with the Claim 1833 value generated at that time, and the Verifier decides that it can 1834 trust the time(VG_a) value, the Verifier can also determine whether 1835 the Claim value is recent by checking "time(RG_v) - time(VG_a) < 1836 Threshold". 
The threshold is decided by the Appraisal Policy for 1837 Evidence, and again needs to take into account the maximum permitted 1838 clock skew between the Verifier and the Attester. 1840 The Relying Party can check whether the Attestation Result is fresh 1841 when appraising it at time(RA_r) by checking "time(RA_r) - time(RG_v) 1842 < Threshold", where the Relying Party's threshold is large enough to 1843 account for the maximum permitted clock skew between the Relying 1844 Party and the Verifier. The result might then be used for some time 1845 (e.g., throughout the lifetime of a connection established at 1846 time(RA_r)). The Relying Party must be careful, however, to not 1847 allow continued use beyond the period for which it deems the 1848 Attestation Result to remain fresh enough. Thus, it might allow use 1849 (at time(OP_r)) as long as "time(OP_r) - time(RG_v) < Threshold". 1850 However, if the Attestation Result contains an expiry time time(RX_v) 1851 then it could explicitly check "time(OP_r) < time(RX_v)". 1853 16.2. Example 2: Nonce-based Passport Model Example 1855 The following example illustrates a hypothetical Passport Model 1856 solution that uses nonces instead of timestamps. Compared to the 1857 timestamp-based example, it requires an extra round trip to retrieve 1858 a nonce, and requires that the Verifier and Relying Party track state 1859 to remember the nonce for some period of time. 1861 The advantage is that it does not require that any clocks are 1862 synchronized. As a result, the receiver of a conceptual message 1863 containing a timestamp cannot directly compare it to its own clock or 1864 timestamps. Thus we use a suffix ("a" for Attester, "v" for 1865 Verifier, and "r" for Relying Party) on the IDs below indicating 1866 which clock generated them, since times from different clocks cannot 1867 be compared. Only the delta between two events from the sender can 1868 be used by the receiver. 1870 .----------. .----------. .---------------. 
1871 | Attester | | Verifier | | Relying Party | 1872 '----------' '----------' '---------------' 1873 time(VG_a) | | 1874 | | | 1875 ~ ~ ~ 1876 | | | 1877 |<--Nonce1---------------------time(NS_v) | 1878 time(EG_a) | | 1879 |---Evidence--------------------->| | 1880 | {Nonce1, time(EG_a)-time(VG_a)} | | 1881 | time(RG_v) | 1882 |<--Attestation Result------------| | 1883 | {time(RX_v)-time(RG_v)} | | 1884 ~ ~ 1885 | | 1886 |<--Nonce2-------------------------------------time(NS_r) 1887 time(RR_a) | 1888 |--[Attestation Result{time(RX_v)-time(RG_v)}, -->|time(RA_r) 1889 | Nonce2, time(RR_a)-time(EG_a)] | 1890 ~ ~ 1891 | | 1892 | time(OP_r) 1894 In this example solution, the Verifier can check whether the Evidence 1895 is fresh at "time(RG_v)" by verifying that "time(RG_v)-time(NS_v) < 1896 Threshold". 1898 The Verifier cannot, however, simply rely on a Nonce to determine 1899 whether the value of a Claim is recent, since the Claim value might 1900 have been generated long before the nonce was sent by the Verifier. 1901 However, if the Verifier decides that the Attester can be trusted to 1902 correctly provide the delta "time(EG_a)-time(VG_a)", then it can 1903 determine recency by checking "time(RG_v)-time(NS_v) + time(EG_a)- 1904 time(VG_a) < Threshold". 1906 Similarly if, based on an Attestation Result from a Verifier it 1907 trusts, the Relying Party decides that the Attester can be trusted to 1908 correctly provide time deltas, then it can determine whether the 1909 Attestation Result is fresh by checking "time(OP_r)-time(NS_r) + 1910 time(RR_a)-time(EG_a) < Threshold". Although the Nonce2 and 1911 "time(RR_a)-time(EG_a)" values cannot be inside the Attestation 1912 Result, they might be signed by the Attester such that the 1913 Attestation Result vouches for the Attester's signing capability. 1915 The Relying Party must still be careful, however, to not allow 1916 continued use beyond the period for which it deems the Attestation 1917 Result to remain valid. 
Thus, if the Attestation Result conveys a 1918 validity lifetime in terms of "time(RX_v)-time(RG_v)", then the 1919 Relying Party can check "time(OP_r)-time(NS_r) < time(RX_v)- 1920 time(RG_v)". 1922 16.3. Example 3: Epoch ID-based Passport Model Example 1924 The example in Figure 10 illustrates a hypothetical Passport Model 1925 solution that uses epoch IDs instead of nonces or timestamps. 1927 The Epoch ID Distributor broadcasts epoch ID "I", which starts a new 1928 epoch "E" for a protocol participant upon reception at "time(IR)". 1930 The Attester generates Evidence incorporating epoch ID "I" and 1931 conveys it to the Verifier. 1933 The Verifier appraises that the received epoch ID "I" is "fresh" 1934 according to the definition provided in Section 10.3, whereby retries 1935 are required in the case of mismatching epoch IDs, and generates an 1936 Attestation Result. The Attestation Result is conveyed to the 1937 Attester. 1939 After the transmission of epoch ID "I'", a new epoch "E'" is 1940 established when "I'" is received by each protocol participant. The 1941 Attester relays the Attestation Result obtained during epoch "E" 1942 (associated with epoch ID "I") to the Relying Party using the epoch 1943 ID for the current epoch, "I'". If the Relying Party had not yet 1944 received "I'", then the Attestation Result would be rejected, but in 1945 this example, it is received. 1947 In the illustrated scenario, the epoch ID used to relay the Attestation 1948 Result to the Relying Party is current, while a previous epoch ID was 1949 used to generate the Evidence appraised by the Verifier. This indicates that at 1950 least one epoch transition has occurred, and the Attestation Results 1951 may only be as fresh as the previous epoch.
If the Relying Party 1952 remembers the previous epoch ID "I" during an epoch window as 1953 discussed in Section 10.3, and the message is received during that 1954 window, the Attestation Result is accepted as fresh, and otherwise it 1955 is rejected as stale. 1957 .-------------. 1958 .----------. | Epoch ID | .----------. .---------------. 1959 | Attester | | Distributor | | Verifier | | Relying Party | 1960 '----------' '-------------' '----------' '---------------' 1961 time(VG_a) | | | 1962 | | | | 1963 ~ ~ ~ ~ 1964 | | | | 1965 time(IR_a)<------I--+--I--------time(IR_v)----->time(IR_r) 1966 | | | | 1967 time(EG_a) | | | 1968 |---Evidence--------------------->| | 1969 | {I,time(EG_a)-time(VG_a)} | | 1970 | | | | 1971 | | time(RG_v) | 1972 |<--Attestation Result------------| | 1973 | {I,time(RX_v)-time(RG_v)} | | 1974 | | | | 1975 time(IR'_a)<-----I'-+--I'-------time(IR'_v)---->time(IR'_r) 1976 | | | | 1977 |---[Attestation Result--------------------->time(RA_r) 1978 | {I,time(RX_v)-time(RG_v)},I'] | | 1979 | | | | 1980 ~ ~ ~ ~ 1981 | | | | 1982 | | | time(OP_r) 1984 Figure 10: Epoch ID-based Passport Model 1986 16.4. Example 4: Timestamp-based Background-Check Model Example 1988 The following example illustrates a hypothetical Background-Check 1989 Model solution that uses timestamps and requires roughly synchronized 1990 clocks between the Attester, Verifier, and Relying Party. 1992 .----------. .---------------. .----------. 1993 | Attester | | Relying Party | | Verifier | 1994 '----------' '---------------' '----------' 1995 time(VG_a) | | 1996 | | | 1997 ~ ~ ~ 1998 | | | 1999 time(EG_a) | | 2000 |----Evidence------->| | 2001 | {time(EG_a)} time(ER_r)--Evidence{time(EG_a)}->| 2002 | | time(RG_v) 2003 | time(RA_r)<-Attestation Result---| 2004 | | {time(RX_v)} | 2005 ~ ~ ~ 2006 | | | 2007 | time(OP_r) | 2009 The time considerations in this example are equivalent to those 2010 discussed under Example 1 above. 2012 16.5. 
Example 5: Nonce-based Background-Check Model Example

   The following example illustrates a hypothetical Background-Check
   Model solution that uses nonces and thus does not require that any
   clocks are synchronized.  In this example solution, a nonce is
   generated by a Verifier at the request of a Relying Party, when the
   Relying Party needs to send one to an Attester.

   .----------.     .---------------.     .----------.
   | Attester |     | Relying Party |     | Verifier |
   '----------'     '---------------'     '----------'
   time(VG_a)              |                    |
        |                  |                    |
        ~                  ~                    ~
        |                  |                    |
        |                  |<-------Nonce------time(NS_v)
        |<---Nonce--------time(NR_r)            |
   time(EG_a)              |                    |
        |----Evidence{Nonce}--->|               |
        |             time(ER_r)--Evidence{Nonce}--->|
        |                  |                time(RG_v)
        |             time(RA_r)<-Attestation Result-|
        |                  | {time(RX_v)-time(RG_v)} |
        ~                  ~                    ~
        |                  |                    |
        |             time(OP_r)                |

   The Verifier can check whether the Evidence is fresh, and whether a
   Claim value is recent, the same as in Example 2 above.

   However, unlike in Example 2, the Relying Party can use the Nonce to
   determine whether the Attestation Result is fresh, by verifying that
   "time(OP_r)-time(NR_r) < Threshold".

   The Relying Party must still be careful, however, to not allow
   continued use beyond the period for which it deems the Attestation
   Result to remain valid.  Thus, if the Attestation Result contains a
   validity lifetime in terms of "time(RX_v)-time(RG_v)", then the
   Relying Party can check "time(OP_r)-time(ER_r) < time(RX_v)-
   time(RG_v)".

17.  References

17.1.  Normative References

   [RFC7519]  Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token
              (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015,
              <https://www.rfc-editor.org/info/rfc7519>.

   [RFC8392]  Jones, M., Wahlstroem, E., Erdtman, S., and H.
              Tschofenig, "CBOR Web Token (CWT)", RFC 8392,
              DOI 10.17487/RFC8392, May 2018,
              <https://www.rfc-editor.org/info/rfc8392>.

17.2.
Informative References

   [CCC-DeepDive]
              Confidential Computing Consortium, "Confidential
              Computing Deep Dive", n.d., .

   [CTAP]     FIDO Alliance, "Client to Authenticator Protocol", n.d.,
              .

   [I-D.birkholz-rats-tuda]
              Fuchs, A., Birkholz, H., McDonald, I., and C. Bormann,
              "Time-Based Uni-Directional Attestation", Work in
              Progress, Internet-Draft, draft-birkholz-rats-tuda-04,
              13 January 2021, <https://datatracker.ietf.org/doc/html/
              draft-birkholz-rats-tuda-04>.

   [I-D.birkholz-rats-uccs]
              Birkholz, H., O'Donoghue, J., Cam-Winget, N., and C.
              Bormann, "A CBOR Tag for Unprotected CWT Claims Sets",
              Work in Progress, Internet-Draft, draft-birkholz-rats-
              uccs-02, 2 December 2020,
              <https://datatracker.ietf.org/doc/html/draft-birkholz-
              rats-uccs-02>.

   [I-D.ietf-teep-architecture]
              Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler,
              "Trusted Execution Environment Provisioning (TEEP)
              Architecture", Work in Progress, Internet-Draft, draft-
              ietf-teep-architecture-13, 2 November 2020,
              <https://datatracker.ietf.org/doc/html/draft-ietf-teep-
              architecture-13>.

   [I-D.tschofenig-tls-cwt]
              Tschofenig, H. and M. Brossard, "Using CBOR Web Tokens
              (CWTs) in Transport Layer Security (TLS) and Datagram
              Transport Layer Security (DTLS)", Work in Progress,
              Internet-Draft, draft-tschofenig-tls-cwt-02, 13 July
              2020, <https://datatracker.ietf.org/doc/html/draft-
              tschofenig-tls-cwt-02>.

   [OPCUA]    OPC Foundation, "OPC Unified Architecture Specification,
              Part 2: Security Model, Release 1.03", OPC 10000-2,
              25 November 2015, .

   [RFC4949]  Shirey, R., "Internet Security Glossary, Version 2",
              FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007,
              <https://www.rfc-editor.org/info/rfc4949>.

   [RFC5209]  Sangster, P., Khosravi, H., Mani, M., Narayan, K., and J.
              Tardo, "Network Endpoint Assessment (NEA): Overview and
              Requirements", RFC 5209, DOI 10.17487/RFC5209, June 2008,
              <https://www.rfc-editor.org/info/rfc5209>.

   [RFC8322]  Field, J., Banghart, S., and D. Waltermire, "Resource-
              Oriented Lightweight Information Exchange (ROLIE)",
              RFC 8322, DOI 10.17487/RFC8322, February 2018,
              <https://www.rfc-editor.org/info/rfc8322>.

   [strengthoffunction]
              NISC, "Strength of Function", n.d., .
   [TCG-DICE] Trusted Computing Group, "DICE Certificate Profiles",
              n.d., .

   [TCGarch]  Trusted Computing Group, "Trusted Platform Module Library
              - Part 1: Architecture", 8 November 2019, .

   [WebAuthN] W3C, "Web Authentication: An API for accessing Public Key
              Credentials", n.d., .

Contributors

   Monty Wiseman

   Email: montywiseman32@gmail.com

   Liang Xia

   Email: frank.xialiang@huawei.com

   Laurence Lundblade

   Email: lgl@island-resort.com

   Eliot Lear

   Email: elear@cisco.com

   Jessica Fitzgerald-McKay

   Sarah C. Helble

   Andrew Guinn

   Peter Loscocco

   Email: pete.loscocco@gmail.com

   Eric Voit

   Thomas Fossati

   Email: thomas.fossati@arm.com

   Paul Rowe

   Carsten Bormann

   Email: cabo@tzi.org

   Giri Mandyam

   Email: mandyam@qti.qualcomm.com

   Kathleen Moriarty

   Email: kathleen.moriarty.ietf@gmail.com

   Guy Fedorkow

   Email: gfedorkow@juniper.net

   Simon Frost

   Email: Simon.Frost@arm.com

Authors' Addresses

   Henk Birkholz
   Fraunhofer SIT
   Rheinstrasse 75
   64295 Darmstadt
   Germany

   Email: henk.birkholz@sit.fraunhofer.de

   Dave Thaler
   Microsoft
   United States of America

   Email: dthaler@microsoft.com

   Michael Richardson
   Sandelman Software Works
   Canada

   Email: mcr+ietf@sandelman.ca

   Ned Smith
   Intel Corporation
   United States of America

   Email: ned.smith@intel.com

   Wei Pan
   Huawei Technologies

   Email: william.panwei@huawei.com