idnits 2.17.1 draft-ietf-rats-architecture-10.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Line 519 has weird spacing: '... Claims v ...' -- The document date (9 February 2021) is 1165 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Outdated reference: A later version (-07) exists of draft-birkholz-rats-tuda-04 == Outdated reference: A later version (-03) exists of draft-birkholz-rats-uccs-02 == Outdated reference: A later version (-19) exists of draft-ietf-teep-architecture-13 Summary: 0 errors (**), 0 flaws (~~), 5 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 RATS Working Group H. Birkholz 3 Internet-Draft Fraunhofer SIT 4 Intended status: Informational D. Thaler 5 Expires: 13 August 2021 Microsoft 6 M. Richardson 7 Sandelman Software Works 8 N. Smith 9 Intel 10 W. 
Pan 11 Huawei Technologies 12 9 February 2021 14 Remote Attestation Procedures Architecture 15 draft-ietf-rats-architecture-10 17 Abstract 19 In network protocol exchanges it is often the case that one entity 20 requires believable evidence about the operational state of a remote 21 peer. Such evidence is typically conveyed as claims about the peer's 22 software and hardware platform, and is subsequently appraised in 23 order to assess the peer's trustworthiness. The process of 24 generating and appraising this kind of evidence is known as remote 25 attestation. This document describes an architecture for remote 26 attestation procedures that generate, convey, and appraise evidence 27 about a peer's operational state. 29 Note to Readers 31 Discussion of this document takes place on the RATS Working Group 32 mailing list (rats@ietf.org), which is archived at 33 https://mailarchive.ietf.org/arch/browse/rats/ 34 (https://mailarchive.ietf.org/arch/browse/rats/). 36 Source for this draft and an issue tracker can be found at 37 https://github.com/ietf-rats-wg/architecture (https://github.com/ 38 ietf-rats-wg/architecture). 40 Status of This Memo 42 This Internet-Draft is submitted in full conformance with the 43 provisions of BCP 78 and BCP 79. 45 Internet-Drafts are working documents of the Internet Engineering 46 Task Force (IETF). Note that other groups may also distribute 47 working documents as Internet-Drafts. The list of current Internet- 48 Drafts is at https://datatracker.ietf.org/drafts/current/. 50 Internet-Drafts are draft documents valid for a maximum of six months 51 and may be updated, replaced, or obsoleted by other documents at any 52 time. It is inappropriate to use Internet-Drafts as reference 53 material or to cite them other than as "work in progress." 55 This Internet-Draft will expire on 13 August 2021. 57 Copyright Notice 59 Copyright (c) 2021 IETF Trust and the persons identified as the 60 document authors. All rights reserved. 
62 This document is subject to BCP 78 and the IETF Trust's Legal 63 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 64 license-info) in effect on the date of publication of this document. 65 Please review these documents carefully, as they describe your rights 66 and restrictions with respect to this document. Code Components 67 extracted from this document must include Simplified BSD License text 68 as described in Section 4.e of the Trust Legal Provisions and are 69 provided without warranty as described in the Simplified BSD License. 71 Table of Contents 73 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 74 2. Reference Use Cases . . . . . . . . . . . . . . . . . . . . . 4 75 2.1. Network Endpoint Assessment . . . . . . . . . . . . . . . 4 76 2.2. Confidential Machine Learning (ML) Model Protection . . . 5 77 2.3. Confidential Data Protection . . . . . . . . . . . . . . 5 78 2.4. Critical Infrastructure Control . . . . . . . . . . . . . 6 79 2.5. Trusted Execution Environment (TEE) Provisioning . . . . 6 80 2.6. Hardware Watchdog . . . . . . . . . . . . . . . . . . . . 6 81 2.7. FIDO Biometric Authentication . . . . . . . . . . . . . . 7 82 3. Architectural Overview . . . . . . . . . . . . . . . . . . . 7 83 3.1. Appraisal Policies . . . . . . . . . . . . . . . . . . . 9 84 3.2. Reference Values . . . . . . . . . . . . . . . . . . . . 9 85 3.3. Two Types of Environments of an Attester . . . . . . . . 9 86 3.4. Layered Attestation Environments . . . . . . . . . . . . 11 87 3.5. Composite Device . . . . . . . . . . . . . . . . . . . . 13 88 3.6. Implementation Considerations . . . . . . . . . . . . . . 15 89 4. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 15 90 4.1. Roles . . . . . . . . . . . . . . . . . . . . . . . . . . 15 91 4.2. Artifacts . . . . . . . . . . . . . . . . . . . . . . . . 16 92 5. Topological Patterns . . . . . . . . . . . . . . . . . . . . 18 93 5.1. Passport Model . . . . . . . . . . . . . . 
. . . . . . . 18 94 5.2. Background-Check Model . . . . . . . . . . . . . . . . . 19 95 5.3. Combinations . . . . . . . . . . . . . . . . . . . . . . 20 96 6. Roles and Entities . . . . . . . . . . . . . . . . . . . . . 21 97 7. Trust Model . . . . . . . . . . . . . . . . . . . . . . . . . 22 98 7.1. Relying Party . . . . . . . . . . . . . . . . . . . . . . 22 99 7.2. Attester . . . . . . . . . . . . . . . . . . . . . . . . 23 100 7.3. Relying Party Owner . . . . . . . . . . . . . . . . . . . 23 101 7.4. Verifier . . . . . . . . . . . . . . . . . . . . . . . . 23 102 7.5. Endorser, Reference Value Provider, and Verifier Owner . 25 103 8. Conceptual Messages . . . . . . . . . . . . . . . . . . . . . 25 104 8.1. Evidence . . . . . . . . . . . . . . . . . . . . . . . . 25 105 8.2. Endorsements . . . . . . . . . . . . . . . . . . . . . . 26 106 8.3. Attestation Results . . . . . . . . . . . . . . . . . . . 26 107 9. Claims Encoding Formats . . . . . . . . . . . . . . . . . . . 27 108 10. Freshness . . . . . . . . . . . . . . . . . . . . . . . . . . 29 109 10.1. Explicit Timekeeping using Synchronized Clocks . . . . . 30 110 10.2. Implicit Timekeeping using Nonces . . . . . . . . . . . 30 111 10.3. Implicit Timekeeping using Epoch Handles . . . . . . . . 30 112 10.4. Discussion . . . . . . . . . . . . . . . . . . . . . . . 31 113 11. Privacy Considerations . . . . . . . . . . . . . . . . . . . 32 114 12. Security Considerations . . . . . . . . . . . . . . . . . . . 33 115 12.1. Attester and Attestation Key Protection . . . . . . . . 33 116 12.1.1. On-Device Attester and Key Protection . . . . . . . 33 117 12.1.2. Attestation Key Provisioning Processes . . . . . . . 34 118 12.2. Integrity Protection . . . . . . . . . . . . . . . . . . 35 119 12.3. Handle-based Attestation . . . . . . . . . . . . . . . . 36 120 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 36 121 14. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 37 122 15. 
Notable Contributions . . . . . . . . . . . . . . . . . . . . 37 123 16. Appendix A: Time Considerations . . . . . . . . . . . . . . . 37 124 16.1. Example 1: Timestamp-based Passport Model Example . . . 38 125 16.2. Example 2: Nonce-based Passport Model Example . . . . . 40 126 16.3. Example 3: Handle-based Passport Model Example . . . . . 42 127 16.4. Example 4: Timestamp-based Background-Check Model 128 Example . . . . . . . . . . . . . . . . . . . . . . . . 43 129 16.5. Example 5: Nonce-based Background-Check Model Example . 44 130 17. References . . . . . . . . . . . . . . . . . . . . . . . . . 45 131 17.1. Normative References . . . . . . . . . . . . . . . . . . 45 132 17.2. Informative References . . . . . . . . . . . . . . . . . 45 133 Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . 47 134 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 48 136 1. Introduction 138 In Remote Attestation Procedures (RATS), one peer (the "Attester") 139 produces believable information about itself - Evidence - to enable a 140 remote peer (the "Relying Party") to decide whether to consider that 141 Attester a trustworthy peer or not. RATS are facilitated by an 142 additional vital party, the Verifier. 144 The Verifier appraises Evidence via appraisal policies and creates 145 the Attestation Results to support Relying Parties in their decision 146 process. This document defines a flexible architecture consisting of 147 attestation roles and their interactions via conceptual messages. 148 Additionally, this document defines a universal set of terms that can 149 be mapped to various existing and emerging Remote Attestation 150 Procedures. Common topological models and the data flows associated 151 with them, such as the "Passport Model" and the "Background-Check 152 Model" are illustrated. 
The purpose is to define useful terminology 153 for attestation and enable readers to map their solution architecture 154 to the canonical attestation architecture provided here. Having a 155 common terminology that provides well-understood meanings for common 156 themes such as roles, device composition, topological models, and 157 appraisal is vital for semantic interoperability across solutions and 158 platforms involving multiple vendors and providers. 160 Amongst other things, this document is about trust and 161 trustworthiness. Trust is a choice one makes about another system. 162 Trustworthiness is a quality about the other system that can be used 163 in making one's decision to trust it or not. This is a subtle 164 difference, and being familiar with it is crucial for 165 using this document. Additionally, the concepts of freshness and 166 trust relationships with respect to RATS are elaborated on to enable 167 implementers to choose appropriate solutions to compose their Remote 168 Attestation Procedures. 170 2. Reference Use Cases 172 This section covers a number of representative use cases for remote 173 attestation, independent of specific solutions. The purpose is to 174 provide motivation for various aspects of the architecture presented 175 in this draft. Many other use cases exist, and this document does 176 not intend to have a complete list, only to have a set of use cases 177 that collectively cover all the functionality required in the 178 architecture. 180 Each use case includes a description followed by a summary of the 181 Attester and Relying Party roles. 183 2.1. Network Endpoint Assessment 185 Network operators want a trustworthy report that includes identity 186 and version information about the hardware and software on the 187 machines attached to their network, for purposes such as inventory, 188 audit, anomaly detection, record maintenance and/or trending reports 189 (logging).
The network operator may also want a policy by which full 190 access is only granted to devices that meet some definition of 191 hygiene, and so wants to get Claims about such information and verify 192 its validity. Remote attestation is desired to prevent vulnerable or 193 compromised devices from getting access to the network and 194 potentially harming others. 196 Typically, solutions start with a specific component (called a "root 197 of trust") that provides device identity and protected storage for 198 measurements. The system components perform a series of measurements 199 that may be signed by the root of trust, considered as Evidence about 200 the hardware, firmware, BIOS, software, etc. that is present. 202 Attester: A device desiring access to a network 204 Relying Party: Network equipment such as a router, switch, or access 205 point, responsible for admission of the device into the network 207 2.2. Confidential Machine Learning (ML) Model Protection 209 A device manufacturer wants to protect its intellectual property. 210 This is primarily the ML model it developed and runs in the devices 211 purchased by its customers. The goals for the protection include 212 preventing attackers, potentially the customer themselves, from 213 seeing the details of the model. 215 This typically works by having some protected environment in the 216 device go through a remote attestation with some manufacturer service 217 that can assess its trustworthiness. If remote attestation succeeds, 218 then the manufacturer service releases either the model, or a key to 219 decrypt a model the Attester already has in encrypted form, to the 220 requester. 222 Attester: A device desiring to run an ML model 224 Relying Party: A server or service holding ML models it desires to 225 protect 227 2.3. 
Confidential Data Protection 229 This is a generalization of the ML model use case above, where the 230 data can be any highly confidential data, such as health data about 231 customers, payroll data about employees, future business plans, etc. 232 As part of the attestation procedure, an assessment is made against a 233 set of policies to evaluate the state of the system that is 234 requesting the confidential data. Attestation is desired to prevent 235 leaking data to compromised devices. 237 Attester: An entity desiring to retrieve confidential data 239 Relying Party: An entity that holds confidential data for release to 240 authorized entities 242 2.4. Critical Infrastructure Control 244 In this use case, potentially harmful physical equipment (e.g., power 245 grid, traffic control, hazardous chemical processing, etc.) is 246 connected to a network. The organization managing such 247 infrastructure needs to ensure that only authorized code and users 248 can control such processes, and that these processes are protected 249 from unauthorized manipulation or other threats. When a protocol 250 operation can affect a component of a critical system, the device 251 attached to the critical equipment requires some assurances depending 252 on the security context, including that the requesting device or 253 application has not been compromised and that the requesters and actors 254 act on applicable policies. As such, remote attestation can be used 255 to accept commands only from requesters that are within policy. 257 Attester: A device or application wishing to control physical 258 equipment 260 Relying Party: A device or application connected to potentially 261 dangerous physical equipment (hazardous chemical processing, 262 traffic control, power grid, etc.) 264 2.5. Trusted Execution Environment (TEE) Provisioning 266 A "Trusted Application Manager (TAM)" server is responsible for 267 managing the applications running in the TEE of a client device.
To 268 do this, the TAM wants to assess the state of a TEE, or of 269 applications in the TEE, of a client device. The TEE conducts a 270 remote attestation procedure with the TAM, which can then decide 271 whether the TEE is already in compliance with the TAM's latest 272 policy, or if the TAM needs to uninstall, update, or install approved 273 applications in the TEE to bring it back into compliance with the 274 TAM's policy. 276 Attester: A device with a trusted execution environment capable of 277 running trusted applications that can be updated 279 Relying Party: A Trusted Application Manager 281 2.6. Hardware Watchdog 283 There is a class of malware that holds a device hostage and does not 284 allow it to reboot to prevent updates from being applied. This can 285 be a significant problem, because it allows a fleet of devices to be 286 held hostage for ransom. 288 A solution to this problem is a watchdog timer implemented in a 289 protected environment such as a Trusted Platform Module (TPM), as 290 described in [TCGarch] section 43.3. If the watchdog does not 291 receive regular, and fresh, Attestation Results as to the system's 292 health, then it forces a reboot. 294 Attester: The device that should be protected from being held 295 hostage for a long period of time 297 Relying Party: A watchdog capable of triggering a procedure that 298 resets a device into a known, good operational state. 300 2.7. FIDO Biometric Authentication 302 In the Fast IDentity Online (FIDO) protocol [WebAuthN], [CTAP], the 303 device in the user's hand authenticates the human user, whether by 304 biometrics (such as fingerprints), or by PIN and password. FIDO 305 authentication puts a large amount of trust in the device compared to 306 typical password authentication because it is the device that 307 verifies the biometric, PIN and password inputs from the user, not 308 the server. 
For the Relying Party to know that the authentication is 309 trustworthy, the Relying Party needs to know that the Authenticator 310 part of the device is trustworthy. The FIDO protocol employs remote 311 attestation for this. 313 The FIDO protocol supports several remote attestation protocols and a 314 mechanism by which new ones can be registered and added. Remote 315 attestation defined by RATS is thus a candidate for use in the FIDO 316 protocol. 318 Other biometric authentication protocols such as the Chinese IFAA 319 standard and WeChat Pay as well as Google Pay make use of attestation 320 in one form or another. 322 Attester: Every FIDO Authenticator contains an Attester. 324 Relying Party: Any web site, mobile application back-end, or service 325 that relies on authentication data based on biometric information. 327 3. Architectural Overview 329 Figure 1 depicts the data that flows between different roles, 330 independent of protocol or use case. 332 ************ ************* ************ ***************** 333 * Endorser * * Reference * * Verifier * * Relying Party * 334 ************ * Value * * Owner * * Owner * 335 | * Provider * ************ ***************** 336 | ************* | | 337 | | | | 338 |Endorsements |Reference |Appraisal |Appraisal 339 | |Values |Policy |Policy for 340 | | |for |Attestation 341 .-----------. | |Evidence |Results 342 | | | | 343 | | | | 344 v v v | 345 .---------------------------. | 346 .----->| Verifier |------. | 347 | '---------------------------' | | 348 | | | 349 | Attestation| | 350 | Results | | 351 | Evidence | | 352 | | | 353 | v v 354 .----------. .---------------. 355 | Attester | | Relying Party | 356 '----------' '---------------' 358 Figure 1: Conceptual Data Flow 360 An Attester creates Evidence that is conveyed to a Verifier. 
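As a non-normative illustration, the roles and conceptual messages of Figure 1 could be sketched in Python; all class, field, and function names here are illustrative and are not defined by this architecture:

```python
# Non-normative sketch of the Figure 1 data flow. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Evidence:                       # produced by the Attester
    claims: dict

@dataclass
class AttestationResult:              # produced by the Verifier
    attester_id: str
    trustworthy: bool

def verifier(evidence, reference_values, endorsements, policy_for_evidence):
    """Appraise Evidence using Reference Values and Endorsements,
    then emit an Attestation Result (the center of Figure 1)."""
    ok = policy_for_evidence(evidence.claims, reference_values, endorsements)
    return AttestationResult(attester_id=evidence.claims.get("id", "unknown"),
                             trustworthy=ok)

def relying_party(result, policy_for_results):
    """Apply the Appraisal Policy for Attestation Results."""
    return policy_for_results(result)
```

The appraisal policies are passed in as callables here only to mirror the figure's point that they are supplied by the Verifier Owner and Relying Party Owner rather than being fixed by the architecture.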
362 The Verifier uses the Evidence, any Reference Values from Reference 363 Value Providers, and any Endorsements from Endorsers, by applying an 364 Appraisal Policy for Evidence to assess the trustworthiness of the 365 Attester, and generates Attestation Results for use by Relying 366 Parties. The Appraisal Policy for Evidence might be obtained from an 367 Endorser along with the Endorsements, and/or might be obtained via 368 some other mechanism such as being configured in the Verifier by the 369 Verifier Owner. 371 The Relying Party uses Attestation Results by applying its own 372 appraisal policy to make application-specific decisions such as 373 authorization decisions. The Appraisal Policy for Attestation 374 Results is configured in the Relying Party by the Relying Party 375 Owner, and/or is programmed into the Relying Party. 377 3.1. Appraisal Policies 379 The Verifier, when appraising Evidence, or the Relying Party, when 380 appraising Attestation Results, checks the values of some Claims 381 against constraints specified in its appraisal policy. Such 382 constraints might involve a comparison for equality against a 383 Reference Value, or a check for being in a range bounded by Reference 384 Values, or membership in a set of Reference Values, or a check 385 against values in other Claims, or any other test. 387 3.2. Reference Values 389 Reference Values used in appraisal come from a Reference Value 390 Provider and are then used by the appraisal policy. They might be 391 conveyed in any number of ways, including: 393 * as part of the appraisal policy itself, if the Verifier Owner 394 either acquires Reference Values from a Reference Value Provider 395 or is itself a Reference Value Provider; 397 * as part of an Endorsement, if the Endorser either acquires 398 Reference Values from a Reference Value Provider or is itself a 399 Reference Value Provider; or 401 * via separate communication.
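The kinds of constraint checks described in Section 3.1, applied against Reference Values conveyed in any of the ways listed above, could be sketched non-normatively as follows; the rule encoding is purely illustrative:

```python
# Non-normative sketch of appraisal: check Claim values against
# constraints (Section 3.1). The ("kind", ...) rule tuples are an
# illustrative encoding, not a format defined by this architecture.
def appraise(claims, policy):
    """Return True iff every rule in the appraisal policy is satisfied."""
    for name, rule in policy.items():
        value = claims.get(name)
        kind = rule[0]
        if kind == "equals" and value != rule[1]:          # equality
            return False
        if kind == "in-range" and not (rule[1] <= value <= rule[2]):
            return False                                   # bounded range
        if kind == "member-of" and value not in rule[1]:   # set membership
            return False
        if kind == "matches-claim" and value != claims.get(rule[1]):
            return False                                   # cross-Claim check
    return True
```

In a real Verifier the policy language would be far richer; the point is only that appraisal reduces to testing Claim values against Reference Values and against each other.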
403 The actual data format and semantics of any Reference Values are 404 specific to Claims and implementations. This architecture document 405 does not define any general purpose format for them or general means 406 for comparison. 408 3.3. Two Types of Environments of an Attester 410 As shown in Figure 2, an Attester consists of at least one Attesting 411 Environment and at least one Target Environment. In some 412 implementations, the Attesting and Target Environments might be 413 combined. Other implementations might have multiple Attesting and 414 Target Environments, such as in the examples described in more detail 415 in Section 3.4 and Section 3.5. Other examples may exist, and 416 the examples discussed could also be combined into even more complex 417 implementations. 419 .--------------------------------. 420 | | 421 | Verifier | 422 | | 423 '--------------------------------' 424 ^ 425 | 426 .-------------------------|----------. 427 | | | 428 | .----------------. | | 429 | | Target | | | 430 | | Environment | | | 431 | | | | Evidence | 432 | '----------------' | | 433 | | | | 434 | | | | 435 | Collect | | | 436 | Claims | | | 437 | | | | 438 | v | | 439 | .-------------. | 440 | | Attesting | | 441 | | Environment | | 442 | | | | 443 | '-------------' | 444 | Attester | 445 '------------------------------------' 447 Figure 2: Two Types of Environments 449 Claims are collected from Target Environments. That is, Attesting 450 Environments collect the values and the information to be represented 451 in Claims, by reading system registers and variables, calling into 452 subsystems, taking measurements on code, memory, or other security 453 related assets of the Target Environment. Attesting Environments 454 then format the Claims appropriately, and typically use key material 455 and cryptographic functions, such as signing or cipher algorithms, to 456 create Evidence.
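The collect-format-sign sequence just described could be sketched, non-normatively, as follows; the JSON formatting and the HMAC are stand-ins, since real Attesting Environments typically sign Evidence with a protected asymmetric attestation key:

```python
# Non-normative sketch: an Attesting Environment collects Claims from a
# Target Environment, formats them, and protects them to form Evidence.
# The HMAC is an illustrative stand-in for an attestation signature.
import hashlib
import hmac
import json

def collect_claims(target_environment):
    """Read security-relevant values from the Target Environment,
    here modeled as named accessor functions."""
    return {name: read() for name, read in target_environment.items()}

def create_evidence(claims, attestation_key: bytes):
    """Format the Claims and bind them to the attestation key."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(attestation_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": tag}
```

A Verifier holding the corresponding key material can then check the signature before appraising the Claims.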
There is no limit to or requirement on the types of 457 hardware or software environments that can be used to implement an 458 Attesting Environment, for example: Trusted Execution Environments 459 (TEEs), embedded Secure Elements (eSEs), Trusted Platform Modules 460 (TPMs), or BIOS firmware. 462 An arbitrary execution environment may not, by default, be capable of 463 claims collection for a given Target Environment. Execution 464 environments that are designed specifically to be capable of claims 465 collection are referred to in this document as Attesting 466 Environments. For example, a TPM doesn't actively collect claims 467 itself; instead, it requires another component to feed various values 468 to the TPM. Thus, an Attesting Environment in such a case would be 469 the combination of the TPM together with whatever component is 470 feeding it the measurements. 472 3.4. Layered Attestation Environments 474 By definition, the Attester role generates Evidence. An Attester may 475 consist of one or more nested environments (layers). The root layer 476 of an Attester includes at least one root of trust. In order to 477 appraise Evidence generated by an Attester, the Verifier needs to 478 trust the Attester's root of trust. Trust in the Attester's root of 479 trust can be established either directly (e.g., the Verifier puts the 480 root of trust's public key into its trust anchor store) or 481 transitively via an Endorser (e.g., the Verifier puts the Endorser's 482 public key into its trust anchor store). In layered attestation, a 483 root of trust is the initial Attesting Environment. Claims can be 484 collected from or about each layer. The corresponding Claims can be 485 structured in a nested fashion that reflects the nesting of the 486 Attester's layers. Normally, Claims are not self-asserted; rather, a 487 previous layer acts as the Attesting Environment for the next layer. 488 Claims about a root of trust typically are asserted by Endorsers.
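As a non-normative illustration of this nesting, each layer can record a measurement of the next layer before passing control to it; the layer names and the use of SHA-256 digests here are purely illustrative:

```python
# Non-normative sketch of layered attestation: Claims about layer N+1
# are produced by layer N, and only the first layer is vouched for by
# the root of trust itself. Names and digest choice are illustrative.
import hashlib

def measure(code: bytes) -> str:
    """A hash stands in for whatever measurement a layer performs."""
    return hashlib.sha256(code).hexdigest()

def layered_claims(layers):
    """layers: ordered (name, code) pairs, earliest-started layer first.
    Each layer is measured by the layer below it; the first layer is
    measured by the root of trust."""
    measurers = ["root-of-trust"] + [name for name, _ in layers]
    return [{"target": name,
             "measured-by": measurer,
             "digest": measure(code)}
            for measurer, (name, code) in zip(measurers, layers)]
```

The resulting list mirrors the nested structure of the Claims: each entry asserts who measured whom, which is what lets a Verifier walk the chain back to the root of trust.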
490 The device illustrated in Figure 3 includes (A) a BIOS stored in 491 read-only memory, (B) an operating system kernel, and (C) an 492 application or workload. 494 .----------. .----------. 495 | | | | 496 | Endorser |------------------->| Verifier | 497 | | Endorsements | | 498 '----------' for A, B, and C '----------' 499 ^ 500 .------------------------------------. | 501 | | | 502 | .---------------------------. | | 503 | | Target | | | Layered 504 | | Environment | | | Evidence 505 | | C | | | for 506 | '---------------------------' | | B and C 507 | Collect | | | 508 | Claims | | | 509 | .---------------|-----------. | | 510 | | Target v | | | 511 | | Environment .-----------. | | | 512 | | B | Attesting | | | | 513 | | |Environment|-----------' 514 | | | B | | | 515 | | '-----------' | | 516 | | ^ | | 517 | '---------------------|-----' | 518 | Collect | | Evidence | 519 | Claims v | for B | 520 | .-----------. | 521 | | Attesting | | 522 | |Environment| | 523 | | A | | 524 | '-----------' | 525 | | 526 '------------------------------------' 528 Figure 3: Layered Attester 530 Attesting Environment A, the read-only BIOS in this example, has to 531 ensure the integrity of the bootloader (Target Environment B). There 532 are potentially multiple kernels to boot, and the decision is up to 533 the bootloader. Only a bootloader with intact integrity will make an 534 appropriate decision. Therefore, the Claims relating to the 535 integrity of the bootloader have to be measured securely. At this 536 stage of the boot-cycle of the device, the Claims collected typically 537 cannot be composed into Evidence. 539 After the boot sequence is started, the BIOS conducts the most 540 important and defining feature of layered attestation, which is that 541 the successfully measured Target Environment B now becomes (or 542 contains) an Attesting Environment for the next layer. This 543 procedure in Layered Attestation is sometimes called "staging". 
It 544 is important that the new Attesting Environment B not be able to 545 alter any Claims about its own Target Environment B. This can be 546 ensured by having those Claims be either signed by Attesting Environment 547 A or stored in an untamperable manner by Attesting Environment A. 549 Continuing with this example, the bootloader's Attesting Environment 550 B is now in charge of collecting Claims about Target Environment C, 551 which in this example is the kernel to be booted. The final Evidence 552 thus contains two sets of Claims: one set about the bootloader as 553 measured and signed by the BIOS, plus a set of Claims about the 554 kernel as measured and signed by the bootloader. 556 This example could be extended further by making the kernel become 557 another Attesting Environment for an application as another Target 558 Environment. This would result in a third set of Claims in the 559 Evidence pertaining to that application. 561 The essence of this example is a cascade of staged environments. 562 Each environment has the responsibility of measuring the next 563 environment before the next environment is started. In general, the 564 number of layers may vary by device or implementation, and an 565 Attesting Environment might even have multiple Target Environments 566 that it measures, rather than only one as shown in Figure 3. 568 3.5. Composite Device 570 A Composite Device is an entity composed of multiple sub-entities 571 such that its trustworthiness has to be determined by the appraisal 572 of all these sub-entities. 574 Each sub-entity has at least one Attesting Environment collecting the 575 Claims from at least one Target Environment; this sub-entity 576 then generates Evidence about its trustworthiness. Therefore, each sub- 577 entity can be called an Attester. Among all the Attesters, only some may 578 have the ability to communicate with the Verifier 579 while others do not.
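The aggregation performed by a lead Attester on behalf of the other Attesters could be pictured, non-normatively, as follows; all names and structure are illustrative:

```python
# Non-normative sketch of a Composite Device: each sub-Attester produces
# Evidence about its own module; the lead Attester adds Claims about the
# device layout and conveys one composite bundle to the Verifier.
def sub_attester_evidence(module_name, claims):
    """Evidence a sub-Attester generates about its own module."""
    return {"module": module_name, "claims": claims}

def lead_attester_evidence(layout_claims, collected):
    """The lead Attester asserts Claims about the overall layout and
    embeds the Evidence gathered from sub-Attesters over internal
    links, since only the lead can reach the Verifier."""
    return {"layout": layout_claims, "sub-evidence": collected}
```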
581 For example, a carrier-grade router consists of a chassis and 582 multiple slots. The trustworthiness of the router depends on all its 583 slots' trustworthiness. Each slot has an Attesting Environment such 584 as a TEE collecting the Claims of its boot process, after which it 585 generates Evidence from the Claims. Among these slots, only the main 586 slot can communicate with the Verifier; the other slots cannot, but 587 they can communicate with the main slot over the links between 588 them inside the router. The main slot therefore collects the Evidence of the 589 other slots, produces the final Evidence of the whole router, and 590 conveys that final Evidence to the Verifier. The router is thus 591 a Composite Device, each slot is an Attester, and the main slot is 592 the lead Attester. 594 Another example is a multi-chassis router composed of multiple single 595 carrier-grade routers. The multi-chassis router provides higher 596 throughput by interconnecting multiple routers and can be logically 597 treated as one router for simpler management. A multi-chassis router 598 provides a management point that connects to the Verifier. The other 599 routers are connected to the main router only by network cables, 600 and are therefore managed and appraised with the main router's 601 help. In this case, the multi-chassis router is the Composite 602 Device, each router is an Attester, and the main router is the lead 603 Attester. 605 Figure 4 depicts the conceptual data flow for a Composite Device. 607 .-----------------------------. 608 | Verifier | 609 '-----------------------------' 610 ^ 611 | 612 | Evidence of 613 | Composite Device 614 | 615 .----------------------------------|-------------------------------. 616 | .--------------------------------|-----. .------------. | 617 | | Collect .------------. | | | | 618 | | Claims .--------->| Attesting |<--------| Attester B |-. | 619 | | | |Environment | | '------------. | | 620 | | .----------------. 
| |<----------| Attester C |-. | 621 | | | Target | | | | '------------' | | 622 | | | Environment(s) | | |<------------| ... | | 623 | | | | '------------' | Evidence '------------' | 624 | | '----------------' | of | 625 | | | Attesters | 626 | | lead Attester A | (via Internal Links or | 627 | '--------------------------------------' Network Connections) | 628 | | 629 | Composite Device | 630 '------------------------------------------------------------------' 632 Figure 4: Composite Device 634 In the Composite Device, each Attester generates its own Evidence by 635 its Attesting Environment(s) collecting the Claims from its Target 636 Environment(s). The lead Attester collects the Evidence from the 637 other Attesters and conveys it to a Verifier. Collection of Evidence 638 from sub-entities may itself be a form of Claims collection that 639 results in Evidence asserted by the lead Attester. The lead Attester 640 generates the Evidence about the layout of the Composite Device, 641 while sub-Attesters generate Evidence about their respective modules. 643 In this situation, the trust model described in Section 7 is also 644 suitable for this inside Verifier. 646 3.6. Implementation Considerations 648 An entity can take on multiple RATS roles (e.g., Attester, Verifier, 649 Relying Party, etc.) at the same time. Multiple entities can 650 cooperate to implement a single RATS role as well. The combination 651 of roles and entities can be arbitrary. For example, in the 652 Composite Device scenario, the entity inside the lead Attester can 653 also take on the role of a Verifier, and the outer entity of Verifier 654 can take on the role of a Relying Party. After collecting the 655 Evidence of other Attesters, this inside Verifier uses Endorsements 656 and appraisal policies (obtained the same way as any other Verifier) 657 in the verification process to generate Attestation Results. 
The
658 inside Verifier then conveys the Attestation Results of other
659 Attesters to the outside Verifier, whether in the same conveyance
660 protocol as the Evidence or not.

662 4. Terminology

664 This document uses the following terms.

666 4.1. Roles

668 Attester: A role performed by an entity (typically a device) whose
669 Evidence must be appraised in order to infer the extent to which
670 the Attester is considered trustworthy, such as when deciding
671 whether it is authorized to perform some operation.

673 Produces: Evidence

675 Relying Party: A role performed by an entity that depends on the
676 validity of information about an Attester, for purposes of
677 reliably applying application-specific actions. Compare /relying
678 party/ in [RFC4949].

680 Consumes: Attestation Results

682 Verifier: A role performed by an entity that appraises the validity
683 of Evidence about an Attester and produces Attestation Results to
684 be used by a Relying Party.

686 Consumes: Evidence, Reference Values, Endorsements, Appraisal
687 Policy for Evidence

689 Produces: Attestation Results

691 Relying Party Owner: A role performed by an entity (typically an
692 administrator) that is authorized to configure Appraisal Policy
693 for Attestation Results in a Relying Party.

695 Produces: Appraisal Policy for Attestation Results

697 Verifier Owner: A role performed by an entity (typically an
698 administrator) that is authorized to configure Appraisal Policy
699 for Evidence in a Verifier.

701 Produces: Appraisal Policy for Evidence

703 Endorser: A role performed by an entity (typically a manufacturer)
704 whose Endorsements help Verifiers appraise the authenticity of
705 Evidence.

707 Produces: Endorsements

709 Reference Value Provider: A role performed by an entity (typically a
710 manufacturer) whose Reference Values help Verifiers appraise
711 Evidence to determine if acceptable known Claims have been
712 recorded by the Attester.

714 Produces: Reference Values

716 4.2.
Artifacts 718 Claim: A piece of asserted information, often in the form of a name/ 719 value pair. Claims make up the usual structure of Evidence and 720 other RATS artifacts. Compare /claim/ in [RFC7519]. 722 Endorsement: A secure statement that an Endorser vouches for the 723 integrity of an Attester's various capabilities such as Claims 724 collection and Evidence signing. 726 Consumed By: Verifier 728 Produced By: Endorser 730 Evidence: A set of Claims generated by an Attester to be appraised 731 by a Verifier. Evidence may include configuration data, 732 measurements, telemetry, or inferences. 734 Consumed By: Verifier 736 Produced By: Attester 738 Attestation Result: The output generated by a Verifier, typically 739 including information about an Attester, where the Verifier 740 vouches for the validity of the results. 742 Consumed By: Relying Party 744 Produced By: Verifier 746 Appraisal Policy for Evidence: A set of rules that informs how a 747 Verifier evaluates the validity of information about an Attester. 748 Compare /security policy/ in [RFC4949]. 750 Consumed By: Verifier 752 Produced By: Verifier Owner 754 Appraisal Policy for Attestation Results: A set of rules that direct 755 how a Relying Party uses the Attestation Results regarding an 756 Attester generated by the Verifiers. Compare /security policy/ in 757 [RFC4949]. 759 Consumed by: Relying Party 761 Produced by: Relying Party Owner 763 Reference Values: A set of values against which values of Claims can 764 be compared as part of applying an Appraisal Policy for Evidence. 765 Reference Values are sometimes referred to in other documents as 766 known-good values, golden measurements, or nominal values, 767 although those terms typically assume comparison for equality, 768 whereas here Reference Values might be more general and be used in 769 any sort of comparison. 771 Consumed By: Verifier 773 Produced By: Reference Value Provider 775 5. 
Topological Patterns

777 Figure 1 shows a data-flow diagram for communication between an
778 Attester, a Verifier, and a Relying Party. The Attester conveys its
779 Evidence to the Verifier for appraisal, and the Relying Party gets
780 the Attestation Result from the Verifier. This section refines it by
781 describing two reference models, as well as one example composition
782 thereof. The discussion that follows is for illustrative purposes
783 only and does not constrain the interactions between RATS roles to
784 the presented patterns.

786 5.1. Passport Model

788 The passport model is so named because of its resemblance to how
789 nations issue passports to their citizens. The nature of the
790 Evidence that an individual needs to provide to its local authority
791 is specific to the country involved. The citizen retains control of
792 the resulting passport document and presents it to other entities,
793 such as an airport immigration desk, when it needs to assert a
794 citizenship or identity claim. The passport is considered sufficient
795 because it vouches for the citizenship and identity claims, and it is
796 issued by a trusted authority. Thus, in this immigration desk
797 analogy, the passport issuing agency is a Verifier, the passport is
798 an Attestation Result, and the immigration desk is a Relying Party.

800 In this model, an Attester conveys Evidence to a Verifier, which
801 compares the Evidence against its appraisal policy. The Verifier
802 then gives back an Attestation Result. If the Attestation Result was
803 a successful one, the Attester can then present the Attestation
804 Result (and possibly additional Claims) to a Relying Party, which
805 then compares this information against its own appraisal policy.

807 Three ways in which the process may fail are:

809 * First, the Verifier may not issue a positive Attestation Result
810 due to the Evidence not passing the Appraisal Policy for Evidence.
812 * Second, the Relying Party may examine the
813 Attestation Result and find that, based
814 upon its Appraisal Policy for Attestation Results, the result does
815 not pass the policy.

817 * Third, the Verifier may be unreachable or unavailable.

819 Since the resource access protocol between the Attester and Relying
820 Party includes an Attestation Result, in this model the details of
821 that protocol constrain the serialization format of the Attestation
822 Result. The format of the Evidence, on the other hand, is only
823 constrained by the Attester-Verifier remote attestation protocol.
824 This implies that interoperability and standardization are more
825 relevant for Attestation Results than for Evidence.

827 +-------------+
828 | | Compare Evidence
829 | Verifier | against appraisal policy
830 | |
831 +-------------+
832 ^ |
833 Evidence| |Attestation
834 | | Result
835 | v
836 +----------+ +---------+
837 | |------------->| |Compare Attestation
838 | Attester | Attestation | Relying | Result against
839 | | Result | Party | appraisal
840 +----------+ +---------+ policy

842 Figure 5: Passport Model

844 5.2. Background-Check Model

846 The background-check model is so named because of its resemblance to
847 how employers and volunteer organizations perform background checks.
848 When a prospective employee provides claims about education or
849 previous experience, the employer will contact the respective
850 institutions or former employers to validate the claim. Volunteer
851 organizations often perform police background checks on volunteers in
852 order to determine the volunteer's trustworthiness. Thus, in this
853 analogy, a prospective volunteer is an Attester, the organization is
854 the Relying Party, and the organization that issues a report is a
855 Verifier.

857 In this model, an Attester conveys Evidence to a Relying Party, which
858 simply passes it on to a Verifier.
The Verifier then compares the
859 Evidence against its appraisal policy, and returns an Attestation
860 Result to the Relying Party. The Relying Party then compares the
861 Attestation Result against its own appraisal policy.

863 The resource access protocol between the Attester and Relying Party
864 includes Evidence rather than an Attestation Result, but that
865 Evidence is not processed by the Relying Party. Since the Evidence
866 is merely forwarded on to a trusted Verifier, any serialization
867 format can be used for Evidence because the Relying Party does not
868 need a parser for it. The only requirement is that the Evidence can
869 be _encapsulated in_ the format required by the resource access
870 protocol between the Attester and Relying Party.

872 However, as in the passport model, an Attestation Result is still
873 consumed by the Relying Party. Code footprint and attack surface
874 area can be minimized by using a serialization format for which the
875 Relying Party already needs a parser to support the protocol between
876 the Attester and Relying Party, which may be an existing standard or
877 widely deployed resource access protocol. Such minimization is
878 especially important if the Relying Party is a constrained node.

880 +-------------+
881 | | Compare Evidence
882 | Verifier | against appraisal
883 | | policy
884 +-------------+
885 ^ |
886 Evidence| |Attestation
887 | | Result
888 | v
889 +------------+ +-------------+
890 | |-------------->| | Compare Attestation
891 | Attester | Evidence | Relying | Result against
892 | | | Party | appraisal policy
893 +------------+ +-------------+

895 Figure 6: Background-Check Model

897 5.3. Combinations

899 One variation of the background-check model is one in which the Relying
900 Party and the Verifier are on the same machine, performing both
901 functions together. In this case, there is no need for a protocol
902 between the two.
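For illustration only, the two message flows above (Figures 5 and 6) can be sketched in Python. The dict-based artifact shapes, claim names, and function names here are invented for the sketch; they are not data formats or APIs defined by this architecture:

```python
# Non-normative sketch of the passport and background-check models.
# Evidence and Attestation Results are modeled as plain dicts; real
# systems would use signed, serialized artifacts (e.g., CWT/JWT).

def verifier_appraise(evidence, policy_for_evidence):
    """Verifier: compare Evidence against its Appraisal Policy for Evidence."""
    compliant = all(evidence.get(k) == v for k, v in policy_for_evidence.items())
    return {"attester_id": evidence["attester_id"], "compliant": compliant}

def relying_party_accept(result, policy_for_results):
    """Relying Party: appraise an Attestation Result against its own policy."""
    return bool(result["compliant"] and policy_for_results(result))

def passport_flow(evidence, policy_for_evidence, policy_for_results):
    # Attester -> Verifier: convey Evidence, receive the Attestation Result.
    result = verifier_appraise(evidence, policy_for_evidence)
    # Attester -> Relying Party: present the Attestation Result.
    return relying_party_accept(result, policy_for_results)

def background_check_flow(evidence, policy_for_evidence, policy_for_results):
    # Attester -> Relying Party -> Verifier: Evidence is forwarded unread.
    result = verifier_appraise(evidence, policy_for_evidence)
    # Verifier -> Relying Party: the Attestation Result comes straight back.
    return relying_party_accept(result, policy_for_results)
```

In both sketches the same appraisal happens at the Verifier; the models differ only in which party conveys the Evidence and receives the Attestation Result, as noted in the comments.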
904 It is also worth pointing out that the choice of model depends on the 905 use case, and that different Relying Parties may use different 906 topological patterns. 908 The same device may need to create Evidence for different Relying 909 Parties and/or different use cases. For instance, it would use one 910 model to provide Evidence to a network infrastructure device to gain 911 access to the network, and the other model to provide Evidence to a 912 server holding confidential data to gain access to that data. As 913 such, both models may simultaneously be in use by the same device. 915 Figure 7 shows another example of a combination where Relying Party 1 916 uses the passport model, whereas Relying Party 2 uses an extension of 917 the background-check model. Specifically, in addition to the basic 918 functionality shown in Figure 6, Relying Party 2 actually provides 919 the Attestation Result back to the Attester, allowing the Attester to 920 use it with other Relying Parties. This is the model that the 921 Trusted Application Manager plans to support in the TEEP architecture 922 [I-D.ietf-teep-architecture]. 924 +-------------+ 925 | | Compare Evidence 926 | Verifier | against appraisal policy 927 | | 928 +-------------+ 929 ^ | 930 Evidence| |Attestation 931 | | Result 932 | v 933 +-------------+ 934 | | Compare 935 | Relying | Attestation Result 936 | Party 2 | against appraisal policy 937 +-------------+ 938 ^ | 939 Evidence| |Attestation 940 | | Result 941 | v 942 +----------+ +----------+ 943 | |-------------->| | Compare Attestation 944 | Attester | Attestation | Relying | Result against 945 | | Result | Party 1 | appraisal policy 946 +----------+ +----------+ 948 Figure 7: Example Combination 950 6. Roles and Entities 952 An entity in the RATS architecture includes at least one of the roles 953 defined in this document. An entity can aggregate more than one role 954 into itself. These collapsed roles combine the duties of multiple 955 roles. 
957 In these cases, interaction between these roles does not necessarily
958 use the Internet Protocol. They can use a loopback device or
959 other IP-based communication between separate environments, but they
960 do not have to. Alternative channels to convey conceptual messages
961 include function calls, sockets, GPIO interfaces, local busses, or
962 hypervisor calls. This type of conveyance is typically found in
963 Composite Devices. Most importantly, these conveyance methods are
964 out-of-scope of RATS, but they are presumed to exist in order to
965 convey conceptual messages appropriately between roles.

967 For example, an entity that connects both to a wide-area network and
968 to a system bus takes on both the Attester and Verifier roles.
969 As a system bus-connected entity, a Verifier consumes Evidence from
970 other devices connected to the system bus that implement Attester
971 roles. As a wide-area network connected entity, it may implement an
972 Attester role.

974 In essence, an entity that combines more than one role creates and
975 consumes the corresponding conceptual messages as defined in this
976 document.

978 7. Trust Model

980 7.1. Relying Party

982 This document covers scenarios for which a Relying Party trusts a
983 Verifier that can appraise the trustworthiness of information about
984 an Attester. Such trust might come from the Relying Party trusting the
985 Verifier (or its public key) directly, or from trusting an
986 entity (e.g., a Certificate Authority) that is in the Verifier's
987 certificate chain.

989 The Relying Party might implicitly trust a Verifier, such as in a
990 Verifier/Relying Party combination where the Verifier and Relying
991 Party roles are combined.
Or, for a stronger level of security, the
992 Relying Party might require that the Verifier first provide
993 information about itself that the Relying Party can use to assess the
994 trustworthiness of the Verifier before accepting its Attestation
995 Results.

997 For example, one explicit way for a Relying Party "A" to establish
998 such trust in a Verifier "B" would be for B to first act as an
999 Attester where A acts as a combined Verifier/Relying Party. If A
1000 then accepts B as trustworthy, it can choose to accept B as a
1001 Verifier for other Attesters.

1003 As another example, the Relying Party can establish trust in the
1004 Verifier by out-of-band establishment of key material, combined with
1005 a protocol like TLS to communicate. There is an assumption that
1006 between the establishment of the trusted key material and the
1007 creation of the Evidence, the Verifier has not been compromised.

1009 Similarly, the Relying Party also needs to trust the Relying Party
1010 Owner for providing its Appraisal Policy for Attestation Results, and
1011 in some scenarios the Relying Party might even require that the
1012 Relying Party Owner go through a remote attestation procedure with it
1013 before the Relying Party will accept an updated policy. This can be
1014 done similarly to how a Relying Party could establish trust in a
1015 Verifier as discussed above.

1017 7.2. Attester

1019 In some scenarios, Evidence might contain sensitive information such
1020 as Personally Identifiable Information (PII) or system identifiable
1021 information. Thus, an Attester must trust entities to which it
1022 conveys Evidence not to reveal sensitive data to unauthorized
1023 parties. The Verifier might share this information with other
1024 authorized parties, according to a governing policy that addresses the
1025 handling of sensitive information (potentially included in Appraisal
1026 Policies for Evidence).
In the background-check model, this Evidence
1027 may also be revealed to Relying Parties.

1029 When Evidence contains sensitive information, an Attester typically
1030 requires that a Verifier authenticate itself (e.g., at TLS session
1031 establishment) and might even request a remote attestation before the
1032 Attester sends the sensitive Evidence. This can be done by having
1033 the Attester first act as a Verifier/Relying Party, and the Verifier
1034 act as its own Attester, as discussed above.

1036 7.3. Relying Party Owner

1038 The Relying Party Owner might also require that the Relying Party
1039 first act as an Attester, providing Evidence that the Owner can
1040 appraise, before the Owner would give the Relying Party an updated
1041 policy that might contain sensitive information. In such a case,
1042 authentication or attestation in both directions might be needed, in
1043 which case typically one side's Evidence must be considered safe to
1044 share with an untrusted entity, in order to bootstrap the sequence.
1045 See Section 11 for more discussion.

1047 7.4. Verifier

1049 The Verifier trusts (or more specifically, the Verifier's security
1050 policy is written in a way that configures the Verifier to trust) a
1051 manufacturer, or the manufacturer's hardware, so as to be able to
1052 appraise the trustworthiness of that manufacturer's devices. In a
1053 typical solution, a Verifier comes to trust an Attester indirectly by
1054 having an Endorser (such as a manufacturer) vouch for the Attester's
1055 ability to securely generate Evidence.

1057 In some solutions, a Verifier might be configured to directly trust
1058 an Attester by having the Attester's key material
1059 (rather than the Endorser's) in its trust anchor store.
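As an illustrative, non-normative sketch of this choice, keys can be modeled as opaque strings; a real Verifier would verify signatures and certificate chains rather than compare identifiers, and the mapping names below are invented:

```python
# Non-normative sketch: a Verifier's trust anchor store may hold an
# Endorser's key (trust every device that Endorser vouches for) or an
# individual Attester's key (trust narrowed to that specific device).

def evidence_signer_trusted(signer_key, endorsed_by, trust_anchors):
    """Check the Evidence-signing key against the trust anchor store.

    signer_key:    key material the Evidence is signed with
    endorsed_by:   map from Attester key to the Endorser key vouching
                   for it (e.g., a manufacturer CA key)
    trust_anchors: keys configured in the Verifier's trust anchor store
    """
    if signer_key in trust_anchors:
        return True  # direct trust in this specific device
    # Otherwise, indirect trust via an Endorser that vouches for it.
    return endorsed_by.get(signer_key) in trust_anchors
```

The direct entry narrows trust to one device; the Endorser entry scales to all devices the Endorser vouches for, mirroring the trade-off discussed in this section.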
1061 Such direct trust must first be established at the time of trust
1062 anchor store configuration either by checking with an Endorser at
1063 that time, or by conducting a security analysis of the specific
1064 device. Having the Attester directly in the trust anchor store
1065 narrows the Verifier's trust to only specific devices rather than all
1066 devices the Endorser might vouch for, such as all devices
1067 manufactured by the same manufacturer in the case that the Endorser
1068 is a manufacturer.

1070 Such narrowing is often important since physical possession of a
1071 device can also be used to conduct a number of attacks, and so a
1072 device in a physically secure environment (such as one's own
1073 premises) may be considered trusted whereas devices owned by others
1074 would not be. This often results in a desire either to have the
1075 owner run their own Endorser that would only Endorse devices they
1076 own, or to use Attesters directly in the trust anchor store. When
1077 many Attesters are owned, the use of an Endorser becomes more
1078 scalable.

1080 A Verifier might also appraise the trustworthiness of an application
1081 component, operating system component, or service under the
1082 assumption that information provided about it by the lower-layer
1083 firmware or software is true. A stronger level of assurance of
1084 security comes when information can be vouched for by hardware or by
1085 ROM code, especially if such hardware is physically resistant to
1086 tampering. In most cases, components that have to be
1087 vouched for via Endorsements because no Evidence is generated about
1088 them are referred to as roots of trust.

1090 The manufacturer having arranged for an Attesting Environment to be
1091 provisioned with key material with which to sign Evidence, the
1092 Verifier is then provided with some way of verifying the signature on
1093 the Evidence.
This may be in the form of an appropriate trust
1094 anchor, or the Verifier may be provided with a database of public
1095 keys (rather than certificates) or even carefully secured lists of
1096 symmetric keys.

1098 How the Verifier validates the signatures
1099 produced by the Attester is critical to the secure operation of an
1100 Attestation system, but is not the subject of standardization within
1101 this architecture.

1103 A conveyance protocol that provides authentication and integrity
1104 protection can be used to convey Evidence that is otherwise
1105 unprotected (e.g., not signed). Appropriate conveyance of
1106 unprotected Evidence (e.g., [I-D.birkholz-rats-uccs]) relies on the
1107 following protection capabilities of the conveyance protocol:

1109 1. The key material used to authenticate and integrity protect the
1110 conveyance channel is trusted by the Verifier to speak for the
1111 Attesting Environment(s) that collected Claims about the Target
1112 Environment(s).

1114 2. All unprotected Evidence that is conveyed is supplied exclusively
1115 by the Attesting Environment that has the key material that
1116 protects the conveyance channel.

1118 3. The root of trust protects both the conveyance channel key
1119 material and the Attesting Environment with equivalent strength
1120 protections.

1122 See Section 12 for discussion on security strength.

1124 7.5. Endorser, Reference Value Provider, and Verifier Owner

1126 In some scenarios, the Endorser, Reference Value Provider, and
1127 Verifier Owner may need to trust the Verifier before giving the
1128 Endorsement, Reference Values, or appraisal policy to it. This can
1129 be done similarly to how a Relying Party might establish trust in a
1130 Verifier.
1132 As discussed in Section 7.3, authentication or attestation in both
1133 directions might be needed, in which case typically one side's
1134 identity or Evidence must be considered safe to share with an
1135 untrusted entity, in order to bootstrap the sequence. See Section 11
1136 for more discussion.

1138 8. Conceptual Messages

1140 8.1. Evidence

1142 Evidence is a set of Claims about the target environment that reveal
1143 operational status, health, configuration, or construction that have
1144 security relevance. Evidence is evaluated by a Verifier to establish
1145 its relevance, compliance, and timeliness. Claims need to be
1146 collected in a manner that is reliable. Evidence needs to be
1147 securely associated with the target environment so that the Verifier
1148 cannot be tricked into accepting Claims originating from a different
1149 environment (that may be more trustworthy). Evidence also must be
1150 protected from man-in-the-middle attackers who may observe, change, or
1151 misdirect Evidence as it travels from Attester to Verifier. The
1152 timeliness of Evidence can be captured using Claims that pinpoint the
1153 time or interval when changes in operational status, health, and so
1154 forth occur.

1156 8.2. Endorsements

1158 An Endorsement is a secure statement that some entity (e.g., a
1159 manufacturer) vouches for the integrity of the device's signing
1160 capability. For example, if the signing capability is in hardware,
1161 then an Endorsement might be a manufacturer certificate that signs a
1162 public key whose corresponding private key is only known inside the
1163 device's hardware. Thus, when Evidence and such an Endorsement are
1164 used together, an appraisal procedure can be conducted based on
1165 appraisal policies that may not be specific to the device instance,
1166 but merely specific to the manufacturer providing the Endorsement.
1167 For example, an appraisal policy might simply check that devices from
1168 a given manufacturer have information matching a set of Reference
1169 Values, or an appraisal policy might contain more complex logic
1170 for appraising the validity of information.

1172 However, while an appraisal policy that treats all devices from a
1173 given manufacturer the same may be appropriate for some use cases, it
1174 would be inappropriate to use such an appraisal policy as the sole
1175 means of authorization for use cases that wish to constrain _which_
1176 compliant devices are considered authorized for some purpose. For
1177 example, an enterprise using remote attestation for Network Endpoint
1178 Assessment may not wish to let every healthy laptop from the same
1179 manufacturer onto the network, but may instead want to let only devices
1180 that it legally owns onto the network. Thus, an Endorsement may be
1181 helpful information in authenticating information about a device, but
1182 is not necessarily sufficient to authorize access to resources, which
1183 may need device-specific information such as a public key for the
1184 device, component, or user on the device.

1186 8.3. Attestation Results

1188 Attestation Results are the input used by the Relying Party to decide
1189 the extent to which it will trust a particular Attester, and whether to allow it
1190 to access some data or perform some operation.

1192 Attestation Results may carry a boolean value indicating compliance
1193 or non-compliance with a Verifier's appraisal policy, or may carry a
1194 richer set of Claims about the Attester, against which the Relying
1195 Party applies its Appraisal Policy for Attestation Results.

1197 The quality of the Attestation Results depends upon the ability of the
1198 Verifier to evaluate the Attester.
Different Attesters have a
1199 different _Strength of Function_ [strengthoffunction], which results
1200 in the Attestation Results being qualitatively different in strength.

1202 An Attestation Result that indicates non-compliance can be used by an
1203 Attester (in the passport model) or a Relying Party (in the
1204 background-check model) to indicate that the Attester should not be
1205 treated as authorized and may be in need of remediation. In some
1206 cases, it may even indicate that the Evidence itself cannot be
1207 authenticated as being correct.

1209 By default, the Relying Party does not believe the Attester to be
1210 compliant. Upon receipt of an authentic Attestation Result, and provided that
1211 the Appraisal Policy for Attestation Results is satisfied, the
1212 Attester is allowed to perform the prescribed actions or gain the prescribed access. The
1213 simplest such Appraisal Policy might authorize granting the Attester
1214 full access or control over the resources guarded by the Relying
1215 Party. A more complex Appraisal Policy might involve using the
1216 information provided in the Attestation Result to compare against
1217 expected values, or to apply complex analysis of other information
1218 contained in the Attestation Result.

1220 Thus, Attestation Results often need to include detailed information
1221 about the Attester, for use by Relying Parties, much like physical
1222 passports and driver's licenses include personal information such as
1223 name and date of birth. Unlike Evidence, which is often very device-
1224 and vendor-specific, Attestation Results can be vendor-neutral if the
1225 Verifier has a way to generate vendor-agnostic information based on
1226 the appraisal of vendor-specific information in Evidence. This
1227 allows a Relying Party's appraisal policy to be simpler, potentially
1228 based on standard ways of expressing the information, while still
1229 allowing interoperability with heterogeneous devices.
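The two styles of Appraisal Policy for Attestation Results described above (a boolean compliance claim versus a richer Claim set) can be sketched as follows; the claim names ("compliant", "fw_version", "debug_disabled") are invented for illustration and are not defined by this document:

```python
# Non-normative sketch of two Appraisal Policy styles at a Relying Party.

def simple_policy(attestation_result):
    # Boolean-compliance style: by default the Relying Party does not
    # believe the Attester to be compliant, hence the False default.
    return attestation_result.get("compliant", False)

def richer_policy(attestation_result):
    # Richer-claims style: compare individual Claims in the Attestation
    # Result against expected values, e.g., an allowed firmware set and
    # a required security setting.
    return (attestation_result.get("fw_version") in {"1.2", "1.3"}
            and attestation_result.get("debug_disabled") is True)
```

The simple policy corresponds to granting full access upon compliance, while the richer policy corresponds to comparing information in the Attestation Result against expected values, as described above.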
1231 Finally, whereas Evidence is signed by the device (or indirectly by a
1232 manufacturer, if Endorsements are used), Attestation Results are
1233 signed by a Verifier, allowing a Relying Party to only need a trust
1234 relationship with one entity, rather than a larger set of entities,
1235 for purposes of its appraisal policy.

1237 9. Claims Encoding Formats

1239 The following diagram illustrates a relationship to which remote
1240 attestation is desired to be added:

1242 +-------------+ +------------+ Evaluate
1243 | |-------------->| | request
1244 | Attester | Access some | Relying | against
1245 | | resource | Party | security
1246 +-------------+ +------------+ policy

1248 Figure 8: Typical Resource Access

1250 In this diagram, the protocol between an Attester and a Relying Party
1251 can be any new or existing protocol (e.g., HTTP(S), CoAP(S), ROLIE
1252 [RFC8322], 802.1X, OPC UA [OPCUA], etc.), depending on the use case.

1254 Such protocols typically already have mechanisms for passing security
1255 information for purposes of authentication and authorization. Common
1256 formats include JWTs [RFC7519], CWTs [RFC8392], and X.509
1257 certificates.

1259 Retrofitting already deployed protocols with remote attestation
1260 requires adding RATS conceptual messages to the existing data flows.
1261 This must be done in a way that does not degrade the security
1262 properties of the system and should use the native extension
1263 mechanisms provided by the underlying protocol. For example, if the
1264 TLS handshake is to be extended with remote attestation capabilities,
1265 attestation Evidence may be embedded in an ad hoc X.509 certificate
1266 extension (e.g., [TCG-DICE]), or in a new TLS Certificate Type
1267 (e.g., [I-D.tschofenig-tls-cwt]).

1269 Especially for constrained nodes, there is a desire to minimize the
1270 amount of parsing code needed in a Relying Party, in order both to
1271 minimize footprint and to minimize the attack surface area.
So while 1272 it would be possible to embed a CWT inside a JWT, or a JWT inside an 1273 X.509 extension, etc., there is a desire to encode the information 1274 natively in the format that is natural for the Relying Party. 1276 This motivates having a common "information model" that describes the 1277 set of remote attestation related information in an encoding-agnostic 1278 way, and allowing multiple encoding formats (CWT, JWT, X.509, etc.) 1279 that encode the same information into the Claims format needed by the 1280 Relying Party. 1282 The following diagram illustrates that Evidence and Attestation 1283 Results might each have multiple possible encoding formats, so that 1284 they can be conveyed by various existing protocols. It also 1285 motivates why the Verifier might also be responsible for accepting 1286 Evidence that encodes Claims in one format, while issuing Attestation 1287 Results that encode Claims in a different format. 1289 Evidence Attestation Results 1290 .--------------. CWT CWT .-------------------. 1291 | Attester-A |------------. .----------->| Relying Party V | 1292 '--------------' v | `-------------------' 1293 .--------------. JWT .------------. JWT .-------------------. 1294 | Attester-B |-------->| Verifier |-------->| Relying Party W | 1295 '--------------' | | `-------------------' 1296 .--------------. X.509 | | X.509 .-------------------. 1297 | Attester-C |-------->| |-------->| Relying Party X | 1298 '--------------' | | `-------------------' 1299 .--------------. TPM | | TPM .-------------------. 1300 | Attester-D |-------->| |-------->| Relying Party Y | 1301 '--------------' '------------' `-------------------' 1302 .--------------. other ^ | other .-------------------. 1303 | Attester-E |------------' '----------->| Relying Party Z | 1304 '--------------' `-------------------' 1306 Figure 9: Multiple Attesters and Relying Parties with Different 1307 Formats 1309 10. 
Freshness

1311 A Verifier or Relying Party may need to learn the point in time
1312 (i.e., the "epoch") at which Evidence or an Attestation Result was
1313 produced. This is essential in deciding whether the included Claims
1314 and their values can be considered fresh, meaning they still reflect
1315 the latest state of the Attester, and that any Attestation Result was
1316 generated using the latest Appraisal Policy for Evidence.

1318 Freshness is assessed based on the Appraisal Policy for Evidence or
1319 Attestation Results that compares the estimated epoch against an
1320 "expiry" threshold defined locally to that policy. There is,
1321 however, always a possible race condition: the state of the
1322 Attester and the appraisal policies might change immediately after
1323 the Evidence or Attestation Result was generated. The goal is merely
1324 to narrow their recentness to something the Verifier (for Evidence)
1325 or Relying Party (for Attestation Result) is willing to accept. Some
1326 flexibility on the freshness requirement is a key component for
1327 enabling caching and reuse of both Evidence and Attestation Results,
1328 which is especially valuable in cases where their computation uses a
1329 substantial part of the resource budget (e.g., energy in constrained
1330 devices).

1332 There are three common approaches for determining the epoch of
1333 Evidence or an Attestation Result.

1335 10.1. Explicit Timekeeping using Synchronized Clocks

1337 The first approach is to rely on synchronized and trustworthy clocks,
1338 and include a signed timestamp (see [I-D.birkholz-rats-tuda]) along
1339 with the Claims in the Evidence or Attestation Result. Timestamps
1340 can also be added on a per-Claim basis to distinguish the time of
1341 creation of Evidence or Attestation Result from the time that a
1342 specific Claim was generated. The clock's trustworthiness typically
1343 requires additional Claims about the signer's time synchronization
1344 mechanism.
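A minimal sketch of the corresponding freshness check follows; the "issued_at" claim name and the policy shape are invented here, and a real system would also verify the signature covering the timestamp:

```python
import time

# Non-normative sketch of the synchronized-clock approach: the
# appraising entity compares a signed timestamp embedded in the
# Evidence or Attestation Result against a locally defined "expiry"
# threshold from its appraisal policy.

def is_fresh(issued_at, expiry_seconds, now=None):
    """True if the artifact's epoch falls within the expiry window.

    A timestamp from the future (negative age) is rejected, since it
    would indicate clock skew or a forged timestamp.
    """
    if now is None:
        now = time.time()
    age = now - issued_at
    return 0 <= age <= expiry_seconds
```

The `now=None` parameter allows injecting a fixed clock for testing; in production the appraising entity's own synchronized clock is used.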
1346 10.2. Implicit Timekeeping using Nonces 1348 A second approach places the onus of timekeeping solely on the 1349 Verifier (for Evidence) or the Relying Party (for Attestation 1350 Results), and might be suitable, for example, in case the Attester 1351 does not have a reliable clock or time synchronization is otherwise 1352 impaired. In this approach, a non-predictable nonce is sent by the 1353 appraising entity, and the nonce is then signed and included along 1354 with the Claims in the Evidence or Attestation Result. After 1355 checking that the sent and received nonces are the same, the 1356 appraising entity knows that the Claims were signed after the nonce 1357 was generated. This allows associating a "rough" epoch to the 1358 Evidence or Attestation Result. In this case the epoch is said to be 1359 rough because: 1361 * The epoch applies to the entire claim set instead of a more 1362 granular association, and 1364 * The time between the creation of Claims and the collection of 1365 Claims is indistinguishable. 1367 10.3. Implicit Timekeeping using Epoch Handles 1369 A third approach relies on having epoch "handles" periodically sent 1370 to both the sender and receiver of Evidence or Attestation Results by 1371 some "Handle Distributor". 1373 Handles are different from nonces as they can be used more than once 1374 and can even be used by more than one entity at the same time. 1375 Handles are different from timestamps as they do not have to convey 1376 information about a point in time, i.e., they are not necessarily 1377 monotonically increasing integers. 1379 Like the nonce approach, this allows associating a "rough" epoch 1380 without requiring a reliable clock or time synchronization in order 1381 to generate or appraise the freshness of Evidence or Attestation 1382 Results. Only the Handle Distributor requires access to a clock so 1383 it can periodically send new epoch handles. 
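A minimal non-normative sketch of this scheme follows (the class names, retention depth, and handle format are illustrative assumptions; a real Handle Distributor would sign its handles and run on a timer, and the security of handles is discussed in Section 12.3):

```python
import secrets
from collections import deque

class HandleDistributor:
    """Issues a fresh, non-predictable handle for each new epoch.
    Illustrative only: a real distributor signs its handles and
    emits them periodically, driven by its own clock."""
    def __init__(self):
        self.current = secrets.token_hex(8)

    def next_epoch(self):
        self.current = secrets.token_hex(8)
        return self.current

class AppraisingEntity:
    """Retains the most recently received handles and accepts
    Evidence whose embedded handle matches any of them."""
    def __init__(self, depth=2):
        self.recent = deque(maxlen=depth)

    def receive_handle(self, handle):
        self.recent.append(handle)

    def accepts(self, handle_in_evidence):
        # Retaining more than one handle tolerates messages that were
        # in flight across an epoch transition.
        return handle_in_evidence in self.recent

distributor = HandleDistributor()
verifier = AppraisingEntity(depth=2)

h = distributor.current
verifier.receive_handle(h)           # epoch E begins for the Verifier
evidence_handle = h                  # Attester embeds h in its Evidence

verifier.receive_handle(distributor.next_epoch())  # epoch E' begins
print(verifier.accepts(evidence_handle))  # True: still within the window

verifier.receive_handle(distributor.next_epoch())  # epoch E'' begins
print(verifier.accepts(evidence_handle))  # False: handle now stale
```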
1385 The most recent handle is included in the produced Evidence or 1386 Attestation Results, and the appraising entity can compare the handle 1387 in received Evidence or Attestation Results against the latest handle 1388 it received from the Handle Distributor to determine if it is within 1389 the current epoch. An actual solution also needs to take into 1390 account race conditions when transitioning to a new epoch, such as by 1391 using a counter signed by the Handle Distributor as the handle, by 1392 including both the current and previous handles in messages and/or 1393 checks, by requiring retries in case of mismatching handles, or by 1394 buffering incoming messages that might be associated with a handle 1395 that the receiver has not yet obtained. 1397 More generally, in order to prevent an appraising entity from 1398 generating false negatives (e.g., discarding Evidence that is deemed 1399 stale even if it is not), the appraising entity should keep an "epoch 1400 window" consisting of the most recently received handles. The depth 1401 of such an epoch window is directly proportional to the maximum network 1402 propagation delay between the first entity to receive the handle and 1403 the last, and it is inversely proportional to the 1404 epoch duration. The appraising entity shall compare the handle 1405 carried in the received Evidence or Attestation Result with the 1406 handles in its epoch window to find a suitable match. 1408 Whereas the nonce approach typically requires the appraising entity 1409 to keep state for each nonce generated, the handle approach minimizes 1410 the state kept so that it is independent of the number of Attesters or 1411 Verifiers from which it expects to receive Evidence or Attestation 1412 Results, as long as all use the same Handle Distributor. 1414 10.4. Discussion 1416 Implicit and explicit timekeeping can be combined into hybrid 1417 mechanisms.
For example, if clocks exist and are considered 1418 trustworthy but are not synchronized, a nonce-based exchange may be 1419 used to determine the (relative) time offset between the involved 1420 peers, followed by any number of timestamp-based exchanges. 1422 It is important to note that the actual values in Claims might have 1423 been generated long before the Claims are signed. If so, it is the 1424 signer's responsibility to ensure that the values are still correct 1425 when they are signed. For example, values generated at boot time 1426 might have been saved to secure storage until network connectivity is 1427 established to the remote Verifier and a nonce is obtained. 1429 A more detailed discussion with examples appears in Section 16. 1431 For a discussion on the security of handles, see Section 12.3. 1433 11. Privacy Considerations 1435 The conveyance of Evidence and the resulting Attestation Results 1436 reveal a great deal of information about the internal state of a 1437 device as well as potentially any users of the device. In many 1438 cases, the whole point of the Attestation process is to provide 1439 reliable information about the type of the device and the firmware/ 1440 software that the device is running. This information might be 1441 particularly interesting to many attackers. For example, knowing 1442 that a device is running a weak version of firmware provides a way to 1443 aim attacks better. 1445 Many claims in Attestation Evidence and Attestation Results are 1446 potentially Personally Identifiable Information (PII), depending on the 1447 end-to-end use case of the attestation. Attestation that goes up the 1448 stack to include containers and applications may further reveal details 1449 about a specific system or user. 1451 In some cases, an attacker may be able to make inferences about 1452 attestations from the results or timing of the processing.
For 1453 example, an attacker might be able to infer the value of specific 1454 Claims if it knew that only certain values were accepted by the 1455 Relying Party. 1457 Evidence and Attestation Results data structures are expected to 1458 support integrity protection encoding (e.g., COSE, JOSE, X.509) and 1459 optionally might support confidentiality protection (e.g., COSE, 1460 JOSE). Therefore, if confidentiality protection is omitted or 1461 unavailable, the protocols that convey Evidence or Attestation 1462 Results are responsible for detailing what kinds of information are 1463 disclosed, and to whom they are exposed. 1465 Furthermore, because Evidence might contain sensitive information, 1466 Attesters are responsible for only sending such Evidence to trusted 1467 Verifiers. Some Attesters might want a stronger level of assurance 1468 of the trustworthiness of a Verifier before sending Evidence to it. 1469 In such cases, an Attester can first act as a Relying Party and ask 1470 for the Verifier's own Attestation Result, and appraise it just as 1471 a Relying Party would appraise an Attestation Result for any other 1472 purpose. 1474 Another approach to dealing with Evidence is to remove PII from the 1475 Evidence while still being able to verify that the Attester is one of 1476 a large set. This approach is often called "Direct Anonymous 1477 Attestation". See section 6.2 of [CCC-DeepDive] for more discussion. 1479 12. Security Considerations 1481 12.1. Attester and Attestation Key Protection 1483 Implementers need to pay close attention to the protection of the 1484 Attester and the factory processes for provisioning the Attestation 1485 key material. If either of these is compromised, the remote 1486 attestation becomes worthless because an attacker can forge Evidence 1487 or manipulate the Attesting Environment.
For example, a Target 1488 Environment should not be able to tamper with the Attesting 1489 Environment that measures it; this can be prevented by isolating the 1490 two environments from each other in some way. 1492 Remote attestation applies to use cases with a range of security 1493 requirements, so the protections discussed here range from low to 1494 high security, where low security may be only application or process 1495 isolation by the device's operating system, and high security involves 1496 specialized hardware to defend against physical attacks on a chip. 1498 12.1.1. On-Device Attester and Key Protection 1500 It is assumed that an Attesting Environment is sufficiently isolated 1501 from the Target Environment for which it collects Claims and signs them 1502 with an Attestation Key, so that the Target Environment cannot forge 1503 Evidence about itself. Such an isolated environment might be 1504 provided by a process, a dedicated chip, a TEE, a virtual machine, or 1505 another secure mode of operation. The Attesting Environment must be 1506 protected from unauthorized modification to ensure it behaves 1507 correctly. There must also be confidentiality so that the signing 1508 key is not captured and used elsewhere to forge Evidence. 1510 In many cases, the user or owner of the device must not be able to 1511 modify or exfiltrate keys from the Attesting Environment of the 1512 Attester. For example, the owner or user of a mobile phone or FIDO 1513 authenticator, having full control over the keys, might not be 1514 trusted to use the keys to report Evidence about the environment that 1515 protects the keys. The point of remote attestation is for the 1516 Relying Party to be able to trust the Attester even though it does not 1517 trust the user or owner. 1519 Some of the measures for a minimally protected system might include 1520 process or application isolation by a high-level operating system, 1521 and perhaps restricting access to root or system privilege.
For 1522 extremely simple single-use devices that don't use a protected-mode 1523 operating system, like a Bluetooth speaker, the isolation might only 1524 be the plastic housing for the device. 1526 Measures for a moderately protected system could include the use of a 1527 special restricted operating environment such as a Trusted Execution 1528 Environment (TEE). In this case, only security-oriented software 1529 has access to the Attester and key material. 1531 Measures for a highly protected system could include specialized 1532 hardware that is used to provide protection against chip decapping 1533 attacks, power supply and clock glitching, fault injection, and RF 1534 and power side-channel attacks. 1536 12.1.2. Attestation Key Provisioning Processes 1538 Attestation key provisioning is the process that occurs in the 1539 factory or elsewhere that establishes the signing key material on the 1540 device and the verification key material off the device. Sometimes 1541 this is referred to as "personalization". 1543 One way to provision a key is to first generate it external to the 1544 device and then copy the key onto the device. In this case, 1545 confidentiality of the generator, as well as the path over which the 1546 key is provisioned, is necessary. The manufacturer needs to take 1547 care to protect the key with measures consistent with its value. This 1548 can be achieved in a number of ways. 1550 Confidentiality can be achieved entirely with physical provisioning 1551 facility security involving no encryption at all. For low-security 1552 use cases, this might be simply locking doors and limiting personnel 1553 that can enter the facility. For high-security use cases, this might 1554 involve a special area of the facility accessible only to select 1555 security-trained personnel.
1557 Cryptography can also be used to support confidentiality, but keys 1558 that are used to then provision attestation keys must somehow have 1559 been provisioned securely beforehand (a recursive problem). 1561 In many cases both some physical security and some cryptography will 1562 be necessary and useful to establish confidentiality. 1564 Another way to provision the key material is to generate it on the 1565 device and export the verification key. If public key cryptography 1566 is being used, then only integrity is necessary. Confidentiality is 1567 not necessary. 1569 In all cases, the Attestation Key provisioning process must ensure 1570 that only attestation key material that is generated by a valid 1571 Endorser is established in Attesters and then configured correctly. 1572 For many use cases, this will involve physical security at the 1573 facility, to prevent unauthorized devices from being manufactured 1574 that may be counterfeit or incorrectly configured. 1576 12.2. Integrity Protection 1578 Any solution that conveys information used for security purposes, 1579 whether such information is in the form of Evidence, Attestation 1580 Results, Endorsements, or appraisal policy must support end-to-end 1581 integrity protection and replay attack prevention, and often also 1582 needs to support additional security properties, including: 1584 * end-to-end encryption, 1586 * denial of service protection, 1588 * authentication, 1590 * auditing, 1592 * fine grained access controls, and 1594 * logging. 1596 Section 10 discusses ways in which freshness can be used in this 1597 architecture to protect against replay attacks. 1599 To assess the security provided by a particular appraisal policy, it 1600 is important to understand the strength of the root of trust, e.g., 1601 whether it is mutable software, or firmware that is read-only after 1602 boot, or immutable hardware/ROM. 
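As a non-normative sketch of the first two of these requirements, end-to-end integrity protection and replay prevention (a symmetric HMAC over JSON is used purely for brevity; actual conveyance would use COSE or JOSE signing with an asymmetric Attestation Key, and all names here are illustrative):

```python
import hashlib
import hmac
import json
import secrets

# Illustrative shared key; a real system would use an asymmetric
# Attestation Key held inside the Attesting Environment.
KEY = secrets.token_bytes(32)

def protect(claims, nonce):
    """Bind a claims set to a freshness nonce and add an integrity tag."""
    body = json.dumps({"claims": claims, "nonce": nonce.hex()},
                      sort_keys=True).encode()
    tag = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def appraise(message, expected_nonce):
    """Reject on tampering first, then on a replayed (wrong) nonce."""
    tag = hmac.new(KEY, message["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, message["tag"]):
        return False  # integrity check failed
    return json.loads(message["body"])["nonce"] == expected_nonce.hex()

nonce = secrets.token_bytes(16)
evidence = protect({"fw_version": "1.2.3"}, nonce)
print(appraise(evidence, nonce))                    # True: intact and fresh
print(appraise(evidence, secrets.token_bytes(16)))  # False: replay detected
```

Checking the tag before parsing the body mirrors the usual "verify, then decode" ordering that signed attestation message formats follow.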
1604 It is also important that the appraisal policy was itself obtained 1605 securely. If an attacker can configure appraisal policies for a 1606 Relying Party or for a Verifier, then the integrity of the process is 1607 compromised. 1609 The security protecting conveyed information may be applied at 1610 different layers, whether by a conveyance protocol or an information 1611 encoding format. This architecture expects attestation messages 1612 (i.e., Evidence, Attestation Results, Endorsements, Reference Values, 1613 and Policies) to be end-to-end protected based on the role interaction 1614 context. For example, if an Attester produces Evidence that is 1615 relayed through some other entity that doesn't implement the Attester 1616 or the intended Verifier roles, then the relaying entity should not 1617 expect to have access to the Evidence. 1619 12.3. Handle-based Attestation 1621 Handles, described in Section 10.3, can be tampered with, dropped, 1622 delayed, and reordered by an attacker. 1624 An attacker could either be external to the distribution 1625 group or belong to it, for example, if one of the Attester entities has been 1626 compromised. 1628 An attacker who is able to tamper with handles can potentially lock 1629 all the participants into an epoch of its choice forever, 1630 effectively freezing time. This is problematic since it destroys the 1631 ability to ascertain the freshness of Evidence and Attestation Results. 1633 To mitigate this threat, the transport should be at least integrity 1634 protected and provide origin authentication. 1636 Selective dropping of handles is equivalent to pinning the victim 1637 node to a past epoch. An attacker could drop handles to only some 1638 entities and not others, which will typically result in a denial of 1639 service due to the permanent staleness of the Attestation Result or 1640 Evidence. 1642 Delaying or reordering handles is equivalent to manipulating the 1643 victim's timeline at will.
This ability could be used by a malicious 1644 actor (e.g., a compromised router) to mount a confusion attack where, 1645 for example, a Verifier is tricked into accepting Evidence coming 1646 from a past epoch as fresh, while in the meantime the Attester has 1647 been compromised. 1649 Reordering and dropping attacks are mitigated if the transport 1650 provides the ability to detect reordering and drops. However, the 1651 delay attack described above cannot be thwarted in this manner. 1653 13. IANA Considerations 1655 This document does not require any actions by IANA. 1657 14. Acknowledgments 1659 Special thanks go to Joerg Borchert, Nancy Cam-Winget, Jessica 1660 Fitzgerald-McKay, Diego Lopez, Laurence Lundblade, Paul Rowe, Hannes 1661 Tschofenig, Frank Xia, and David Wooten. 1663 15. Notable Contributions 1665 Thomas Hardjono created older versions of the terminology section in 1666 collaboration with Ned Smith. Eric Voit provided the conceptual 1667 separation between Attestation Provision Flows and Attestation 1668 Evidence Flows. Monty Wiseman created the content structure of the 1669 first three architecture drafts. Carsten Bormann provided many of 1670 the motivational building blocks with respect to the Internet Threat 1671 Model. 1673 16. Appendix A: Time Considerations 1675 The table below defines a number of relevant events, with an ID that 1676 is used in subsequent diagrams. The times of said events might be 1677 defined in terms of an absolute clock time such as Coordinated 1678 Universal Time, or might be defined relative to some other timestamp 1679 or timeticks counter. 1681 +====+============+=================================================+ 1682 | ID | Event | Explanation of event | 1683 +====+============+=================================================+ 1684 | VG | Value | A value to appear in a Claim was created.
| 1685 | | generated | In some cases, a value may have technically | 1686 | | | existed before an Attester became aware of | 1687 | | | it, but the Attester might have no idea how | 1688 | | | long it has had that value. In such a | 1689 | | | case, the Value created time is the time at | 1690 | | | which the Claim containing the copy of the | 1691 | | | value was created. | 1692 +----+------------+-------------------------------------------------+ 1693 | NS | Nonce sent | A nonce not predictable to an Attester | 1694 | | | (recentness & uniqueness) is sent to an | 1695 | | | Attester. | 1696 +----+------------+-------------------------------------------------+ 1697 | NR | Nonce | A nonce is relayed to an Attester by | 1698 | | relayed | another entity. | 1699 +----+------------+-------------------------------------------------+ 1700 | HR | Handle | A handle is successfully received and | 1701 | | received | processed by an entity. | 1702 +----+------------+-------------------------------------------------+ 1703 | EG | Evidence | An Attester creates Evidence from collected | 1704 | | generation | Claims. | 1705 +----+------------+-------------------------------------------------+ 1706 | ER | Evidence | A Relying Party relays Evidence to a | 1707 | | relayed | Verifier. | 1708 +----+------------+-------------------------------------------------+ 1709 | RG | Result | A Verifier appraises Evidence and generates | 1710 | | generation | an Attestation Result. | 1711 +----+------------+-------------------------------------------------+ 1712 | RR | Result | An Attester relays an Attestation | 1713 | | relayed | Result to a Relying Party. | 1714 +----+------------+-------------------------------------------------+ 1715 | RA | Result | The Relying Party appraises Attestation | 1716 | | appraised | Results.
| 1717 +----+------------+-------------------------------------------------+ 1718 | OP | Operation | The Relying Party performs some operation | 1719 | | performed | requested by the Attester. For example, | 1720 | | | acting upon some message just received | 1721 | | | across a session created earlier at | 1722 | | | time(RA). | 1723 +----+------------+-------------------------------------------------+ 1724 | RX | Result | An Attestation Result should no longer be | 1725 | | expiry | accepted, according to the Verifier that | 1726 | | | generated it. | 1727 +----+------------+-------------------------------------------------+ 1729 Table 1 1731 Using the table above, a number of hypothetical examples of how a 1732 solution might be built are illustrated below. 1733 This list is not intended to be complete, but is just 1734 representative enough to highlight various timing considerations. 1736 All times are relative to the local clocks, indicated by an "a" 1737 (Attester), "v" (Verifier), or "r" (Relying Party) suffix. 1739 Times with an appended Prime (') indicate a second instance of the 1740 same event. 1742 How and if clocks are synchronized depends upon the model. 1744 16.1. Example 1: Timestamp-based Passport Model Example 1746 The following example illustrates a hypothetical Passport Model 1747 solution that uses timestamps and requires roughly synchronized 1748 clocks between the Attester, Verifier, and Relying Party, which 1749 depends on using a secure clock synchronization mechanism. As a 1750 result, the receiver of a conceptual message containing a timestamp 1751 can directly compare it to its own clock and timestamps. 1753 .----------. .----------. .---------------.
1754 | Attester | | Verifier | | Relying Party | 1755 '----------' '----------' '---------------' 1756 time(VG_a) | | 1757 | | | 1758 ~ ~ ~ 1759 | | | 1760 time(EG_a) | | 1761 |------Evidence{time(EG_a)}------>| | 1762 | time(RG_v) | 1763 |<-----Attestation Result---------| | 1764 | {time(RG_v),time(RX_v)} | | 1765 ~ ~ 1766 | | 1767 |----Attestation Result{time(RG_v),time(RX_v)}-->time(RA_r) 1768 | | 1769 ~ ~ 1770 | | 1771 | time(OP_r) 1773 In the figures above and in subsequent sections, curly braces 1774 indicate containment. For example, the notation Evidence{foo} 1775 indicates that 'foo' is contained in the Evidence and is thus covered 1776 by its signature. 1778 The Verifier can check whether the Evidence is fresh when appraising 1779 it at time(RG_v) by checking "time(RG_v) - time(EG_a) < Threshold", 1780 where the Verifier's threshold is large enough to account for the 1781 maximum permitted clock skew between the Verifier and the Attester. 1783 If time(VG_a) is also included in the Evidence along with the claim 1784 value generated at that time, and the Verifier decides that it can 1785 trust the time(VG_a) value, the Verifier can also determine whether 1786 the claim value is recent by checking "time(RG_v) - time(VG_a) < 1787 Threshold". The threshold is decided by the Appraisal Policy for 1788 Evidence, and again needs to take into account the maximum permitted 1789 clock skew between the Verifier and the Attester. 1791 The Relying Party can check whether the Attestation Result is fresh 1792 when appraising it at time(RA_r) by checking "time(RA_r) - time(RG_v) 1793 < Threshold", where the Relying Party's threshold is large enough to 1794 account for the maximum permitted clock skew between the Relying 1795 Party and the Verifier. The result might then be used for some time 1796 (e.g., throughout the lifetime of a connection established at 1797 time(RA_r)). 
The Relying Party must be careful, however, to not 1798 allow continued use beyond the period for which it deems the 1799 Attestation Result to remain fresh enough. Thus, it might allow use 1800 (at time(OP_r)) as long as "time(OP_r) - time(RG_v) < Threshold". 1801 However, if the Attestation Result contains an expiry time time(RX_v) 1802 then it could explicitly check "time(OP_r) < time(RX_v)". 1804 16.2. Example 2: Nonce-based Passport Model Example 1806 The following example illustrates a hypothetical Passport Model 1807 solution that uses nonces instead of timestamps. Compared to the 1808 timestamp-based example, it requires an extra round trip to retrieve 1809 a nonce, and requires that the Verifier and Relying Party track state 1810 to remember the nonce for some period of time. 1812 The advantage is that it does not require that any clocks are 1813 synchronized. As a result, the receiver of a conceptual message 1814 containing a timestamp cannot directly compare it to its own clock or 1815 timestamps. Thus we use a suffix ("a" for Attester, "v" for 1816 Verifier, and "r" for Relying Party) on the IDs below indicating 1817 which clock generated them, since times from different clocks cannot 1818 be compared. Only the delta between two events from the sender can 1819 be used by the receiver. 1821 .----------. .----------. .---------------. 
1822 | Attester | | Verifier | | Relying Party | 1823 '----------' '----------' '---------------' 1824 time(VG_a) | | 1825 | | | 1826 ~ ~ ~ 1827 | | | 1828 |<--Nonce1---------------------time(NS_v) | 1829 time(EG_a) | | 1830 |---Evidence--------------------->| | 1831 | {Nonce1, time(EG_a)-time(VG_a)} | | 1832 | time(RG_v) | 1833 |<--Attestation Result------------| | 1834 | {time(RX_v)-time(RG_v)} | | 1835 ~ ~ 1836 | | 1837 |<--Nonce2-------------------------------------time(NS_r) 1838 time(RR_a) | 1839 |--[Attestation Result{time(RX_v)-time(RG_v)}, -->|time(RA_r) 1840 | Nonce2, time(RR_a)-time(EG_a)] | 1841 ~ ~ 1842 | | 1843 | time(OP_r) 1845 In this example solution, the Verifier can check whether the Evidence 1846 is fresh at "time(RG_v)" by verifying that "time(RG_v)-time(NS_v) < 1847 Threshold". 1849 The Verifier cannot, however, simply rely on a Nonce to determine 1850 whether the value of a claim is recent, since the claim value might 1851 have been generated long before the nonce was sent by the Verifier. 1852 However, if the Verifier decides that the Attester can be trusted to 1853 correctly provide the delta "time(EG_a)-time(VG_a)", then it can 1854 determine recency by checking "time(RG_v)-time(NS_v) + time(EG_a)- 1855 time(VG_a) < Threshold". 1857 Similarly if, based on an Attestation Result from a Verifier it 1858 trusts, the Relying Party decides that the Attester can be trusted to 1859 correctly provide time deltas, then it can determine whether the 1860 Attestation Result is fresh by checking "time(OP_r)-time(NS_r) + 1861 time(RR_a)-time(EG_a) < Threshold". Although the Nonce2 and 1862 "time(RR_a)-time(EG_a)" values cannot be inside the Attestation 1863 Result, they might be signed by the Attester such that the 1864 Attestation Result vouches for the Attester's signing capability. 1866 The Relying Party must still be careful, however, to not allow 1867 continued use beyond the period for which it deems the Attestation 1868 Result to remain valid. 
Thus, if the Attestation Result conveys a 1869 validity lifetime in terms of "time(RX_v)-time(RG_v)", then the 1870 Relying Party can check "time(OP_r)-time(NS_r) < time(RX_v)- 1871 time(RG_v)". 1873 16.3. Example 3: Handle-based Passport Model Example 1875 The example in Figure 10 illustrates a hypothetical Passport Model 1876 solution that uses handles instead of nonces or timestamps. 1878 The Handle Distributor broadcasts handle "H", which starts a new epoch 1879 "E" for a protocol participant upon reception at "time(HR)". 1881 The Attester generates Evidence incorporating handle "H" and conveys 1882 it to the Verifier. 1884 The Verifier appraises that the received handle "H" is "fresh" 1885 according to the definition provided in Section 10.3, whereby retries 1886 are required in the case of mismatching handles, and generates an 1887 Attestation Result. The Attestation Result is conveyed to the 1888 Attester. 1890 After the transmission of handle "H'", a new epoch "E'" is established 1891 when "H'" is received by each protocol participant. The Attester 1892 relays the Attestation Result obtained during epoch "E" (associated 1893 with handle "H") to the Relying Party using the handle for the 1894 current epoch "H'". If the Relying Party had not yet received "H'", 1895 then the Attestation Result would be rejected, but in this example, 1896 it is received. 1898 In the illustrated scenario, the handle for relaying an Attestation 1899 Result to the Relying Party is current, while a previous handle was 1900 used to generate the Evidence the Verifier appraised. This indicates that 1901 at least one epoch transition has occurred, and the Attestation Results 1902 may only be as fresh as the previous epoch. If the Relying Party 1903 remembers the previous handle H during an epoch window as discussed 1904 in Section 10.3, and the message is received during that window, the 1905 Attestation Result is accepted as fresh, and otherwise it is rejected 1906 as stale. 1908 .-------------.
1909 .----------. | Handle | .----------. .---------------. 1910 | Attester | | Distributor | | Verifier | | Relying Party | 1911 '----------' '-------------' '----------' '---------------' 1912 time(VG_a) | | | 1913 | | | | 1914 ~ ~ ~ ~ 1915 | | | | 1916 time(HR_a)<------H--+--H--------time(HR_v)----->time(HR_r) 1917 | | | | 1918 time(EG_a) | | | 1919 |---Evidence--------------------->| | 1920 | {H,time(EG_a)-time(VG_a)} | | 1921 | | | | 1922 | | time(RG_v) | 1923 |<--Attestation Result------------| | 1924 | {H,time(RX_v)-time(RG_v)} | | 1925 | | | | 1926 time(HR'_a)<-----H'-+--H'-------time(HR'_v)---->time(HR'_r) 1927 | | | | 1928 |---[Attestation Result--------------------->time(RA_r) 1929 | {H,time(RX_v)-time(RG_v)},H'] | | 1930 | | | | 1931 ~ ~ ~ ~ 1932 | | | | 1933 | | | time(OP_r) 1935 Figure 10: Handle-based Passport Model 1937 16.4. Example 4: Timestamp-based Background-Check Model Example 1939 The following example illustrates a hypothetical Background-Check 1940 Model solution that uses timestamps and requires roughly synchronized 1941 clocks between the Attester, Verifier, and Relying Party. 1943 .----------. .---------------. .----------. 1944 | Attester | | Relying Party | | Verifier | 1945 '----------' '---------------' '----------' 1946 time(VG_a) | | 1947 | | | 1948 ~ ~ ~ 1949 | | | 1950 time(EG_a) | | 1951 |----Evidence------->| | 1952 | {time(EG_a)} time(ER_r)--Evidence{time(EG_a)}->| 1953 | | time(RG_v) 1954 | time(RA_r)<-Attestation Result---| 1955 | | {time(RX_v)} | 1956 ~ ~ ~ 1957 | | | 1958 | time(OP_r) | 1960 The time considerations in this example are equivalent to those 1961 discussed under Example 1 above. 1963 16.5. Example 5: Nonce-based Background-Check Model Example 1965 The following example illustrates a hypothetical Background-Check 1966 Model solution that uses nonces and thus does not require that any 1967 clocks are synchronized. 
In this example solution, a nonce is 1968 generated by a Verifier at the request of a Relying Party, when the 1969 Relying Party needs to send one to an Attester. 1971 .----------. .---------------. .----------. 1972 | Attester | | Relying Party | | Verifier | 1973 '----------' '---------------' '----------' 1974 time(VG_a) | | 1975 | | | 1976 ~ ~ ~ 1977 | | | 1978 | |<-------Nonce-----------time(NS_v) 1979 |<---Nonce-----------time(NR_r) | 1980 time(EG_a) | | 1981 |----Evidence{Nonce}--->| | 1982 | time(ER_r)--Evidence{Nonce}--->| 1983 | | time(RG_v) 1984 | time(RA_r)<-Attestation Result-| 1985 | | {time(RX_v)-time(RG_v)} | 1986 ~ ~ ~ 1987 | | | 1988 | time(OP_r) | 1990 The Verifier can check whether the Evidence is fresh, and whether a 1991 claim value is recent, the same as in Example 2 above. 1993 However, unlike in Example 2, the Relying Party can use the Nonce to 1994 determine whether the Attestation Result is fresh, by verifying that 1995 "time(OP_r)-time(NR_r) < Threshold". 1997 The Relying Party must still be careful, however, to not allow 1998 continued use beyond the period for which it deems the Attestation 1999 Result to remain valid. Thus, if the Attestation Result conveys a 2000 validity lifetime in terms of "time(RX_v)-time(RG_v)", then the 2001 Relying Party can check "time(OP_r)-time(ER_r) < time(RX_v)- 2002 time(RG_v)". 2004 17. References 2006 17.1. Normative References 2008 [RFC7519] Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token 2009 (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015, 2010 <https://www.rfc-editor.org/info/rfc7519>. 2012 [RFC8392] Jones, M., Wahlstroem, E., Erdtman, S., and H. Tschofenig, 2013 "CBOR Web Token (CWT)", RFC 8392, DOI 10.17487/RFC8392, 2014 May 2018, <https://www.rfc-editor.org/info/rfc8392>. 2016 17.2. Informative References 2018 [CCC-DeepDive] 2019 Confidential Computing Consortium, "Confidential Computing 2020 Deep Dive", n.d. 2023 [CTAP] FIDO Alliance, "Client to Authenticator Protocol", n.d.
2028 [I-D.birkholz-rats-tuda] 2029 Fuchs, A., Birkholz, H., McDonald, I., and C. Bormann, 2030 "Time-Based Uni-Directional Attestation", Work in 2031 Progress, Internet-Draft, draft-birkholz-rats-tuda-04, 2032 13 January 2021, <https://datatracker.ietf.org/doc/html/draft-birkholz-rats-tuda-04>. 2035 [I-D.birkholz-rats-uccs] 2036 Birkholz, H., O'Donoghue, J., Cam-Winget, N., and C. 2037 Bormann, "A CBOR Tag for Unprotected CWT Claims Sets", 2038 Work in Progress, Internet-Draft, draft-birkholz-rats-uccs-02, 2039 2 December 2020, <https://datatracker.ietf.org/doc/html/draft-birkholz-rats-uccs-02>. 2042 [I-D.ietf-teep-architecture] 2043 Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler, 2044 "Trusted Execution Environment Provisioning (TEEP) 2045 Architecture", Work in Progress, Internet-Draft, 2046 draft-ietf-teep-architecture-13, 2 November 2020, 2047 <https://datatracker.ietf.org/doc/html/draft-ietf-teep-architecture-13>. 2050 [I-D.tschofenig-tls-cwt] 2051 Tschofenig, H. and M. Brossard, "Using CBOR Web Tokens 2052 (CWTs) in Transport Layer Security (TLS) and Datagram 2053 Transport Layer Security (DTLS)", Work in Progress, 2054 Internet-Draft, draft-tschofenig-tls-cwt-02, 13 July 2020, 2055 <https://datatracker.ietf.org/doc/html/draft-tschofenig-tls-cwt-02>. 2058 [OPCUA] OPC Foundation, "OPC Unified Architecture Specification, 2059 Part 2: Security Model, Release 1.03", OPC 10000-2, 2060 25 November 2015. 2064 [RFC4949] Shirey, R., "Internet Security Glossary, Version 2", 2065 FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007, 2066 <https://www.rfc-editor.org/info/rfc4949>. 2068 [RFC8322] Field, J., Banghart, S., and D. Waltermire, "Resource-Oriented 2069 Lightweight Information Exchange (ROLIE)", 2070 RFC 8322, DOI 10.17487/RFC8322, February 2018, 2071 <https://www.rfc-editor.org/info/rfc8322>. 2073 [strengthoffunction] 2074 NISC, "Strength of Function", n.d. 2078 [TCG-DICE] Trusted Computing Group, "DICE Certificate Profiles", 2079 n.d. 2083 [TCGarch] Trusted Computing Group, "Trusted Platform Module Library 2084 - Part 1: Architecture", 8 November 2019. 2088 [WebAuthN] W3C, "Web Authentication: An API for accessing Public Key 2089 Credentials", n.d.
2091 Contributors 2093 Monty Wiseman 2095 Email: montywiseman32@gmail.com 2097 Liang Xia 2099 Email: frank.xialiang@huawei.com 2101 Laurence Lundblade 2103 Email: lgl@island-resort.com 2105 Eliot Lear 2107 Email: elear@cisco.com 2109 Jessica Fitzgerald-McKay 2111 Sarah C. Helble 2113 Andrew Guinn 2115 Peter Loscocco 2117 Email: pete.loscocco@gmail.com 2119 Eric Voit 2120 Thomas Fossati 2122 Email: thomas.fossati@arm.com 2124 Paul Rowe 2126 Carsten Bormann 2128 Email: cabo@tzi.org 2130 Giri Mandyam 2132 Email: mandyam@qti.qualcomm.com 2134 Authors' Addresses 2136 Henk Birkholz 2137 Fraunhofer SIT 2138 Rheinstrasse 75 2139 64295 Darmstadt 2140 Germany 2142 Email: henk.birkholz@sit.fraunhofer.de 2144 Dave Thaler 2145 Microsoft 2146 United States of America 2148 Email: dthaler@microsoft.com 2150 Michael Richardson 2151 Sandelman Software Works 2152 Canada 2154 Email: mcr+ietf@sandelman.ca 2156 Ned Smith 2157 Intel Corporation 2158 United States of America 2160 Email: ned.smith@intel.com 2161 Wei Pan 2162 Huawei Technologies 2164 Email: william.panwei@huawei.com