2 RATS Working Group H. Birkholz 3 Internet-Draft Fraunhofer SIT 4 Intended status: Informational D. Thaler 5 Expires: 9 August 2021 Microsoft 6 M. Richardson 7 Sandelman Software Works 8 N. Smith 9 Intel 10 W. 
Pan 11 Huawei Technologies 12 5 February 2021 14 Remote Attestation Procedures Architecture 15 draft-ietf-rats-architecture-09 17 Abstract 19 In network protocol exchanges it is often the case that one entity 20 requires believable evidence about the operational state of a remote 21 peer. Such evidence is typically conveyed as claims about the peer's 22 software and hardware platform, and is subsequently appraised in 23 order to assess the peer's trustworthiness. The process of 24 generating and appraising this kind of evidence is known as remote 25 attestation. This document describes an architecture for remote 26 attestation procedures that generate, convey, and appraise evidence 27 about a peer's operational state. 29 Note to Readers 31 Discussion of this document takes place on the RATS Working Group 32 mailing list (rats@ietf.org), which is archived at 33 https://mailarchive.ietf.org/arch/browse/rats/ 34 (https://mailarchive.ietf.org/arch/browse/rats/). 36 Source for this draft and an issue tracker can be found at 37 https://github.com/ietf-rats-wg/architecture (https://github.com/ 38 ietf-rats-wg/architecture). 40 Status of This Memo 42 This Internet-Draft is submitted in full conformance with the 43 provisions of BCP 78 and BCP 79. 45 Internet-Drafts are working documents of the Internet Engineering 46 Task Force (IETF). Note that other groups may also distribute 47 working documents as Internet-Drafts. The list of current Internet- 48 Drafts is at https://datatracker.ietf.org/drafts/current/. 50 Internet-Drafts are draft documents valid for a maximum of six months 51 and may be updated, replaced, or obsoleted by other documents at any 52 time. It is inappropriate to use Internet-Drafts as reference 53 material or to cite them other than as "work in progress." 55 This Internet-Draft will expire on 9 August 2021. 57 Copyright Notice 59 Copyright (c) 2021 IETF Trust and the persons identified as the 60 document authors. All rights reserved. 
62 This document is subject to BCP 78 and the IETF Trust's Legal 63 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 64 license-info) in effect on the date of publication of this document. 65 Please review these documents carefully, as they describe your rights 66 and restrictions with respect to this document. Code Components 67 extracted from this document must include Simplified BSD License text 68 as described in Section 4.e of the Trust Legal Provisions and are 69 provided without warranty as described in the Simplified BSD License. 71 Table of Contents 73 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 74 2. Reference Use Cases . . . . . . . . . . . . . . . . . . . . . 4 75 2.1. Network Endpoint Assessment . . . . . . . . . . . . . . . 4 76 2.2. Confidential Machine Learning (ML) Model Protection . . . 5 77 2.3. Confidential Data Protection . . . . . . . . . . . . . . 5 78 2.4. Critical Infrastructure Control . . . . . . . . . . . . . 6 79 2.5. Trusted Execution Environment (TEE) Provisioning . . . . 6 80 2.6. Hardware Watchdog . . . . . . . . . . . . . . . . . . . . 6 81 2.7. FIDO Biometric Authentication . . . . . . . . . . . . . . 7 82 3. Architectural Overview . . . . . . . . . . . . . . . . . . . 7 83 3.1. Appraisal Policies . . . . . . . . . . . . . . . . . . . 9 84 3.2. Reference Values . . . . . . . . . . . . . . . . . . . . 9 85 3.3. Two Types of Environments of an Attester . . . . . . . . 9 86 3.4. Layered Attestation Environments . . . . . . . . . . . . 11 87 3.5. Composite Device . . . . . . . . . . . . . . . . . . . . 13 88 3.6. Implementation Considerations . . . . . . . . . . . . . . 15 89 4. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 15 90 4.1. Roles . . . . . . . . . . . . . . . . . . . . . . . . . . 15 91 4.2. Artifacts . . . . . . . . . . . . . . . . . . . . . . . . 16 92 5. Topological Patterns . . . . . . . . . . . . . . . . . . . . 18 93 5.1. Passport Model . . . . . . . . . . . . . . 
. . . . . . . 18 94 5.2. Background-Check Model . . . . . . . . . . . . . . . . . 19 95 5.3. Combinations . . . . . . . . . . . . . . . . . . . . . . 20 96 6. Roles and Entities . . . . . . . . . . . . . . . . . . . . . 21 97 7. Trust Model . . . . . . . . . . . . . . . . . . . . . . . . . 22 98 7.1. Relying Party . . . . . . . . . . . . . . . . . . . . . . 22 99 7.2. Attester . . . . . . . . . . . . . . . . . . . . . . . . 23 100 7.3. Relying Party Owner . . . . . . . . . . . . . . . . . . . 23 101 7.4. Verifier . . . . . . . . . . . . . . . . . . . . . . . . 24 102 7.5. Endorser, Reference Value Provider, and Verifier Owner . 25 103 8. Conceptual Messages . . . . . . . . . . . . . . . . . . . . . 26 104 8.1. Evidence . . . . . . . . . . . . . . . . . . . . . . . . 26 105 8.2. Endorsements . . . . . . . . . . . . . . . . . . . . . . 26 106 8.3. Attestation Results . . . . . . . . . . . . . . . . . . . 27 107 9. Claims Encoding Formats . . . . . . . . . . . . . . . . . . . 28 108 10. Freshness . . . . . . . . . . . . . . . . . . . . . . . . . . 29 109 10.1. Explicit Timekeeping using Synchronized Clocks . . . . . 30 110 10.2. Implicit Timekeeping using Nonces . . . . . . . . . . . 30 111 10.3. Implicit Timekeeping using Epoch Handles . . . . . . . . 30 112 10.4. Discussion . . . . . . . . . . . . . . . . . . . . . . . 31 113 11. Privacy Considerations . . . . . . . . . . . . . . . . . . . 32 114 12. Security Considerations . . . . . . . . . . . . . . . . . . . 33 115 12.1. Attester and Attestation Key Protection . . . . . . . . 33 116 12.1.1. On-Device Attester and Key Protection . . . . . . . 33 117 12.1.2. Attestation Key Provisioning Processes . . . . . . . 34 118 12.2. Integrity Protection . . . . . . . . . . . . . . . . . . 35 119 12.3. Handle-based Attestation . . . . . . . . . . . . . . . . 36 120 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 36 121 14. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 37 122 15. 
Notable Contributions . . . . . . . . . . . . . . . . . . . . 37 123 16. Appendix A: Time Considerations . . . . . . . . . . . . . . . 37 124 16.1. Example 1: Timestamp-based Passport Model Example . . . 38 125 16.2. Example 2: Nonce-based Passport Model Example . . . . . 40 126 16.3. Example 3: Handle-based Passport Model Example . . . . . 42 127 16.4. Example 4: Timestamp-based Background-Check Model 128 Example . . . . . . . . . . . . . . . . . . . . . . . . 43 129 16.5. Example 5: Nonce-based Background-Check Model Example . 44 130 17. References . . . . . . . . . . . . . . . . . . . . . . . . . 45 131 17.1. Normative References . . . . . . . . . . . . . . . . . . 45 132 17.2. Informative References . . . . . . . . . . . . . . . . . 45 133 Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . 47 134 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 48 136 1. Introduction 138 In Remote Attestation Procedures (RATS), one peer (the "Attester") 139 produces believable information about itself - Evidence - to enable a 140 remote peer (the "Relying Party") to decide whether to consider that 141 Attester a trustworthy peer or not. RATS are facilitated by an 142 additional vital party, the Verifier. 144 The Verifier appraises Evidence via appraisal policies and creates 145 the Attestation Results to support Relying Parties in their decision 146 process. This document defines a flexible architecture consisting of 147 attestation roles and their interactions via conceptual messages. 148 Additionally, this document defines a universal set of terms that can 149 be mapped to various existing and emerging Remote Attestation 150 Procedures. Common topological models and the data flows associated 151 with them, such as the "Passport Model" and the "Background-Check 152 Model" are illustrated. 
The purpose is to define useful terminology 153 for attestation and enable readers to map their solution architecture 154 to the canonical attestation architecture provided here. Having a 155 common terminology that provides well-understood meanings for common 156 themes such as roles, device composition, topological models, and 157 appraisal is vital for semantic interoperability across solutions and 158 platforms involving multiple vendors and providers. 160 Amongst other things, this document is about trust and 161 trustworthiness. Trust is a choice one makes about another system. 162 Trustworthiness is a quality of the other system that can be used 163 in making one's decision to trust it or not. This is a subtle 164 difference, and being familiar with it is crucial for 165 using this document. Additionally, the concepts of freshness and 166 trust relationships with respect to RATS are elaborated on to enable 167 implementers to choose appropriate solutions to compose their Remote 168 Attestation Procedures. 170 2. Reference Use Cases 172 This section covers a number of representative use cases for remote 173 attestation, independent of specific solutions. The purpose is to 174 provide motivation for various aspects of the architecture presented 175 in this draft. Many other use cases exist, and this document does 176 not intend to provide a complete list, only a set of use cases 177 that collectively cover all the functionality required in the 178 architecture. 180 Each use case includes a description followed by a summary of the 181 Attester and Relying Party roles. 183 2.1. Network Endpoint Assessment 185 Network operators want a trustworthy report that includes identity 186 and version information about the hardware and software on the 187 machines attached to their network, for purposes such as inventory, 188 audit, anomaly detection, record maintenance and/or trending reports 189 (logging). 
The network operator may also want a policy by which full 190 access is only granted to devices that meet some definition of 191 hygiene, and so wants to get claims about such information and verify 192 its validity. Remote attestation is desired to prevent vulnerable or 193 compromised devices from getting access to the network and 194 potentially harming others. 196 Typically, solutions start with a specific component (called a "root 197 of trust") that provides device identity and protected storage for 198 measurements. The system components perform a series of measurements 199 that may be signed by the root of trust, considered as Evidence about 200 the hardware, firmware, BIOS, software, etc. that is present. 202 Attester: A device desiring access to a network 204 Relying Party: Network equipment such as a router, switch, or access 205 point, responsible for admission of the device into the network 207 2.2. Confidential Machine Learning (ML) Model Protection 209 A device manufacturer wants to protect its intellectual property. 210 This is primarily the ML model it developed and runs in the devices 211 purchased by its customers. The goals for the protection include 212 preventing attackers, potentially the customer themselves, from 213 seeing the details of the model. 215 This typically works by having some protected environment in the 216 device go through a remote attestation with some manufacturer service 217 that can assess its trustworthiness. If remote attestation succeeds, 218 then the manufacturer service releases either the model, or a key to 219 decrypt a model the Attester already has in encrypted form, to the 220 requester. 222 Attester: A device desiring to run an ML model 224 Relying Party: A server or service holding ML models it desires to 225 protect 227 2.3. 
Confidential Data Protection 229 This is a generalization of the ML model use case above, where the 230 data can be any highly confidential data, such as health data about 231 customers, payroll data about employees, future business plans, etc. 232 As part of the attestation procedure, an assessment is made against a 233 set of policies to evaluate the state of the system that is 234 requesting the confidential data. Attestation is desired to prevent 235 leaking data to compromised devices. 237 Attester: An entity desiring to retrieve confidential data 239 Relying Party: An entity that holds confidential data for release to 240 authorized entities 242 2.4. Critical Infrastructure Control 244 In this use case, potentially harmful physical equipment (e.g., power 245 grid, traffic control, hazardous chemical processing, etc.) is 246 connected to a network. The organization managing such 247 infrastructure needs to ensure that only authorized code and users 248 can control such processes, and that these processes are protected 249 from unauthorized manipulation or other threats. When a protocol 250 operation can affect a component of a critical system, the device 251 attached to the critical equipment requires some assurances depending 252 on the security context, including that the requesting device or 253 application has not been compromised and that the requesters and actors 254 act on applicable policies. As such, remote attestation can be used 255 to accept commands only from requesters that are within policy. 257 Attester: A device or application wishing to control physical 258 equipment 260 Relying Party: A device or application connected to potentially 261 dangerous physical equipment (hazardous chemical processing, 262 traffic control, power grid, etc.) 264 2.5. Trusted Execution Environment (TEE) Provisioning 266 A "Trusted Application Manager (TAM)" server is responsible for 267 managing the applications running in the TEE of a client device. 
To 268 do this, the TAM wants to assess the state of a TEE, or of 269 applications in the TEE, of a client device. The TEE conducts a 270 remote attestation procedure with the TAM, which can then decide 271 whether the TEE is already in compliance with the TAM's latest 272 policy, or if the TAM needs to uninstall, update, or install approved 273 applications in the TEE to bring it back into compliance with the 274 TAM's policy. 276 Attester: A device with a trusted execution environment capable of 277 running trusted applications that can be updated 279 Relying Party: A Trusted Application Manager 281 2.6. Hardware Watchdog 283 There is a class of malware that holds a device hostage and does not 284 allow it to reboot to prevent updates from being applied. This can 285 be a significant problem, because it allows a fleet of devices to be 286 held hostage for ransom. 288 A solution to this problem is a watchdog timer implemented in a 289 protected environment such as a Trusted Platform Module (TPM), as 290 described in [TCGarch] section 43.3. If the watchdog does not 291 receive regular, and fresh, Attestation Results as to the system's 292 health, then it forces a reboot. 294 Attester: The device that should be protected from being held 295 hostage for a long period of time 297 Relying Party: A watchdog capable of triggering a procedure that 298 resets a device into a known, good operational state. 300 2.7. FIDO Biometric Authentication 302 In the Fast IDentity Online (FIDO) protocol [WebAuthN], [CTAP], the 303 device in the user's hand authenticates the human user, whether by 304 biometrics (such as fingerprints), or by PIN and password. FIDO 305 authentication puts a large amount of trust in the device compared to 306 typical password authentication because it is the device that 307 verifies the biometric, PIN and password inputs from the user, not 308 the server. 
For the Relying Party to know that the authentication is 309 trustworthy, the Relying Party needs to know that the Authenticator 310 part of the device is trustworthy. The FIDO protocol employs remote 311 attestation for this. 313 The FIDO protocol supports several remote attestation protocols and a 314 mechanism by which new ones can be registered and added. Remote 315 attestation defined by RATS is thus a candidate for use in the FIDO 316 protocol. 318 Other biometric authentication protocols such as the Chinese IFAA 319 standard and WeChat Pay as well as Google Pay make use of attestation 320 in one form or another. 322 Attester: Every FIDO Authenticator contains an Attester. 324 Relying Party: Any web site, mobile application back-end, or service 325 that relies on authentication data based on biometric information. 327 3. Architectural Overview 329 Figure 1 depicts the data that flows between different roles, 330 independent of protocol or use case. 332 ************ ************* ************ ***************** 333 * Endorser * * Reference * * Verifier * * Relying Party * 334 ************ * Value * * Owner * * Owner * 335 | * Provider * ************ ***************** 336 | ************* | | 337 | | | | 338 |Endorsements |Reference |Appraisal |Appraisal 339 | |Values |Policy |Policy for 340 | | |for |Attestation 341 .-----------. | |Evidence |Results 342 | | | | 343 | | | | 344 v v v | 345 .---------------------------. | 346 .----->| Verifier |------. | 347 | '---------------------------' | | 348 | | | 349 | Attestation| | 350 | Results | | 351 | Evidence | | 352 | | | 353 | v v 354 .----------. .---------------. 355 | Attester | | Relying Party | 356 '----------' '---------------' 358 Figure 1: Conceptual Data Flow 360 An Attester creates Evidence that is conveyed to a Verifier. 
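The Figure 1 data flow can be sketched as three cooperating role functions. This is a minimal illustrative sketch only: the function names, the claim and nonce shapes, and the equality-only policy are assumptions made for illustration, not an API defined by this architecture.

```python
# Illustrative sketch of the Figure 1 data flow; all names and data
# shapes here are hypothetical, not defined by this architecture.

def attester_create_evidence(claims, nonce):
    """Attester: bundle collected Claims (plus a freshness nonce)
    into Evidence for conveyance to a Verifier."""
    return {"claims": claims, "nonce": nonce}

def verifier_appraise(evidence, reference_values, policy):
    """Verifier: apply an Appraisal Policy for Evidence, using
    Reference Values, and produce an Attestation Result."""
    ok = all(policy(name, value, reference_values)
             for name, value in evidence["claims"].items())
    return {"trustworthy": ok, "nonce": evidence["nonce"]}

def relying_party_decide(attestation_result, expected_nonce):
    """Relying Party: apply its own appraisal policy to the
    Attestation Result (here: trustworthy and fresh)."""
    return (attestation_result["trustworthy"]
            and attestation_result["nonce"] == expected_nonce)

# A trivial Appraisal Policy for Evidence: each claim must equal
# its Reference Value.
def equality_policy(name, value, refs):
    return refs.get(name) == value

nonce = "nonce-123"
evidence = attester_create_evidence({"bios_version": "1.2"}, nonce)
result = verifier_appraise(evidence, {"bios_version": "1.2"},
                           equality_policy)
print(relying_party_decide(result, nonce))  # -> True
```

A real deployment would convey each message over an attestation protocol and protect it cryptographically; this sketch shows only which role produces and consumes which conceptual message.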
362 The Verifier uses the Evidence, any Reference Values from Reference 363 Value Providers, and any Endorsements from Endorsers, by applying an 364 Appraisal Policy for Evidence to assess the trustworthiness of the 365 Attester, and generates Attestation Results for use by Relying 366 Parties. The Appraisal Policy for Evidence might be obtained from an 367 Endorser along with the Endorsements, and/or might be obtained via 368 some other mechanism such as being configured in the Verifier by the 369 Verifier Owner. 371 The Relying Party uses Attestation Results by applying its own 372 appraisal policy to make application-specific decisions such as 373 authorization decisions. The Appraisal Policy for Attestation 374 Results is configured in the Relying Party by the Relying Party 375 Owner, and/or is programmed into the Relying Party. 377 3.1. Appraisal Policies 379 The Verifier, when appraising Evidence, or the Relying Party, when 380 appraising Attestation Results, checks the values of some claims 381 against constraints specified in its appraisal policy. Such 382 constraints might involve a comparison for equality against a 383 Reference Value, or a check for being in a range bounded by Reference 384 Values, or membership in a set of Reference Values, or a check 385 against values in other claims, or any other test. 387 3.2. Reference Values 389 Reference Values used in appraisal come from a Reference Value 390 Provider and are then used by the appraisal policy. They might be 391 conveyed in any number of ways, including: * as part of the appraisal 392 policy itself, if the Verifier Owner either acquires Reference 393 Values from a Reference Value Provider or is itself a Reference Value 394 Provider; * as part of an Endorsement, if the Endorser either 395 acquires Reference Values from a Reference Value Provider or is 396 itself a Reference Value Provider; or * via separate communication. 
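The constraint types enumerated in Section 3.1 (equality against a Reference Value, a range bounded by Reference Values, set membership, and checks against values in other claims) can be sketched as simple predicates. The claim names and Reference Values below are hypothetical, chosen only to illustrate each kind of check.

```python
# Hypothetical appraisal-policy constraint checks; claim names and
# Reference Values are illustrative only.

def check_equality(value, reference):
    return value == reference

def check_range(value, lower, upper):
    return lower <= value <= upper

def check_membership(value, reference_set):
    return value in reference_set

def check_against_claim(claims, name_a, name_b):
    # A check against values in other claims, e.g. two reported
    # measurements that must agree with each other.
    return claims[name_a] == claims[name_b]

claims = {"pcr0": "a1b2", "fw_version": 7,
          "boot_mode": "secure", "reported_mode": "secure"}
ok = (check_equality(claims["pcr0"], "a1b2")
      and check_range(claims["fw_version"], 5, 9)
      and check_membership(claims["boot_mode"], {"secure", "measured"})
      and check_against_claim(claims, "boot_mode", "reported_mode"))
print(ok)  # -> True
```

As Section 3.2 notes, the architecture deliberately does not standardize the data format of Reference Values or the means of comparison; a sketch like this is one possible realization, not a defined format.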
398 The actual data format and semantics of any Reference Values are 399 specific to claims and implementations. This architecture document 400 does not define any general-purpose format for them or general means 401 for comparison. 403 3.3. Two Types of Environments of an Attester 405 As shown in Figure 2, an Attester consists of at least one Attesting 406 Environment and at least one Target Environment. In some 407 implementations, the Attesting and Target Environments might be 408 combined. Other implementations might have multiple Attesting and 409 Target Environments, such as in the examples described in more detail 410 in Section 3.4 and Section 3.5. Other examples may exist, and 411 the examples discussed could be combined into even more complex 412 implementations. 414 .--------------------------------. 415 | | 416 | Verifier | 417 | | 418 '--------------------------------' 419 ^ 420 | 421 .-------------------------|----------. 422 | | | 423 | .----------------. | | 424 | | Target | | | 425 | | Environment | | | 426 | | | | Evidence | 427 | '----------------' | | 428 | | | | 429 | | | | 430 | Collect | | | 431 | Claims | | | 432 | | | | 433 | v | | 434 | .-------------. | 435 | | Attesting | | 436 | | Environment | | 437 | | | | 438 | '-------------' | 439 | Attester | 440 '------------------------------------' 442 Figure 2: Two Types of Environments 444 Claims are collected from Target Environments. That is, Attesting 445 Environments collect the values and the information to be represented 446 in Claims, by reading system registers and variables, calling into 447 subsystems, taking measurements on code, memory, or other 448 security-related assets of the Target Environment. Attesting Environments 449 then format the claims appropriately, and typically use key material 450 and cryptographic functions, such as signing or cipher algorithms, to 451 create Evidence. 
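The collect, format, and sign steps described above can be sketched as follows. This is a hypothetical sketch, not a defined format: HMAC over a JSON payload stands in for whatever signing primitive and serialization a real Attesting Environment would use, and the key, claim names, and function names are all illustrative assumptions.

```python
import hashlib
import hmac
import json

# Stands in for protected key material held by the Attesting
# Environment; a real device would use an attestation key that
# never leaves protected storage.
ATTESTATION_KEY = b"device-unique-secret"

def collect_claims():
    # A real Attesting Environment would read system registers,
    # measure code, call into subsystems, etc.
    return {"fw_hash": "deadbeef", "secure_boot": True}

def create_evidence(claims, nonce):
    """Format the Claims and sign them to create Evidence."""
    payload = json.dumps({"claims": claims, "nonce": nonce},
                         sort_keys=True)
    tag = hmac.new(ATTESTATION_KEY, payload.encode(),
                   hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def appraise_integrity(evidence):
    """Verifier-side check that the Evidence is intact and was
    produced by the holder of the attestation key."""
    expected = hmac.new(ATTESTATION_KEY, evidence["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, evidence["tag"])

evidence = create_evidence(collect_claims(), nonce="n-42")
print(appraise_integrity(evidence))  # -> True
```

Note that HMAC requires both sides to share the key; real remote attestation schemes typically use asymmetric signatures so that the Verifier needs only the public key, established via a trust anchor store or an Endorsement.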
There is no limit to or requirement on the types of 452 hardware or software environments that can be used to implement an 453 Attesting Environment, for example: Trusted Execution Environments 454 (TEEs), embedded Secure Elements (eSEs), Trusted Platform Modules 455 (TPMs), or BIOS firmware. 457 An arbitrary execution environment may not, by default, be capable of 458 claims collection for a given Target Environment. Execution 459 environments that are designed specifically to be capable of claims 460 collection are referred to in this document as Attesting 461 Environments. For example, a TPM doesn't actively collect claims 462 itself; instead, it requires another component to feed various values 463 to the TPM. Thus, an Attesting Environment in such a case would be 464 the combination of the TPM together with whatever component is 465 feeding it the measurements. 467 3.4. Layered Attestation Environments 469 By definition, the Attester role generates Evidence. An Attester may 470 consist of one or more nested environments (layers). The root layer 471 of an Attester includes at least one root of trust. In order to 472 appraise Evidence generated by an Attester, the Verifier needs to 473 trust the Attester's root of trust. Trust in the Attester's root of 474 trust can be established either directly (e.g., the Verifier puts the 475 root of trust's public key into its trust anchor store) or 476 transitively via an Endorser (e.g., the Verifier puts the Endorser's 477 public key into its trust anchor store). In layered attestation, a 478 root of trust is the initial Attesting Environment. Claims can be 479 collected from or about each layer. The corresponding Claims can be 480 structured in a nested fashion that reflects the nesting of the 481 Attester's layers. Normally, Claims are not self-asserted; rather, a 482 previous layer acts as the Attesting Environment for the next layer. 483 Claims about a root of trust typically are asserted by Endorsers. 
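The layer-by-layer collection described above can be sketched as a measurement chain: each layer records a digest of the next layer before starting it, and the Verifier appraises the chain against Reference Values. Trust in the root layer itself comes from the trust anchor store or an Endorsement, not from this check. The layer names, images, and hash-based "measurements" below are illustrative assumptions.

```python
import hashlib

def measure(image):
    # A "measurement" here is simply a digest of the next layer's code.
    return hashlib.sha256(image).hexdigest()

bootloader_image = b"bootloader code"
kernel_image = b"kernel code"

# Layer A (the root of trust) measures B before starting it; B, once
# running, measures C.  Claims about each layer are recorded by the
# previous layer, so a layer cannot alter Claims about itself.
layered_claims = [
    {"target": "B", "measured_by": "A",
     "digest": measure(bootloader_image)},
    {"target": "C", "measured_by": "B",
     "digest": measure(kernel_image)},
]

# Verifier side: appraise the chain against Reference Values from a
# Reference Value Provider.
reference_values = {"B": measure(bootloader_image),
                    "C": measure(kernel_image)}
ok = all(c["digest"] == reference_values[c["target"]]
         for c in layered_claims)
print(ok)  # -> True
```

In a real layered Attester, each set of Claims would also be signed by (or tamper-protected within) the previous layer's Attesting Environment, as the following sections describe.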
485 The device illustrated in Figure 3 includes (A) a BIOS stored in 486 read-only memory, (B) an updatable bootloader, and (C) an operating 487 system kernel. 489 .----------. .----------. 490 | | | | 491 | Endorser |------------------->| Verifier | 492 | | Endorsements | | 493 '----------' for A, B, and C '----------' 494 ^ 495 .------------------------------------. | 496 | | | 497 | .---------------------------. | | 498 | | Target | | | Layered 499 | | Environment | | | Evidence 500 | | C | | | for 501 | '---------------------------' | | B and C 502 | Collect | | | 503 | claims | | | 504 | .---------------|-----------. | | 505 | | Target v | | | 506 | | Environment .-----------. | | | 507 | | B | Attesting | | | | 508 | | |Environment|-----------' 509 | | | B | | | 510 | | '-----------' | | 511 | | ^ | | 512 | '---------------------|-----' | 513 | Collect | | Evidence | 514 | claims v | for B | 515 | .-----------. | 516 | | Attesting | | 517 | |Environment| | 518 | | A | | 519 | '-----------' | 520 | | 521 '------------------------------------' 523 Figure 3: Layered Attester 525 Attesting Environment A, the read-only BIOS in this example, has to 526 ensure the integrity of the bootloader (Target Environment B). There 527 are potentially multiple kernels to boot, and the decision is up to 528 the bootloader. Only a bootloader with intact integrity will make an 529 appropriate decision. Therefore, the Claims relating to the 530 integrity of the bootloader have to be measured securely. At this 531 stage of the boot-cycle of the device, the Claims collected typically 532 cannot be composed into Evidence. 534 After the boot sequence is started, the BIOS conducts the most 535 important and defining feature of layered attestation, which is that 536 the successfully measured Target Environment B now becomes (or 537 contains) an Attesting Environment for the next layer. This 538 procedure in Layered Attestation is sometimes called "staging". 
It 539 is important that the new Attesting Environment B not be able to 540 alter any Claims about its own Target Environment B. This can be 541 ensured by having those Claims be either signed by Attesting Environment 542 A or stored in an untamperable manner by Attesting Environment A. 544 Continuing with this example, the bootloader's Attesting Environment 545 B is now in charge of collecting Claims about Target Environment C, 546 which in this example is the kernel to be booted. The final Evidence 547 thus contains two sets of Claims: one set about the bootloader as 548 measured and signed by the BIOS, plus a set of Claims about the 549 kernel as measured and signed by the bootloader. 551 This example could be extended further by making the kernel become 552 another Attesting Environment for an application as another Target 553 Environment. This would result in a third set of Claims in the 554 Evidence pertaining to that application. 556 The essence of this example is a cascade of staged environments. 557 Each environment has the responsibility of measuring the next 558 environment before the next environment is started. In general, the 559 number of layers may vary by device or implementation, and an 560 Attesting Environment might even have multiple Target Environments 561 that it measures, rather than only one as shown in Figure 3. 563 3.5. Composite Device 565 A Composite Device is an entity composed of multiple sub-entities 566 such that its trustworthiness has to be determined by the appraisal 567 of all these sub-entities. 569 Each sub-entity has at least one Attesting Environment collecting the 570 claims from at least one Target Environment and then 571 generates Evidence about its trustworthiness. Therefore, each 572 sub-entity can be called an Attester. Among all the Attesters, 573 only some may have the ability to communicate with the Verifier, 574 while others do not. 
576 For example, a carrier-grade router consists of a chassis and 577 multiple slots. The trustworthiness of the router depends on all its 578 slots' trustworthiness. Each slot has an Attesting Environment such 579 as a TEE collecting the claims of its boot process, after which it 580 generates Evidence from the claims. Among these slots, only a main 581 slot can communicate with the Verifier while other slots cannot. But 582 other slots can communicate with the main slot by the links between 583 them inside the router. So the main slot collects the Evidence of 584 other slots, produces the final Evidence of the whole router and 585 conveys the final Evidence to the Verifier. Therefore the router is 586 a Composite Device, each slot is an Attester, and the main slot is 587 the lead Attester. 589 Another example is a multi-chassis router composed of multiple single 590 carrier-grade routers. The multi-chassis router provides higher 591 throughput by interconnecting multiple routers and can be logically 592 treated as one router for simpler management. A multi-chassis router 593 provides a management point that connects to the Verifier. Other 594 routers are only connected to the main router by the network cables, 595 and therefore they are managed and appraised via this main router's 596 help. So, in this case, the multi-chassis router is the Composite 597 Device, each router is an Attester and the main router is the lead 598 Attester. 600 Figure 4 depicts the conceptual data flow for a Composite Device. 602 .-----------------------------. 603 | Verifier | 604 '-----------------------------' 605 ^ 606 | 607 | Evidence of 608 | Composite Device 609 | 610 .----------------------------------|-------------------------------. 611 | .--------------------------------|-----. .------------. | 612 | | Collect .------------. | | | | 613 | | Claims .--------->| Attesting |<--------| Attester B |-. | 614 | | | |Environment | | '------------. | | 615 | | .----------------. 
| |<----------| Attester C |-. | 616 | | | Target | | | | '------------' | | 617 | | | Environment(s) | | |<------------| ... | | 618 | | | | '------------' | Evidence '------------' | 619 | | '----------------' | of | 620 | | | Attesters | 621 | | lead Attester A | (via Internal Links or | 622 | '--------------------------------------' Network Connections) | 623 | | 624 | Composite Device | 625 '------------------------------------------------------------------' 627 Figure 4: Composite Device 629 In the Composite Device, each Attester generates its own Evidence by 630 its Attesting Environment(s) collecting the claims from its Target 631 Environment(s). The lead Attester collects the Evidence from the 632 other Attesters and conveys it to a Verifier. Collection of Evidence 633 from sub-entities may itself be a form of Claims collection that 634 results in Evidence asserted by the lead Attester. The lead Attester 635 generates the Evidence about the layout of the Composite Device, 636 while sub-Attesters generate Evidence about their respective modules. 638 In this situation, the trust model described in Section 7 is also 639 suitable for this inside Verifier. 641 3.6. Implementation Considerations 643 An entity can take on multiple RATS roles (e.g., Attester, Verifier, 644 Relying Party, etc.) at the same time. Multiple entities can 645 cooperate to implement a single RATS role as well. The combination 646 of roles and entities can be arbitrary. For example, in the 647 Composite Device scenario, the entity inside the lead Attester can 648 also take on the role of a Verifier, and the outer entity of Verifier 649 can take on the role of a Relying Party. After collecting the 650 Evidence of other Attesters, this inside Verifier uses Endorsements 651 and appraisal policies (obtained the same way as any other Verifier) 652 in the verification process to generate Attestation Results. 
The 653 inside Verifier then conveys the Attestation Results of other 654 Attesters to the outside Verifier, whether in the same conveyance 655 protocol as the Evidence or not. 657 4. Terminology 659 This document uses the following terms. 661 4.1. Roles 663 Attester: A role performed by an entity (typically a device) whose 664 Evidence must be appraised in order to infer the extent to which 665 the Attester is considered trustworthy, such as when deciding 666 whether it is authorized to perform some operation. 668 Produces: Evidence 670 Relying Party: A role performed by an entity that depends on the 671 validity of information about an Attester, for purposes of 672 reliably applying application-specific actions. Compare /relying 673 party/ in [RFC4949]. 675 Consumes: Attestation Results 677 Verifier: A role performed by an entity that appraises the validity 678 of Evidence about an Attester and produces Attestation Results to 679 be used by a Relying Party. 681 Consumes: Evidence, Reference Values, Endorsements, Appraisal 682 Policy for Evidence 684 Produces: Attestation Results 686 Relying Party Owner: A role performed by an entity (typically an 687 administrator) that is authorized to configure Appraisal Policy 688 for Attestation Results in a Relying Party. 690 Produces: Appraisal Policy for Attestation Results 692 Verifier Owner: A role performed by an entity (typically an 693 administrator) that is authorized to configure Appraisal Policy 694 for Evidence in a Verifier. 696 Produces: Appraisal Policy for Evidence 698 Endorser: A role performed by an entity (typically a manufacturer) 699 whose Endorsements help Verifiers appraise the authenticity of 700 Evidence. 702 Produces: Endorsements 704 Reference Value Provider: A role performed by an entity (typically a 705 manufacturer) whose Reference Values help Verifiers appraise 706 Evidence to determine if acceptable known Claims have been 707 recorded by the Attester. 709 Produces: Reference Values 711 4.2.
Artifacts 713 Claim: A piece of asserted information, often in the form of a name/ 714 value pair. Claims make up the usual structure of Evidence and 715 other RATS artifacts. Compare /claim/ in [RFC7519]. 717 Endorsement: A secure statement that an Endorser vouches for the 718 integrity of an Attester's various capabilities such as Claims 719 collection and Evidence signing. 721 Consumed By: Verifier 723 Produced By: Endorser 725 Evidence: A set of Claims generated by an Attester to be appraised 726 by a Verifier. Evidence may include configuration data, 727 measurements, telemetry, or inferences. 729 Consumed By: Verifier 731 Produced By: Attester 733 Attestation Result: The output generated by a Verifier, typically 734 including information about an Attester, where the Verifier 735 vouches for the validity of the results. 737 Consumed By: Relying Party 739 Produced By: Verifier 741 Appraisal Policy for Evidence: A set of rules that informs how a 742 Verifier evaluates the validity of information about an Attester. 743 Compare /security policy/ in [RFC4949]. 745 Consumed By: Verifier 747 Produced By: Verifier Owner 749 Appraisal Policy for Attestation Results: A set of rules that direct 750 how a Relying Party uses the Attestation Results regarding an 751 Attester generated by the Verifiers. Compare /security policy/ in 752 [RFC4949]. 754 Consumed by: Relying Party 756 Produced by: Relying Party Owner 758 Reference Values: A set of values against which values of Claims can 759 be compared as part of applying an Appraisal Policy for Evidence. 760 Reference Values are sometimes referred to in other documents as 761 known-good values, golden measurements, or nominal values, 762 although those terms typically assume comparison for equality, 763 whereas here Reference Values might be more general and be used in 764 any sort of comparison. 766 Consumed By: Verifier 768 Produced By: Reference Value Provider 770 5. 
Topological Patterns 772 Figure 1 shows a data-flow diagram for communication between an 773 Attester, a Verifier, and a Relying Party. The Attester conveys its 774 Evidence to the Verifier for appraisal, and the Relying Party gets 775 the Attestation Result from the Verifier. This section refines it by 776 describing two reference models, as well as one example composition 777 thereof. The discussion that follows is for illustrative purposes 778 only and does not constrain the interactions between RATS roles to 779 the presented patterns. 781 5.1. Passport Model 783 The passport model is so named because of its resemblance to how 784 nations issue passports to their citizens. The nature of the 785 Evidence that an individual needs to provide to its local authority 786 is specific to the country involved. The citizen retains control of 787 the resulting passport document and presents it to other entities, 788 such as an airport immigration desk, when it needs to assert a 789 citizenship or identity claim. The passport is considered sufficient 790 because it vouches for the citizenship and identity claims, and it is 791 issued by a trusted authority. Thus, in this immigration desk 792 analogy, the passport issuing agency is a Verifier, the passport is 793 an Attestation Result, and the immigration desk is a Relying Party. 795 In this model, an Attester conveys Evidence to a Verifier, which 796 compares the Evidence against its appraisal policy. The Verifier 797 then gives back an Attestation Result. If the Attestation Result is 798 a successful one, the Attester can then present the Attestation 799 Result (and possibly additional Claims) to a Relying Party, which 800 then compares this information against its own appraisal policy. 802 Three ways in which the process may fail include: 804 * First, the Verifier may not issue a positive Attestation Result 805 due to the Evidence not passing the Appraisal Policy for Evidence.
807 * The second way in which the process may fail is when the 808 Attestation Result is examined by the Relying Party, and based 809 upon the Appraisal Policy for Attestation Results, the result does 810 not pass the policy. 812 * The third way is when the Verifier is unreachable or unavailable. 814 Since the resource access protocol between the Attester and Relying 815 Party includes an Attestation Result, in this model the details of 816 that protocol constrain the serialization format of the Attestation 817 Result. The format of the Evidence on the other hand is only 818 constrained by the Attester-Verifier remote attestation protocol. 820 +-------------+ 821 | | Compare Evidence 822 | Verifier | against appraisal policy 823 | | 824 +-------------+ 825 ^ | 826 Evidence| |Attestation 827 | | Result 828 | v 829 +----------+ +---------+ 830 | |------------->| |Compare Attestation 831 | Attester | Attestation | Relying | Result against 832 | | Result | Party | appraisal 833 +----------+ +---------+ policy 835 Figure 5: Passport Model 837 5.2. Background-Check Model 839 The background-check model is so named because of the resemblance of 840 how employers and volunteer organizations perform background checks. 841 When a prospective employee provides claims about education or 842 previous experience, the employer will contact the respective 843 institutions or former employers to validate the claim. Volunteer 844 organizations often perform police background checks on volunteers in 845 order to determine the volunteer's trustworthiness. Thus, in this 846 analogy, a prospective volunteer is an Attester, the organization is 847 the Relying Party, and the organization that issues a report is a 848 Verifier. 850 In this model, an Attester conveys Evidence to a Relying Party, which 851 simply passes it on to a Verifier. The Verifier then compares the 852 Evidence against its appraisal policy, and returns an Attestation 853 Result to the Relying Party. 
The Relying Party then compares the 854 Attestation Result against its own appraisal policy. 856 The resource access protocol between the Attester and Relying Party 857 includes Evidence rather than an Attestation Result, but that 858 Evidence is not processed by the Relying Party. Since the Evidence 859 is merely forwarded on to a trusted Verifier, any serialization 860 format can be used for Evidence because the Relying Party does not 861 need a parser for it. The only requirement is that the Evidence can 862 be _encapsulated in_ the format required by the resource access 863 protocol between the Attester and Relying Party. 865 However, like in the Passport model, an Attestation Result is still 866 consumed by the Relying Party. Code footprint and attack surface 867 area can be minimized by using a serialization format for which the 868 Relying Party already needs a parser to support the protocol between 869 the Attester and Relying Party, which may be an existing standard or 870 widely deployed resource access protocol. Such minimization is 871 especially important if the Relying Party is a constrained node. 873 +-------------+ 874 | | Compare Evidence 875 | Verifier | against appraisal 876 | | policy 877 +-------------+ 878 ^ | 879 Evidence| |Attestation 880 | | Result 881 | v 882 +------------+ +-------------+ 883 | |-------------->| | Compare Attestation 884 | Attester | Evidence | Relying | Result against 885 | | | Party | appraisal policy 886 +------------+ +-------------+ 888 Figure 6: Background-Check Model 890 5.3. Combinations 892 One variation of the background-check model is where the Relying 893 Party and the Verifier are on the same machine, performing both 894 functions together. In this case, there is no need for a protocol 895 between the two. 897 It is also worth pointing out that the choice of model depends on the 898 use case, and that different Relying Parties may use different 899 topological patterns. 
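The two conceptual flows above can be illustrated in code. The following Python sketch follows the background-check pattern of Section 5.2; the claim names, the HMAC-based signing, and all function names are illustrative assumptions, not anything defined by this architecture:

```python
import hmac, hashlib, json

# Illustrative key material: the Verifier trusts the Attester's key,
# and the Relying Party trusts the Verifier's key.
ATTESTER_KEY = b"attester-secret"
VERIFIER_KEY = b"verifier-secret"

def sign(key: bytes, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "mac": hmac.new(key, body, hashlib.sha256).hexdigest()}

def check(key: bytes, message: dict) -> bool:
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

def attester_produce_evidence() -> dict:
    claims = {"boot": "verified", "fw_version": "1.2.3"}
    return sign(ATTESTER_KEY, claims)

def verifier_appraise(evidence: dict) -> dict:
    # Appraisal Policy for Evidence: signature valid and boot verified.
    ok = check(ATTESTER_KEY, evidence) and \
         evidence["payload"].get("boot") == "verified"
    return sign(VERIFIER_KEY, {"compliant": ok})

def relying_party_decide(evidence: dict) -> bool:
    # The Relying Party forwards the Evidence without parsing it ...
    result = verifier_appraise(evidence)
    # ... then applies its Appraisal Policy for Attestation Results.
    return check(VERIFIER_KEY, result) and result["payload"]["compliant"]

print(relying_party_decide(attester_produce_evidence()))  # True
```

Note that, as in the prose above, the Relying Party treats the Evidence as opaque and only parses the Attestation Result.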
901 The same device may need to create Evidence for different Relying 902 Parties and/or different use cases. For instance, it would use one 903 model to provide Evidence to a network infrastructure device to gain 904 access to the network, and the other model to provide Evidence to a 905 server holding confidential data to gain access to that data. As 906 such, both models may simultaneously be in use by the same device. 908 Figure 7 shows another example of a combination where Relying Party 1 909 uses the passport model, whereas Relying Party 2 uses an extension of 910 the background-check model. Specifically, in addition to the basic 911 functionality shown in Figure 6, Relying Party 2 actually provides 912 the Attestation Result back to the Attester, allowing the Attester to 913 use it with other Relying Parties. This is the model that the 914 Trusted Application Manager plans to support in the TEEP architecture 915 [I-D.ietf-teep-architecture]. 917 +-------------+ 918 | | Compare Evidence 919 | Verifier | against appraisal policy 920 | | 921 +-------------+ 922 ^ | 923 Evidence| |Attestation 924 | | Result 925 | v 926 +-------------+ 927 | | Compare 928 | Relying | Attestation Result 929 | Party 2 | against appraisal policy 930 +-------------+ 931 ^ | 932 Evidence| |Attestation 933 | | Result 934 | v 935 +----------+ +----------+ 936 | |-------------->| | Compare Attestation 937 | Attester | Attestation | Relying | Result against 938 | | Result | Party 1 | appraisal policy 939 +----------+ +----------+ 941 Figure 7: Example Combination 943 6. Roles and Entities 945 An entity in the RATS architecture includes at least one of the roles 946 defined in this document. An entity can aggregate more than one role 947 into itself. These collapsed roles combine the duties of multiple 948 roles. 950 In these cases, interactions between these roles do not necessarily 951 use the Internet Protocol.
They may use a loopback device or 952 other IP-based communication between separate environments, but they 953 do not have to. Alternative channels to convey conceptual messages 954 include function calls, sockets, GPIO interfaces, local busses, or 955 hypervisor calls. This type of conveyance is typically found in 956 Composite Devices. Most importantly, these conveyance methods are 957 out of scope for RATS, but they are presumed to exist in order to 958 convey conceptual messages appropriately between roles. 960 For example, an entity that connects both to a wide-area network and 961 to a system bus is taking on both the Attester and Verifier roles. 962 As a system bus-connected entity, a Verifier consumes Evidence from 963 other devices connected to the system bus that implement Attester 964 roles. As a wide-area network connected entity, it may implement an 965 Attester role. 967 In essence, an entity that combines more than one role creates and 968 consumes the corresponding conceptual messages as defined in this 969 document. 971 7. Trust Model 973 7.1. Relying Party 975 This document covers scenarios for which a Relying Party trusts a 976 Verifier that can appraise the trustworthiness of information about 977 an Attester. Such trust might come from the Relying Party trusting the 978 Verifier (or its public key) directly, or from trusting an 979 entity (e.g., a Certificate Authority) that is in the Verifier's 980 certificate chain. 982 The Relying Party might implicitly trust a Verifier, such as in a 983 Verifier/Relying Party combination where the Verifier and Relying 984 Party roles are combined. Or, for a stronger level of security, the 985 Relying Party might require that the Verifier first provide 986 information about itself that the Relying Party can use to assess the 987 trustworthiness of the Verifier before accepting its Attestation 988 Results.
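A minimal sketch of the trust decision described above, assuming a simplified stand-in for certificate chains (a real deployment would perform full X.509 path validation); all names and key strings here are invented for illustration:

```python
# A Relying Party trusts a Verifier either directly (the Verifier's
# key is a trust anchor) or via an entity, e.g. a CA, appearing in
# the Verifier's certificate chain.
TRUST_ANCHORS = {"verifier-key-1", "example-ca-key"}

def trusts_verifier(verifier_key, cert_chain):
    if verifier_key in TRUST_ANCHORS:  # direct trust in the Verifier's key
        return True
    # Trust via an entity in the Verifier's certificate chain.
    return any(key in TRUST_ANCHORS for key in cert_chain)

print(trusts_verifier("verifier-key-1", []))                  # True
print(trusts_verifier("verifier-key-2", ["example-ca-key"]))  # True
print(trusts_verifier("verifier-key-2", ["unknown-ca"]))      # False
```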
990 For example, one explicit way for a Relying Party "A" to establish 991 such trust in a Verifier "B" would be for B to first act as an 992 Attester where A acts as a combined Verifier/Relying Party. If A 993 then accepts B as trustworthy, it can choose to accept B as a 994 Verifier for other Attesters. 996 As another example, the Relying Party can establish trust in the 997 Verifier by out-of-band establishment of key material, combined with 998 a protocol like TLS to communicate. It is assumed that 999 the Verifier has not been compromised between the establishment of the 1000 trusted key material and the creation of the Evidence. 1002 Similarly, the Relying Party also needs to trust the Relying Party 1003 Owner for providing its Appraisal Policy for Attestation Results, and 1004 in some scenarios the Relying Party might even require that the 1005 Relying Party Owner go through a remote attestation procedure with it 1006 before the Relying Party will accept an updated policy. This can be 1007 done similarly to how a Relying Party could establish trust in a 1008 Verifier as discussed above. 1010 7.2. Attester 1012 In some scenarios, Evidence might contain sensitive information such 1013 as Personally Identifiable Information (PII) or system identifiable 1014 information. Thus, an Attester must trust entities to which it 1015 conveys Evidence not to reveal sensitive data to unauthorized 1016 parties. The Verifier might share this information with other 1017 authorized parties, according to a governing policy that addresses the 1018 handling of sensitive information (potentially included in Appraisal 1019 Policies for Evidence). In the background-check model, this Evidence 1020 may also be revealed to Relying Party(s).
1022 When Evidence contains sensitive information, an Attester typically 1023 requires that a Verifier authenticates itself (e.g., at TLS session 1024 establishment) and might even request a remote attestation before the 1025 Attester sends the sensitive Evidence. This can be done by having 1026 the Attester first act as a Verifier/Relying Party, and the Verifier 1027 act as its own Attester, as discussed above. 1029 7.3. Relying Party Owner 1031 The Relying Party Owner might also require that the Relying Party 1032 first act as an Attester, providing Evidence that the Owner can 1033 appraise, before the Owner would give the Relying Party an updated 1034 policy that might contain sensitive information. In such a case, 1035 authentication or attestation in both directions might be needed, in 1036 which case typically one side's Evidence must be considered safe to 1037 share with an untrusted entity, in order to bootstrap the sequence. 1038 See Section 11 for more discussion. 1040 7.4. Verifier 1042 The Verifier trusts (or more specifically, the Verifier's security 1043 policy is written in a way that configures the Verifier to trust) a 1044 manufacturer, or the manufacturer's hardware, so as to be able to 1045 appraise the trustworthiness of that manufacturer's devices. In a 1046 typical solution, a Verifier comes to trust an Attester indirectly by 1047 having an Endorser (such as a manufacturer) vouch for the Attester's 1048 ability to securely generate Evidence. 1050 In some solutions, a Verifier might be configured to directly trust 1051 an Attester by having the Verifier have the Attester's key material 1052 (rather than the Endorser's) in its trust anchor store. 1054 Such direct trust must first be established at the time of trust 1055 anchor store configuration either by checking with an Endorser at 1056 that time, or by conducting a security analysis of the specific 1057 device. 
Having the Attester directly in the trust anchor store 1058 narrows the Verifier's trust to only specific devices rather than all 1059 devices the Endorser might vouch for, such as all devices 1060 manufactured by the same manufacturer in the case that the Endorser 1061 is a manufacturer. 1063 Such narrowing is often important since physical possession of a 1064 device can also be used to conduct a number of attacks, and so a 1065 device in a physically secure environment (such as one's own 1066 premises) may be considered trusted whereas devices owned by others 1067 would not be. This often results in a desire either to have the 1068 owner run their own Endorser, which would endorse only the devices the 1069 owner owns, or to use Attesters directly in the trust anchor store. When 1070 many Attesters are owned, the use of an Endorser becomes more 1071 scalable. 1073 That is, the Verifier might appraise the trustworthiness of an application 1074 component, operating system component, or service under the 1075 assumption that information provided about it by the lower-layer 1076 firmware or software is true. A stronger level of assurance of 1077 security comes when information can be vouched for by hardware or by 1078 ROM code, especially if such hardware is physically resistant to 1079 tampering. In most cases, components that have to be 1080 vouched for via Endorsements because no Evidence is generated about 1081 them are referred to as roots of trust. 1083 Once the manufacturer has arranged for an Attesting Environment to be 1084 provisioned with key material with which to sign Evidence, the 1085 Verifier is provided with some way of verifying the signature on 1086 the Evidence. This may be in the form of an appropriate trust 1087 anchor, or the Verifier may be provided with a database of public 1088 keys (rather than certificates) or even carefully secured lists of 1089 symmetric keys.
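As a sketch of the last option named above (a carefully secured list of symmetric keys), the following Python fragment shows a Verifier looking up the key provisioned for an Attesting Environment and validating a MAC over the Evidence; the identifiers and the HMAC scheme are illustrative assumptions, not part of this architecture:

```python
import hmac, hashlib

# Hypothetical key database: symmetric keys indexed by an Attesting
# Environment identifier, provisioned at manufacturing time.
SYMMETRIC_KEYS = {"attesting-env-42": b"provisioned-at-manufacture"}

def verify_evidence_signature(env_id: str, payload: bytes, mac: str) -> bool:
    key = SYMMETRIC_KEYS.get(env_id)
    if key is None:
        return False  # no trust relationship with this environment
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)

payload = b'{"boot":"verified"}'
mac = hmac.new(SYMMETRIC_KEYS["attesting-env-42"], payload,
               hashlib.sha256).hexdigest()
print(verify_evidence_signature("attesting-env-42", payload, mac))  # True
```

A trust-anchor or public-key database would follow the same lookup structure, with the HMAC check replaced by signature verification.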
1091 How the Verifier validates the signatures 1092 produced by the Attester is critical to the secure operation of an 1093 attestation system, but it is not the subject of standardization within 1094 this architecture. 1096 A conveyance protocol that provides authentication and integrity 1097 protection can be used to convey Evidence that is otherwise 1098 unprotected (e.g., not signed). Appropriate conveyance of 1099 unprotected Evidence (e.g., [I-D.birkholz-rats-uccs]) relies on the 1100 conveyance protocol providing the following protection capabilities: 1102 1. The key material used to authenticate and integrity protect the 1103 conveyance channel is trusted by the Verifier to speak for the 1104 Attesting Environment(s) that collected Claims about the Target 1105 Environment(s). 1107 2. All unprotected Evidence that is conveyed is supplied exclusively 1108 by the Attesting Environment that has the key material that 1109 protects the conveyance channel. 1111 3. The root of trust protects both the conveyance channel key 1112 material and the Attesting Environment with equivalent strength 1113 protections. 1115 See Section 12 for discussion on security strength. 1117 7.5. Endorser, Reference Value Provider, and Verifier Owner 1119 In some scenarios, the Endorser, Reference Value Provider, and 1120 Verifier Owner may need to trust the Verifier before giving the 1121 Endorsement, Reference Values, or appraisal policy to it. This can 1122 be done similarly to how a Relying Party might establish trust in a 1123 Verifier. 1125 As discussed in Section 7.3, authentication or attestation in both 1126 directions might be needed, in which case typically one side's 1127 identity or Evidence must be considered safe to share with an 1128 untrusted entity, in order to bootstrap the sequence. See Section 11 1129 for more discussion. 1131 8. Conceptual Messages 1133 8.1.
Evidence 1135 Evidence is a set of claims about the target environment that reveal 1136 security-relevant operational status, health, configuration, or 1137 construction. Evidence is evaluated by a Verifier to establish 1138 its relevance, compliance, and timeliness. Claims need to be 1139 collected in a reliable manner. Evidence needs to be 1140 securely associated with the target environment so that the Verifier 1141 cannot be tricked into accepting claims originating from a different 1142 environment (that may be more trustworthy). Evidence also must be 1143 protected from man-in-the-middle attackers who may observe, change, or 1144 misdirect Evidence as it travels from Attester to Verifier. The 1145 timeliness of Evidence can be captured using claims that pinpoint the 1146 time or interval when changes in operational status, health, and so 1147 forth occur. 1149 8.2. Endorsements 1151 An Endorsement is a secure statement that some entity (e.g., a 1152 manufacturer) vouches for the integrity of the device's signing 1153 capability. For example, if the signing capability is in hardware, 1154 then an Endorsement might be a manufacturer certificate that signs a 1155 public key whose corresponding private key is only known inside the 1156 device's hardware. Thus, when Evidence and such an Endorsement are 1157 used together, an appraisal procedure can be conducted based on 1158 appraisal policies that may not be specific to the device instance, 1159 but merely specific to the manufacturer providing the Endorsement. 1160 For example, an appraisal policy might simply check that devices from 1161 a given manufacturer have information matching a set of Reference 1162 Values, or an appraisal policy might have a set of more complex logic 1163 on how to appraise the validity of information.
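The two policy styles just described can be sketched as follows; the claim names and Reference Value structure are invented for illustration:

```python
# Hypothetical Reference Values: a set of acceptable firmware digests
# (an equality check) and a version floor (a more general comparison).
REFERENCE_VALUES = {"fw_digest": {"a1b2", "c3d4"},
                    "min_fw_version": (1, 2)}

def simple_policy(claims: dict) -> bool:
    # Equality check against Reference Values ("known-good values").
    return claims.get("fw_digest") in REFERENCE_VALUES["fw_digest"]

def complex_policy(claims: dict) -> bool:
    # More complex logic: not just equality, e.g. a minimum version.
    return (simple_policy(claims)
            and tuple(claims.get("fw_version", (0, 0)))
                >= REFERENCE_VALUES["min_fw_version"])

print(simple_policy({"fw_digest": "a1b2"}))                         # True
print(complex_policy({"fw_digest": "a1b2", "fw_version": (1, 1)}))  # False
```

The version floor illustrates the point from Section 4.2 that Reference Values need not be limited to comparison for equality.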
1165 However, while an appraisal policy that treats all devices from a 1166 given manufacturer the same may be appropriate for some use cases, it 1167 would be inappropriate to use such an appraisal policy as the sole 1168 means of authorization for use cases that wish to constrain _which_ 1169 compliant devices are considered authorized for some purpose. For 1170 example, an enterprise using remote attestation for Network Endpoint 1171 Assessment may not wish to let every healthy laptop from the same 1172 manufacturer onto the network, but instead may want to let only devices 1173 that it legally owns onto the network. Thus, an Endorsement may be 1174 helpful information in authenticating information about a device, but 1175 it is not necessarily sufficient to authorize access to resources that 1176 may need device-specific information such as a public key for the 1177 device or component or user on the device. 1179 8.3. Attestation Results 1181 Attestation Results are the input used by the Relying Party to decide 1182 the extent to which it will trust a particular Attester, and allow it 1183 to access some data or perform some operation. 1185 Attestation Results may carry a boolean value indicating compliance 1186 or non-compliance with a Verifier's appraisal policy, or may carry a 1187 richer set of Claims about the Attester, against which the Relying 1188 Party applies its Appraisal Policy for Attestation Results. 1190 The quality of the Attestation Results depends upon the ability of the 1191 Verifier to evaluate the Attester. Different Attesters have a 1192 different _Strength of Function_ [strengthoffunction], which results 1193 in the Attestation Results being qualitatively different in strength. 1195 A result that indicates non-compliance can be used by an Attester (in 1196 the passport model) or a Relying Party (in the background-check 1197 model) to indicate that the Attester should not be treated as 1198 authorized and may be in need of remediation.
In some cases, it may 1199 even indicate that the Evidence itself cannot be authenticated as 1200 being correct. 1202 An Attestation Result that indicates compliance can be used by a 1203 Relying Party to make authorization decisions based on the Relying 1204 Party's appraisal policy. The simplest such policy might be to 1205 authorize any party supplying a compliant Attestation Result 1206 signed by a trusted Verifier. A more complex policy might also 1207 entail comparing information provided in the result against Reference 1208 Values, or applying more complex logic to such information. 1210 Thus, Attestation Results often need to include detailed information 1211 about the Attester, for use by Relying Parties, much like physical 1212 passports and driver's licenses include personal information such as 1213 name and date of birth. Unlike Evidence, which is often very device- 1214 and vendor-specific, Attestation Results can be vendor-neutral if the 1215 Verifier has a way to generate vendor-agnostic information based on 1216 the appraisal of vendor-specific information in Evidence. This 1217 allows a Relying Party's appraisal policy to be simpler, potentially 1218 based on standard ways of expressing the information, while still 1219 allowing interoperability with heterogeneous devices. 1221 Finally, whereas Evidence is signed by the device (or indirectly by a 1222 manufacturer, if Endorsements are used), Attestation Results are 1223 signed by a Verifier, allowing a Relying Party to need a trust 1224 relationship with only one entity, rather than a larger set of entities, 1225 for purposes of its appraisal policy. 1227 9.
Claims Encoding Formats 1229 The following diagram illustrates a relationship to which remote 1230 attestation is desired to be added: 1232 +-------------+ +------------+ Evaluate 1233 | |-------------->| | request 1234 | Attester | Access some | Relying | against 1235 | | resource | Party | security 1236 +-------------+ +------------+ policy 1238 Figure 8: Typical Resource Access 1240 In this diagram, the protocol between Attester and a Relying Party 1241 can be any new or existing protocol (e.g., HTTP(S), CoAP(S), ROLIE 1242 [RFC8322], 802.1X, OPC UA [OPCUA], etc.), depending on the use case. 1244 Such protocols typically already have mechanisms for passing security 1245 information for purposes of authentication and authorization. Common 1246 formats include JWTs [RFC7519], CWTs [RFC8392], and X.509 1247 certificates. 1249 Retrofitting already deployed protocols with remote attestation 1250 requires adding RATS conceptual messages to the existing data flows. 1251 This must be done in a way that does not degrade the security 1252 properties of the system and should use the native extension 1253 mechanisms provided by the underlying protocol. For example, if the 1254 TLS handshake is to be extended with remote attestation capabilities, 1255 attestation Evidence may be embedded in an ad hoc X.509 certificate 1256 extension (e.g., [TCG-DICE]), or into a new TLS Certificate Type 1257 (e.g., [I-D.tschofenig-tls-cwt]). 1259 Especially for constrained nodes there is a desire to minimize the 1260 amount of parsing code needed in a Relying Party, in order both to 1261 minimize footprint and to minimize the attack surface area. So while 1262 it would be possible to embed a CWT inside a JWT, or a JWT inside an 1263 X.509 extension, etc., there is a desire to encode the information 1264 natively in the format that is natural for the Relying Party.
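The idea of a common information model with multiple encodings can be illustrated with a small sketch: the same claim set serialized once as a JWT-style token and once as plain JSON. The JWT here is unsigned ("alg": "none") purely to show structure, and the claim values are invented; real deployments would sign the token:

```python
import base64, json

# One claim set (the "information model") ...
claims = {"iss": "attester-01", "boot": "verified"}

def b64url(data: bytes) -> str:
    # Base64url without padding, as used in JWT segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_as_jwt(claims: dict) -> str:
    # header.payload.signature; empty signature since "alg" is "none".
    header = {"alg": "none", "typ": "JWT"}
    return ".".join([b64url(json.dumps(header).encode()),
                     b64url(json.dumps(claims).encode()), ""])

def encode_as_json(claims: dict) -> str:
    return json.dumps(claims, sort_keys=True)

# ... two wire formats carrying the same information:
print(encode_as_jwt(claims))
print(encode_as_json(claims))
```

A CWT or X.509 extension encoding of the same claim set would follow the same pattern with a CBOR or ASN.1 serializer in place of JSON.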
1266 This motivates having a common "information model" that describes the 1267 set of remote attestation related information in an encoding-agnostic 1268 way, and allowing multiple encoding formats (CWT, JWT, X.509, etc.) 1269 that encode the same information into the claims format needed by the 1270 Relying Party. 1272 The following diagram illustrates that Evidence and Attestation 1273 Results might each have multiple possible encoding formats, so that 1274 they can be conveyed by various existing protocols. It also 1275 motivates why the Verifier might also be responsible for accepting 1276 Evidence that encodes claims in one format, while issuing Attestation 1277 Results that encode claims in a different format. 1279 Evidence Attestation Results 1280 .--------------. CWT CWT .-------------------. 1281 | Attester-A |------------. .----------->| Relying Party V | 1282 '--------------' v | `-------------------' 1283 .--------------. JWT .------------. JWT .-------------------. 1284 | Attester-B |-------->| Verifier |-------->| Relying Party W | 1285 '--------------' | | `-------------------' 1286 .--------------. X.509 | | X.509 .-------------------. 1287 | Attester-C |-------->| |-------->| Relying Party X | 1288 '--------------' | | `-------------------' 1289 .--------------. TPM | | TPM .-------------------. 1290 | Attester-D |-------->| |-------->| Relying Party Y | 1291 '--------------' '------------' `-------------------' 1292 .--------------. other ^ | other .-------------------. 1293 | Attester-E |------------' '----------->| Relying Party Z | 1294 '--------------' `-------------------' 1296 Figure 9: Multiple Attesters and Relying Parties with Different 1297 Formats 1299 10. Freshness 1301 A Verifier or Relying Party may need to learn the point in time 1302 (i.e., the "epoch") an Evidence or Attestation Result has been 1303 produced. 
This is essential in deciding whether the included Claims 1304 and their values can be considered fresh, meaning they still reflect 1305 the latest state of the Attester, and that any Attestation Result was 1306 generated using the latest Appraisal Policy for Evidence. 1308 Freshness is assessed based on the Appraisal Policy for Evidence or 1309 Attestation Results that compares the estimated epoch against an 1310 "expiry" threshold defined locally to that policy. There is, 1311 however, always a possible race condition, in that the state of the 1312 Attester or the appraisal policies might change immediately after 1313 the Evidence or Attestation Result was generated. The goal is merely 1314 to narrow their recentness to something the Verifier (for Evidence) 1315 or Relying Party (for Attestation Result) is willing to accept. Some 1316 flexibility on the freshness requirement is a key component for 1317 enabling caching and reuse of both Evidence and Attestation Results, 1318 which is especially valuable in cases where their computation uses a 1319 substantial part of the resource budget (e.g., energy in constrained 1320 devices). 1322 There are three common approaches for determining the epoch of 1323 Evidence or an Attestation Result. 1325 10.1. Explicit Timekeeping using Synchronized Clocks 1327 The first approach is to rely on synchronized and trustworthy clocks, 1328 and include a signed timestamp (see [I-D.birkholz-rats-tuda]) along 1329 with the Claims in the Evidence or Attestation Result. Timestamps 1330 can also be added on a per-Claim basis to distinguish the time of 1331 creation of Evidence or Attestation Result from the time that a 1332 specific Claim was generated. The clock's trustworthiness typically 1333 requires additional Claims about the signer's time synchronization 1334 mechanism. 1336 10.2.
Implicit Timekeeping using Nonces 1338 A second approach places the onus of timekeeping solely on the 1339 Verifier (for Evidence) or the Relying Party (for Attestation 1340 Results), and might be suitable, for example, in cases where the Attester 1341 does not have a reliable clock or time synchronization is otherwise 1342 impaired. In this approach, a non-predictable nonce is sent by the 1343 appraising entity, and the nonce is then signed and included along 1344 with the Claims in the Evidence or Attestation Result. After 1345 checking that the sent and received nonces are the same, the 1346 appraising entity knows that the Claims were signed after the nonce 1347 was generated. This allows associating a "rough" epoch with the 1348 Evidence or Attestation Result. In this case, the epoch is said to be 1349 rough because: 1351 * The epoch applies to the entire claim set instead of a more 1352 granular association, and 1354 * The time between the creation of Claims and the collection of 1355 Claims is indistinguishable. 1357 10.3. Implicit Timekeeping using Epoch Handles 1359 A third approach relies on having epoch "handles" periodically sent 1360 to both the sender and receiver of Evidence or Attestation Results by 1361 some "Handle Distributor". 1363 Handles are different from nonces in that they can be used more than once 1364 and can even be used by more than one entity at the same time. 1365 Handles are different from timestamps in that they do not have to convey 1366 information about a point in time, i.e., they are not necessarily 1367 monotonically increasing integers. 1369 Like the nonce approach, this allows associating a "rough" epoch 1370 without requiring a reliable clock or time synchronization in order 1371 to generate or appraise the freshness of Evidence or Attestation 1372 Results. Only the Handle Distributor requires access to a clock so 1373 that it can periodically send new epoch handles.
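As a rough, purely illustrative sketch of this handle mechanism (all class and variable names here are invented, and a real solution would cryptographically bind the handle into the signed Evidence rather than carry it as a plain field):

```python
import secrets
from collections import deque

class HandleDistributor:
    """Issues a fresh random handle for each epoch.
    Only this component needs access to a clock; next_epoch()
    would be driven by a periodic timer in a real deployment."""
    def __init__(self):
        self.current = secrets.token_hex(16)

    def next_epoch(self):
        # Start a new epoch by distributing a new handle.
        self.current = secrets.token_hex(16)
        return self.current

class Appraiser:
    """Keeps an "epoch window" of recently received handles, so that
    Evidence produced just before an epoch transition is not wrongly
    discarded as stale (a false negative)."""
    def __init__(self, window_depth=2):
        self.window = deque(maxlen=window_depth)

    def receive_handle(self, handle):
        self.window.append(handle)

    def is_fresh(self, evidence):
        return evidence["handle"] in self.window

# Usage: the distributor's handle reaches both the Attester and the
# appraising entity; the Attester includes the latest handle in its
# Evidence.
distributor = HandleDistributor()
appraiser = Appraiser()

h = distributor.current
appraiser.receive_handle(h)
evidence = {"claims": {"fw_version": "1.2.3"}, "handle": h}
assert appraiser.is_fresh(evidence)

# After one epoch transition, the old handle is still in the window...
appraiser.receive_handle(distributor.next_epoch())
assert appraiser.is_fresh(evidence)

# ...but the Evidence is rejected once the window depth is exceeded.
appraiser.receive_handle(distributor.next_epoch())
assert not appraiser.is_fresh(evidence)
```

Note that the appraiser's state is a fixed-size window, independent of how many Attesters it serves.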
1375 The most recent handle is included in the produced Evidence or 1376 Attestation Results, and the appraising entity can compare the handle 1377 in received Evidence or Attestation Results against the latest handle 1378 it received from the Handle Distributor to determine whether it is within 1379 the current epoch. An actual solution also needs to take into 1380 account race conditions when transitioning to a new epoch, such as by 1381 using a counter signed by the Handle Distributor as the handle, by 1382 including both the current and previous handles in messages and/or 1383 checks, by requiring retries in the case of mismatching handles, or by 1384 buffering incoming messages that might be associated with a handle 1385 that the receiver has not yet obtained. 1387 More generally, in order to prevent an appraising entity from 1388 generating false negatives (e.g., discarding Evidence that is deemed 1389 stale even if it is not), the appraising entity should keep an "epoch 1390 window" consisting of the most recently received handles. The depth 1391 of such an epoch window is directly proportional to the maximum network 1392 propagation delay between the first entity to receive the handle and the 1393 last entity to receive the handle, and inversely proportional to the 1394 epoch duration. The appraising entity compares the handle 1395 carried in the received Evidence or Attestation Result with the 1396 handles in its epoch window to find a suitable match. 1398 Whereas the nonce approach typically requires the appraising entity 1399 to keep state for each nonce generated, the handle approach minimizes 1400 the state kept, making it independent of the number of Attesters or 1401 Verifiers from which it expects to receive Evidence or Attestation 1402 Results, as long as all use the same Handle Distributor. 1404 10.4. Discussion 1406 Implicit and explicit timekeeping can be combined into hybrid 1407 mechanisms.
For example, if clocks exist and are considered 1408 trustworthy but are not synchronized, a nonce-based exchange may be 1409 used to determine the (relative) time offset between the involved 1410 peers, followed by any number of timestamp-based exchanges. 1412 It is important to note that the actual values in Claims might have 1413 been generated long before the Claims are signed. If so, it is the 1414 signer's responsibility to ensure that the values are still correct 1415 when they are signed. For example, values generated at boot time 1416 might have been saved to secure storage until network connectivity is 1417 established to the remote Verifier and a nonce is obtained. 1419 A more detailed discussion with examples appears in Section 16. 1421 For a discussion on the security of handles, see Section 12.3. 1423 11. Privacy Considerations 1425 The conveyance of Evidence and the resulting Attestation Results 1426 reveal a great deal of information about the internal state of a 1427 device as well as potentially about any users of the device. In many 1428 cases, the whole point of the Attestation process is to provide 1429 reliable information about the type of the device and the firmware/ 1430 software that the device is running. This information might be 1431 particularly interesting to many attackers. For example, knowing 1432 that a device is running a weak version of firmware provides a way to 1433 better target attacks. 1435 Many claims in Attestation Evidence and Attestation Results are 1436 potentially Personally Identifying Information (PII), depending on the end- 1437 to-end use case of the attestation. Attestation that extends up the stack to 1438 include containers and applications may further reveal details about 1439 a specific system or user. 1441 In some cases, an attacker may be able to make inferences about 1442 attestations from the results or timing of the processing.
For 1443 example, an attacker might be able to infer the value of specific 1444 claims if it knew that only certain values were accepted by the 1445 Relying Party. 1447 Evidence and Attestation Results data structures are expected to 1448 support integrity protection encoding (e.g., COSE, JOSE, X.509) and 1449 optionally might support confidentiality protection (e.g., COSE, 1450 JOSE). Therefore, if confidentiality protection is omitted or 1451 unavailable, the protocols that convey Evidence or Attestation 1452 Results are responsible for detailing what kinds of information are 1453 disclosed, and to whom they are exposed. 1455 Furthermore, because Evidence might contain sensitive information, 1456 Attesters are responsible for only sending such Evidence to trusted 1457 Verifiers. Some Attesters might want a stronger level of assurance 1458 of the trustworthiness of a Verifier before sending Evidence to it. 1459 In such cases, an Attester can first act as a Relying Party and ask 1460 for the Verifier's own Attestation Result, appraising it just as 1461 a Relying Party would appraise an Attestation Result for any other 1462 purpose. 1464 Another approach to dealing with Evidence is to remove PII from the 1465 Evidence while still being able to verify that the Attester is one of 1466 a large set. This approach is often called "Direct Anonymous 1467 Attestation". See Section 6.2 of [CCC-DeepDive] for more discussion. 1469 12. Security Considerations 1471 12.1. Attester and Attestation Key Protection 1473 Implementers need to pay close attention to the protection of the 1474 Attester and the factory processes for provisioning the Attestation 1475 key material. If either of these is compromised, the remote 1476 attestation becomes worthless because an attacker can forge Evidence 1477 or manipulate the Attesting Environment.
For example, a Target 1478 Environment should not be able to tamper with the Attesting 1479 Environment that measures it; this can be prevented by isolating the two environments from 1480 each other in some way. 1482 Remote attestation applies to use cases with a range of security 1483 requirements, so the protections discussed here range from low to 1484 high security: low security may involve only application or process 1485 isolation by the device's operating system, while high security involves 1486 specialized hardware to defend against physical attacks on a chip. 1488 12.1.1. On-Device Attester and Key Protection 1490 It is assumed that an Attesting Environment is sufficiently isolated 1491 from the Target Environment for which it collects Claims and signs them 1492 with an Attestation Key, so that the Target Environment cannot forge 1493 Evidence about itself. Such an isolated environment might be 1494 provided by a process, a dedicated chip, a TEE, a virtual machine, or 1495 another secure mode of operation. The Attesting Environment must be 1496 protected from unauthorized modification to ensure it behaves 1497 correctly. There must also be confidentiality so that the signing 1498 key is not captured and used elsewhere to forge Evidence. 1500 In many cases, the user or owner of the device must not be able to 1501 modify or exfiltrate keys from the Attesting Environment of the 1502 Attester. For example, the owner or user of a mobile phone or FIDO 1503 authenticator, having full control over the keys, might not be 1504 trusted to use the keys to report Evidence about the environment that 1505 protects the keys. The point of remote attestation is for the 1506 Relying Party to be able to trust the Attester even though it does not 1507 trust the user or owner. 1509 Some of the measures for a minimally protected system might include 1510 process or application isolation by a high-level operating system, 1511 and perhaps restricting access to root or system privilege.
For 1512 extremely simple single-use devices that do not use a protected-mode 1513 operating system, like a Bluetooth speaker, the isolation might only 1514 be the plastic housing for the device. 1516 Measures for a moderately protected system could include a special 1517 restricted operating environment such as a Trusted Execution Environment 1518 (TEE). In this case, only security-oriented software 1519 has access to the Attester and key material. 1521 Measures for a highly protected system could include specialized 1522 hardware that is used to provide protection against chip decapping 1523 attacks, power supply and clock glitching, fault injection, and RF 1524 and power side-channel attacks. 1526 12.1.2. Attestation Key Provisioning Processes 1528 Attestation key provisioning is the process that occurs in the 1529 factory or elsewhere that establishes the signing key material on the 1530 device and the verification key material off the device. Sometimes 1531 this is referred to as "personalization". 1533 One way to provision a key is to first generate it external to the 1534 device and then copy the key onto the device. In this case, 1535 confidentiality of the generator, as well as of the path over which the 1536 key is provisioned, is necessary. The manufacturer needs to take 1537 care to protect the key with measures consistent with its value. This can 1538 be achieved in a number of ways. 1540 Confidentiality can be achieved entirely with physical provisioning 1541 facility security involving no encryption at all. For low-security 1542 use cases, this might simply mean locking doors and limiting the personnel 1543 who can enter the facility. For high-security use cases, this might 1544 involve a special area of the facility accessible only to select 1545 security-trained personnel.
1547 Cryptography can also be used to support confidentiality, but keys 1548 that are then used to provision attestation keys must somehow have 1549 been provisioned securely beforehand (a recursive problem). 1551 In many cases, both some physical security and some cryptography will 1552 be necessary and useful to establish confidentiality. 1554 Another way to provision the key material is to generate it on the 1555 device and export the verification key. If public key cryptography 1556 is being used, then only integrity is necessary; confidentiality is 1557 not. 1559 In all cases, the Attestation Key provisioning process must ensure 1560 that only attestation key material that is generated by a valid 1561 Endorser is established in Attesters and then configured correctly. 1562 For many use cases, this will involve physical security at the 1563 facility to prevent the manufacture of unauthorized devices 1564 that may be counterfeit or incorrectly configured. 1566 12.2. Integrity Protection 1568 Any solution that conveys information used for security purposes, 1569 whether such information is in the form of Evidence, Attestation 1570 Results, Endorsements, or appraisal policies, must support end-to-end 1571 integrity protection and replay attack prevention, and often also 1572 needs to support additional security properties, including: 1574 * end-to-end encryption, 1576 * denial-of-service protection, 1578 * authentication, 1580 * auditing, 1582 * fine-grained access controls, and 1584 * logging. 1586 Section 10 discusses ways in which freshness can be used in this 1587 architecture to protect against replay attacks. 1589 To assess the security provided by a particular appraisal policy, it 1590 is important to understand the strength of the root of trust, e.g., 1591 whether it is mutable software, or firmware that is read-only after 1592 boot, or immutable hardware/ROM.
1594 It is also important that the appraisal policy itself was obtained 1595 securely. If an attacker can configure appraisal policies for a 1596 Relying Party or for a Verifier, then the integrity of the process is 1597 compromised. 1599 The security of conveyed information may be provided at different 1600 layers, whether by a conveyance protocol or an information encoding 1601 format. This architecture expects that attestation messages (i.e., 1602 Evidence, Attestation Results, Endorsements, Reference Values, and 1603 Policies) be end-to-end protected based on the role interaction 1604 context. For example, if an Attester produces Evidence that is 1605 relayed through some other entity that does not implement the Attester 1606 or the intended Verifier roles, then the relaying entity should not 1607 expect to have access to the Evidence. 1609 12.3. Handle-based Attestation 1611 Handles, described in Section 10.3, can be tampered with, dropped, 1612 delayed, and reordered by an attacker. 1614 An attacker could be either external to or a member of the distribution 1615 group, for example, if one of the Attester entities has been 1616 compromised. 1618 An attacker who is able to tamper with handles can potentially lock 1619 all the participants into an epoch of its choice forever, 1620 effectively freezing time. This is problematic since it destroys the 1621 ability to ascertain freshness of Evidence and Attestation Results. 1623 To mitigate this threat, the transport should be at least integrity 1624 protected and provide origin authentication. 1626 Selective dropping of handles is equivalent to pinning the victim 1627 node to a past epoch. An attacker could drop handles to only some 1628 entities and not others, which will typically result in a denial of 1629 service due to the permanent staleness of the Attestation Result or 1630 Evidence. 1632 Delaying or reordering handles is equivalent to manipulating the 1633 victim's timeline at will.
This ability could be used by a malicious 1634 actor (e.g., a compromised router) to mount a confusion attack where, 1635 for example, a Verifier is tricked into accepting Evidence coming 1636 from a past epoch as fresh, while in the meantime the Attester has 1637 been compromised. 1639 Reordering and dropping attacks are mitigated if the transport 1640 provides the ability to detect reordering and drops. However, the 1641 delay attack described above cannot be thwarted in this manner. 1643 13. IANA Considerations 1645 This document does not require any actions by IANA. 1647 14. Acknowledgments 1649 Special thanks go to Joerg Borchert, Nancy Cam-Winget, Jessica 1650 Fitzgerald-McKay, Diego Lopez, Laurence Lundblade, Paul Rowe, Hannes 1651 Tschofenig, Frank Xia, and David Wooten. 1653 15. Notable Contributions 1655 Thomas Hardjono created older versions of the terminology section in 1656 collaboration with Ned Smith. Eric Voit provided the conceptual 1657 separation between Attestation Provision Flows and Attestation 1658 Evidence Flows. Monty Wiseman created the content structure of the 1659 first three architecture drafts. Carsten Bormann provided many of 1660 the motivational building blocks with respect to the Internet Threat 1661 Model. 1663 16. Appendix A: Time Considerations 1665 The table below defines a number of relevant events, with an ID that 1666 is used in subsequent diagrams. The times of said events might be 1667 defined in terms of an absolute clock time such as Coordinated 1668 Universal Time, or might be defined relative to some other timestamp 1669 or timeticks counter. 1671 +====+============+=================================================+ 1672 | ID | Event | Explanation of event | 1673 +====+============+=================================================+ 1674 | VG | Value | A value to appear in a Claim was created.
| 1675 | | generated | In some cases, a value may have technically | 1676 | | | existed before an Attester became aware of | 1677 | | | it, but the Attester might have no idea how | 1678 | | | long it has had that value. In such a | 1679 | | | case, the Value created time is the time at | 1680 | | | which the Claim containing the copy of the | 1681 | | | value was created. | 1682 +----+------------+-------------------------------------------------+ 1683 | NS | Nonce sent | A nonce not predictable to an Attester | 1684 | | | (recentness & uniqueness) is sent to an | 1685 | | | Attester. | 1686 +----+------------+-------------------------------------------------+ 1687 | NR | Nonce | A nonce is relayed to an Attester by | 1688 | | relayed | another entity. | 1689 +----+------------+-------------------------------------------------+ 1690 | HR | Handle | A handle is successfully received and | 1691 | | received | processed by an entity. | 1692 +----+------------+-------------------------------------------------+ 1693 | EG | Evidence | An Attester creates Evidence from collected | 1694 | | generation | Claims. | 1695 +----+------------+-------------------------------------------------+ 1696 | ER | Evidence | A Relying Party relays Evidence to a | 1697 | | relayed | Verifier. | 1698 +----+------------+-------------------------------------------------+ 1699 | RG | Result | A Verifier appraises Evidence and generates | 1700 | | generation | an Attestation Result. | 1701 +----+------------+-------------------------------------------------+ 1702 | RR | Result | An Attester relays an Attestation | 1703 | | relayed | Result to a Relying Party. | 1704 +----+------------+-------------------------------------------------+ 1705 | RA | Result | The Relying Party appraises Attestation | 1706 | | appraised | Results.
| 1707 +----+------------+-------------------------------------------------+ 1708 | OP | Operation | The Relying Party performs some operation | 1709 | | performed | requested by the Attester. For example, | 1710 | | | acting upon some message just received | 1711 | | | across a session created earlier at | 1712 | | | time(RA). | 1713 +----+------------+-------------------------------------------------+ 1714 | RX | Result | An Attestation Result should no longer be | 1715 | | expiry | accepted, according to the Verifier that | 1716 | | | generated it. | 1717 +----+------------+-------------------------------------------------+ 1719 Table 1 1721 Using the table above, a number of hypothetical examples of how a 1722 solution might be built are illustrated below. 1723 This list is not intended to be complete, but is just 1724 representative enough to highlight various timing considerations. 1726 All times are relative to the local clocks, indicated by an "a" 1727 (Attester), "v" (Verifier), or "r" (Relying Party) suffix. 1729 Times with an appended prime (') indicate a second instance of the 1730 same event. 1732 Whether and how clocks are synchronized depends upon the model. 1734 16.1. Example 1: Timestamp-based Passport Model Example 1736 The following example illustrates a hypothetical Passport Model 1737 solution that uses timestamps and requires roughly synchronized 1738 clocks between the Attester, Verifier, and Relying Party, which 1739 depends on using a secure clock synchronization mechanism. As a 1740 result, the receiver of a conceptual message containing a timestamp 1741 can directly compare it to its own clock and timestamps. 1743 .----------. .----------. .---------------.
1744 | Attester | | Verifier | | Relying Party | 1745 '----------' '----------' '---------------' 1746 time(VG_a) | | 1747 | | | 1748 ~ ~ ~ 1749 | | | 1750 time(EG_a) | | 1751 |------Evidence{time(EG_a)}------>| | 1752 | time(RG_v) | 1753 |<-----Attestation Result---------| | 1754 | {time(RG_v),time(RX_v)} | | 1755 ~ ~ 1756 | | 1757 |----Attestation Result{time(RG_v),time(RX_v)}-->time(RA_r) 1758 | | 1759 ~ ~ 1760 | | 1761 | time(OP_r) 1763 In the figures above and in subsequent sections, curly braces 1764 indicate containment. For example, the notation Evidence{foo} 1765 indicates that 'foo' is contained in the Evidence and is thus covered 1766 by its signature. 1768 The Verifier can check whether the Evidence is fresh when appraising 1769 it at time(RG_v) by checking "time(RG_v) - time(EG_a) < Threshold", 1770 where the Verifier's threshold is large enough to account for the 1771 maximum permitted clock skew between the Verifier and the Attester. 1773 If time(VG_a) is also included in the Evidence along with the claim 1774 value generated at that time, and the Verifier decides that it can 1775 trust the time(VG_a) value, the Verifier can also determine whether 1776 the claim value is recent by checking "time(RG_v) - time(VG_a) < 1777 Threshold". The threshold is decided by the Appraisal Policy for 1778 Evidence, and again needs to take into account the maximum permitted 1779 clock skew between the Verifier and the Attester. 1781 The Relying Party can check whether the Attestation Result is fresh 1782 when appraising it at time(RA_r) by checking "time(RA_r) - time(RG_v) 1783 < Threshold", where the Relying Party's threshold is large enough to 1784 account for the maximum permitted clock skew between the Relying 1785 Party and the Verifier. The result might then be used for some time 1786 (e.g., throughout the lifetime of a connection established at 1787 time(RA_r)). 
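The timestamp comparisons in this example can be sketched as follows. This is a hypothetical illustration only: the function names and threshold values are invented, times are seconds on roughly synchronized clocks, and each threshold comes from the relevant appraisal policy (and must absorb the maximum permitted clock skew):

```python
# Toy sketch of the freshness checks in Example 1. All names and
# values are invented for illustration.

EVIDENCE_THRESHOLD = 60   # from the Verifier's Appraisal Policy for Evidence
RESULT_THRESHOLD = 300    # from the Relying Party's policy

def verifier_accepts(time_RG_v, time_EG_a):
    # Evidence is fresh if generated recently enough before appraisal.
    return time_RG_v - time_EG_a < EVIDENCE_THRESHOLD

def relying_party_accepts(time_RA_r, time_RG_v):
    # The Attestation Result is fresh if generated recently enough.
    return time_RA_r - time_RG_v < RESULT_THRESHOLD

def operation_allowed(time_OP_r, time_RG_v, time_RX_v=None):
    # Continued use: prefer an explicit expiry time if the Result
    # carries one; otherwise fall back to the Relying Party's own
    # threshold relative to the Result's generation time.
    if time_RX_v is not None:
        return time_OP_r < time_RX_v
    return time_OP_r - time_RG_v < RESULT_THRESHOLD

assert verifier_accepts(time_RG_v=1000, time_EG_a=990)
assert not verifier_accepts(time_RG_v=1000, time_EG_a=900)
assert operation_allowed(time_OP_r=1200, time_RG_v=1000, time_RX_v=1500)
assert not operation_allowed(time_OP_r=1600, time_RG_v=1000, time_RX_v=1500)
```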
The Relying Party must be careful, however, to not 1788 allow continued use beyond the period for which it deems the 1789 Attestation Result to remain fresh enough. Thus, it might allow use 1790 (at time(OP_r)) as long as "time(OP_r) - time(RG_v) < Threshold". 1791 However, if the Attestation Result contains an expiry time time(RX_v) 1792 then it could explicitly check "time(OP_r) < time(RX_v)". 1794 16.2. Example 2: Nonce-based Passport Model Example 1796 The following example illustrates a hypothetical Passport Model 1797 solution that uses nonces instead of timestamps. Compared to the 1798 timestamp-based example, it requires an extra round trip to retrieve 1799 a nonce, and requires that the Verifier and Relying Party track state 1800 to remember the nonce for some period of time. 1802 The advantage is that it does not require that any clocks are 1803 synchronized. As a result, the receiver of a conceptual message 1804 containing a timestamp cannot directly compare it to its own clock or 1805 timestamps. Thus we use a suffix ("a" for Attester, "v" for 1806 Verifier, and "r" for Relying Party) on the IDs below indicating 1807 which clock generated them, since times from different clocks cannot 1808 be compared. Only the delta between two events from the sender can 1809 be used by the receiver. 1811 .----------. .----------. .---------------. 
1812 | Attester | | Verifier | | Relying Party | 1813 '----------' '----------' '---------------' 1814 time(VG_a) | | 1815 | | | 1816 ~ ~ ~ 1817 | | | 1818 |<--Nonce1---------------------time(NS_v) | 1819 time(EG_a) | | 1820 |---Evidence--------------------->| | 1821 | {Nonce1, time(EG_a)-time(VG_a)} | | 1822 | time(RG_v) | 1823 |<--Attestation Result------------| | 1824 | {time(RX_v)-time(RG_v)} | | 1825 ~ ~ 1826 | | 1827 |<--Nonce2-------------------------------------time(NS_r) 1828 time(RR_a) | 1829 |--[Attestation Result{time(RX_v)-time(RG_v)}, -->|time(RA_r) 1830 | Nonce2, time(RR_a)-time(EG_a)] | 1831 ~ ~ 1832 | | 1833 | time(OP_r) 1835 In this example solution, the Verifier can check whether the Evidence 1836 is fresh at "time(RG_v)" by verifying that "time(RG_v)-time(NS_v) < 1837 Threshold". 1839 The Verifier cannot, however, simply rely on a Nonce to determine 1840 whether the value of a claim is recent, since the claim value might 1841 have been generated long before the nonce was sent by the Verifier. 1842 However, if the Verifier decides that the Attester can be trusted to 1843 correctly provide the delta "time(EG_a)-time(VG_a)", then it can 1844 determine recency by checking "time(RG_v)-time(NS_v) + time(EG_a)- 1845 time(VG_a) < Threshold". 1847 Similarly if, based on an Attestation Result from a Verifier it 1848 trusts, the Relying Party decides that the Attester can be trusted to 1849 correctly provide time deltas, then it can determine whether the 1850 Attestation Result is fresh by checking "time(OP_r)-time(NS_r) + 1851 time(RR_a)-time(EG_a) < Threshold". Although the Nonce2 and 1852 "time(RR_a)-time(EG_a)" values cannot be inside the Attestation 1853 Result, they might be signed by the Attester such that the 1854 Attestation Result vouches for the Attester's signing capability. 1856 The Relying Party must still be careful, however, to not allow 1857 continued use beyond the period for which it deems the Attestation 1858 Result to remain valid. 
Thus, if the Attestation Result sends a 1859 validity lifetime in terms of "time(RX_v)-time(RG_v)", then the 1860 Relying Party can check "time(OP_r)-time(NS_r) < time(RX_v)- 1861 time(RG_v)". 1863 16.3. Example 3: Handle-based Passport Model Example 1865 The example in Figure 10 illustrates a hypothetical Passport Model 1866 solution that uses handles instead of nonces or timestamps. 1868 The Handle Distributor broadcasts handle "H" which starts a new epoch 1869 "E" for a protocol participant upon reception at "time(HR)". 1871 The Attester generates Evidence incorporating handle "H" and conveys 1872 it to the Verifier. 1874 The Verifier appraises that the received handle "H" is "fresh" 1875 according to the definition provided in Section 10.3 whereby retries 1876 are required in the case of mismatching handles, and generates an 1877 Attestation Result. The Attestation Result is conveyed to the 1878 Attester. 1880 After the transmission of handle "H'" a new epoch "E'" is established 1881 when "H'" is received by each protocol participant. The Attester 1882 relays the Attestation Result obtained during epoch "E" (associated 1883 with handle "H") to the Relying Party using the handle for the 1884 current epoch "H'". If the Relying Party had not yet received "H'", 1885 then the Attestation Result would be rejected, but in this example, 1886 it is received. 1888 In the illustrated scenario, the handle for relaying an Attestation 1889 Result to the Relying Party is current, while a previous handle was 1890 used to generate Verifier evaluated evidence. This indicates that at 1891 least one epoch transition has occurred, and the Attestation Results 1892 may only be as fresh as the previous epoch. 1894 .-------------. 1895 .----------. | Handle | .----------. .---------------. 
1896 | Attester | | Distributor | | Verifier | | Relying Party | 1897 '----------' '-------------' '----------' '---------------' 1898 time(VG_a) | | | 1899 | | | | 1900 ~ ~ ~ ~ 1901 | | | | 1902 time(HR_a)<---------+-----------time(HR_v)----->time(HR_r) 1903 | | | | 1904 time(EG_a) | | | 1905 |---Evidence{H,time(EG_a)-time(VG_a)}----->| | 1906 | | | | 1907 | | time(RG_v) | 1908 |<--Attestation Result------------| | 1909 | {H,time(RX_v)-time(RG_v)} | | 1910 | | | | 1911 time(HR'_a)<--------+-----------time(HR'_v)---->time(HR'_r) 1912 | | | | 1913 time(RR_a) | | | 1914 |---Attestation Result---------------------->time(RA_r) 1915 | {H',R{H,time(RX_v)-time(RG_v)}} | | 1916 | | | | 1917 ~ ~ ~ ~ 1918 | | | | 1919 | | | time(OP_r) 1921 Figure 10: Handle-based Passport Model 1923 16.4. Example 4: Timestamp-based Background-Check Model Example 1925 The following example illustrates a hypothetical Background-Check 1926 Model solution that uses timestamps and requires roughly synchronized 1927 clocks between the Attester, Verifier, and Relying Party. 1929 .----------. .---------------. .----------. 1930 | Attester | | Relying Party | | Verifier | 1931 '----------' '---------------' '----------' 1932 time(VG_a) | | 1933 | | | 1934 ~ ~ ~ 1935 | | | 1936 time(EG_a) | | 1937 |----Evidence------->| | 1938 | {time(EG_a)} time(ER_r)--Evidence{time(EG_a)}->| 1939 | | time(RG_v) 1940 | time(RA_r)<-Attestation Result---| 1941 | | {time(RX_v)} | 1942 ~ ~ ~ 1943 | | | 1944 | time(OP_r) | 1946 The time considerations in this example are equivalent to those 1947 discussed under Example 1 above. 1949 16.5. Example 5: Nonce-based Background-Check Model Example 1951 The following example illustrates a hypothetical Background-Check 1952 Model solution that uses nonces and thus does not require that any 1953 clocks are synchronized. In this example solution, a nonce is 1954 generated by a Verifier at the request of a Relying Party, when the 1955 Relying Party needs to send one to an Attester. 
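As a rough sketch of this nonce flow (hypothetical names throughout; a real Attester would sign the nonce together with its Claims, and both the nonce and the Evidence would be relayed via the Relying Party):

```python
# Toy sketch of the nonce-based Background-Check Model: the Verifier
# generates the nonce, the Relying Party relays it to the Attester,
# and the Verifier later checks that the Evidence echoes a nonce it
# actually issued. All names are invented for illustration.
import secrets

class Verifier:
    def __init__(self):
        self.outstanding = set()

    def new_nonce(self):
        # Nonce-based timekeeping requires state per issued nonce.
        nonce = secrets.token_hex(16)
        self.outstanding.add(nonce)
        return nonce

    def appraise(self, evidence):
        # Accept only Evidence carrying a nonce this Verifier issued,
        # and mark the nonce as used to prevent replay.
        if evidence["nonce"] not in self.outstanding:
            return None
        self.outstanding.discard(evidence["nonce"])
        return {"verdict": "pass", "claims": evidence["claims"]}

def attester_generate_evidence(nonce):
    # Stand-in for Evidence generation; a real Attester signs the
    # nonce and the Claims together.
    return {"claims": {"fw_version": "1.2.3"}, "nonce": nonce}

verifier = Verifier()
nonce = verifier.new_nonce()                   # requested by Relying Party
evidence = attester_generate_evidence(nonce)   # nonce relayed to Attester
result = verifier.appraise(evidence)           # Evidence relayed to Verifier
assert result is not None and result["verdict"] == "pass"

# Replayed Evidence (or a nonce the Verifier never issued) is rejected.
assert verifier.appraise(evidence) is None
```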
1957 .----------. .---------------. .----------. 1958 | Attester | | Relying Party | | Verifier | 1959 '----------' '---------------' '----------' 1960 time(VG_a) | | 1961 | | | 1962 ~ ~ ~ 1963 | | | 1964 | |<-------Nonce-----------time(NS_v) 1965 |<---Nonce-----------time(NR_r) | 1966 time(EG_a) | | 1967 |----Evidence{Nonce}--->| | 1968 | time(ER_r)--Evidence{Nonce}--->| 1969 | | time(RG_v) 1970 | time(RA_r)<-Attestation Result-| 1971 | | {time(RX_v)-time(RG_v)} | 1972 ~ ~ ~ 1973 | | | 1974 | time(OP_r) | 1976 The Verifier can check whether the Evidence is fresh, and whether a 1977 claim value is recent, the same as in Example 2 above. 1979 However, unlike in Example 2, the Relying Party can use the Nonce to 1980 determine whether the Attestation Result is fresh, by verifying that 1981 "time(OP_r)-time(NR_r) < Threshold". 1983 The Relying Party must still be careful, however, to not allow 1984 continued use beyond the period for which it deems the Attestation 1985 Result to remain valid. Thus, if the Attestation Result sends a 1986 validity lifetime in terms of "time(RX_v)-time(RG_v)", then the 1987 Relying Party can check "time(OP_r)-time(ER_r) < time(RX_v)- 1988 time(RG_v)". 1990 17. References 1992 17.1. Normative References 1994 [RFC7519] Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token 1995 (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015, 1996 . 1998 [RFC8392] Jones, M., Wahlstroem, E., Erdtman, S., and H. Tschofenig, 1999 "CBOR Web Token (CWT)", RFC 8392, DOI 10.17487/RFC8392, 2000 May 2018, . 2002 17.2. Informative References 2004 [CCC-DeepDive] 2005 Confidential Computing Consortium, "Confidential Computing 2006 Deep Dive", n.d., 2007 . 2009 [CTAP] FIDO Alliance, "Client to Authenticator Protocol", n.d., 2010 . 2014 [I-D.birkholz-rats-tuda] 2015 Fuchs, A., Birkholz, H., McDonald, I., and C. Bormann, 2016 "Time-Based Uni-Directional Attestation", Work in 2017 Progress, Internet-Draft, draft-birkholz-rats-tuda-04, 13 2018 January 2021, . 
2021 [I-D.birkholz-rats-uccs] 2022 Birkholz, H., O'Donoghue, J., Cam-Winget, N., and C. 2023 Bormann, "A CBOR Tag for Unprotected CWT Claims Sets", 2024 Work in Progress, Internet-Draft, draft-birkholz-rats- 2025 uccs-02, 2 December 2020, . 2028 [I-D.ietf-teep-architecture] 2029 Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler, 2030 "Trusted Execution Environment Provisioning (TEEP) 2031 Architecture", Work in Progress, Internet-Draft, draft- 2032 ietf-teep-architecture-13, 2 November 2020, 2033 . 2036 [I-D.tschofenig-tls-cwt] 2037 Tschofenig, H. and M. Brossard, "Using CBOR Web Tokens 2038 (CWTs) in Transport Layer Security (TLS) and Datagram 2039 Transport Layer Security (DTLS)", Work in Progress, 2040 Internet-Draft, draft-tschofenig-tls-cwt-02, 13 July 2020, 2041 . 2044 [OPCUA] OPC Foundation, "OPC Unified Architecture Specification, 2045 Part 2: Security Model, Release 1.03", OPC 10000-2 , 25 2046 November 2015, . 2050 [RFC4949] Shirey, R., "Internet Security Glossary, Version 2", 2051 FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007, 2052 . 2054 [RFC8322] Field, J., Banghart, S., and D. Waltermire, "Resource- 2055 Oriented Lightweight Information Exchange (ROLIE)", 2056 RFC 8322, DOI 10.17487/RFC8322, February 2018, 2057 . 2059 [strengthoffunction] 2060 NISC, "Strength of Function", n.d., 2061 . 2064 [TCG-DICE] Trusted Computing Group, "DICE Certificate Profiles", 2065 n.d., . 2069 [TCGarch] Trusted Computing Group, "Trusted Platform Module Library 2070 - Part 1: Architecture", 8 November 2019, 2071 . 2074 [WebAuthN] W3C, "Web Authentication: An API for accessing Public Key 2075 Credentials", n.d., . 2077 Contributors 2079 Monty Wiseman 2081 Email: montywiseman32@gmail.com 2083 Liang Xia 2085 Email: frank.xialiang@huawei.com 2087 Laurence Lundblade 2089 Email: lgl@island-resort.com 2091 Eliot Lear 2093 Email: elear@cisco.com 2095 Jessica Fitzgerald-McKay 2097 Sarah C. 
Helble 2099 Andrew Guinn 2101 Peter Loscocco 2103 Email: pete.loscocco@gmail.com 2105 Eric Voit 2106 Thomas Fossati 2108 Email: thomas.fossati@arm.com 2110 Paul Rowe 2112 Carsten Bormann 2114 Email: cabo@tzi.org 2116 Giri Mandyam 2118 Email: mandyam@qti.qualcomm.com 2120 Authors' Addresses 2122 Henk Birkholz 2123 Fraunhofer SIT 2124 Rheinstrasse 75 2125 64295 Darmstadt 2126 Germany 2128 Email: henk.birkholz@sit.fraunhofer.de 2130 Dave Thaler 2131 Microsoft 2132 United States of America 2134 Email: dthaler@microsoft.com 2136 Michael Richardson 2137 Sandelman Software Works 2138 Canada 2140 Email: mcr+ietf@sandelman.ca 2142 Ned Smith 2143 Intel Corporation 2144 United States of America 2146 Email: ned.smith@intel.com 2147 Wei Pan 2148 Huawei Technologies 2150 Email: william.panwei@huawei.com