2 RATS Working Group H. Birkholz 3 Internet-Draft Fraunhofer SIT 4 Intended status: Informational D. Thaler 5 Expires: 11 June 2021 Microsoft 6 M. Richardson 7 Sandelman Software Works 8 N. Smith 9 Intel 10 W. Pan 11 Huawei Technologies 12 8 December 2020 14 Remote Attestation Procedures Architecture 15 draft-ietf-rats-architecture-08 17 Abstract 19 In network protocol exchanges it is often the case that one entity 20 requires believable evidence about the operational state of a remote 21 peer.
Such evidence is typically conveyed as claims about the peer's 22 software and hardware platform, and is subsequently appraised in 23 order to assess the peer's trustworthiness. The process of 24 generating and appraising this kind of evidence is known as remote 25 attestation. This document describes an architecture for remote 26 attestation procedures that generate, convey, and appraise evidence 27 about a peer's operational state. 29 Note to Readers 31 Discussion of this document takes place on the RATS Working Group 32 mailing list (rats@ietf.org), which is archived at 33 https://mailarchive.ietf.org/arch/browse/rats/ 34 (https://mailarchive.ietf.org/arch/browse/rats/). 36 Source for this draft and an issue tracker can be found at 37 https://github.com/ietf-rats-wg/architecture (https://github.com/ 38 ietf-rats-wg/architecture). 40 Status of This Memo 42 This Internet-Draft is submitted in full conformance with the 43 provisions of BCP 78 and BCP 79. 45 Internet-Drafts are working documents of the Internet Engineering 46 Task Force (IETF). Note that other groups may also distribute 47 working documents as Internet-Drafts. The list of current Internet- 48 Drafts is at https://datatracker.ietf.org/drafts/current/. 50 Internet-Drafts are draft documents valid for a maximum of six months 51 and may be updated, replaced, or obsoleted by other documents at any 52 time. It is inappropriate to use Internet-Drafts as reference 53 material or to cite them other than as "work in progress." 55 This Internet-Draft will expire on 11 June 2021. 57 Copyright Notice 59 Copyright (c) 2020 IETF Trust and the persons identified as the 60 document authors. All rights reserved. 62 This document is subject to BCP 78 and the IETF Trust's Legal 63 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 64 license-info) in effect on the date of publication of this document. 
65 Please review these documents carefully, as they describe your rights 66 and restrictions with respect to this document. Code Components 67 extracted from this document must include Simplified BSD License text 68 as described in Section 4.e of the Trust Legal Provisions and are 69 provided without warranty as described in the Simplified BSD License. 71 Table of Contents 73 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 74 2. Reference Use Cases . . . . . . . . . . . . . . . . . . . . . 4 75 2.1. Network Endpoint Assessment . . . . . . . . . . . . . . . 4 76 2.2. Confidential Machine Learning (ML) Model Protection . . . 5 77 2.3. Confidential Data Retrieval . . . . . . . . . . . . . . . 5 78 2.4. Critical Infrastructure Control . . . . . . . . . . . . . 6 79 2.5. Trusted Execution Environment (TEE) Provisioning . . . . 6 80 2.6. Hardware Watchdog . . . . . . . . . . . . . . . . . . . . 6 81 2.7. FIDO Biometric Authentication . . . . . . . . . . . . . . 7 82 3. Architectural Overview . . . . . . . . . . . . . . . . . . . 7 83 3.1. Appraisal Policies . . . . . . . . . . . . . . . . . . . 9 84 3.2. Reference Values . . . . . . . . . . . . . . . . . . . . 9 85 3.3. Two Types of Environments of an Attester . . . . . . . . 9 86 3.4. Layered Attestation Environments . . . . . . . . . . . . 10 87 3.5. Composite Device . . . . . . . . . . . . . . . . . . . . 12 88 4. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 15 89 4.1. Roles . . . . . . . . . . . . . . . . . . . . . . . . . . 15 90 4.2. Artifacts . . . . . . . . . . . . . . . . . . . . . . . . 15 91 5. Topological Models . . . . . . . . . . . . . . . . . . . . . 16 92 5.1. Passport Model . . . . . . . . . . . . . . . . . . . . . 17 93 5.2. Background-Check Model . . . . . . . . . . . . . . . . . 18 94 5.3. Combinations . . . . . . . . . . . . . . . . . . . . . . 19 95 6. Roles and Entities . . . . . . . . . . . . . . . . . . . . . 20 96 7. Trust Model . . . . . . . . . . . . . . . . 
. . . . . . . . . 21 97 7.1. Relying Party . . . . . . . . . . . . . . . . . . . . . . 21 98 7.2. Attester . . . . . . . . . . . . . . . . . . . . . . . . 22 99 7.3. Relying Party Owner . . . . . . . . . . . . . . . . . . . 22 100 7.4. Verifier . . . . . . . . . . . . . . . . . . . . . . . . 22 101 7.5. Endorser, Reference Value Provider, and Verifier Owner . 24 102 8. Conceptual Messages . . . . . . . . . . . . . . . . . . . . . 24 103 8.1. Evidence . . . . . . . . . . . . . . . . . . . . . . . . 24 104 8.2. Endorsements . . . . . . . . . . . . . . . . . . . . . . 25 105 8.3. Attestation Results . . . . . . . . . . . . . . . . . . . 25 106 9. Claims Encoding Formats . . . . . . . . . . . . . . . . . . . 26 107 10. Freshness . . . . . . . . . . . . . . . . . . . . . . . . . . 28 108 11. Privacy Considerations . . . . . . . . . . . . . . . . . . . 30 109 12. Security Considerations . . . . . . . . . . . . . . . . . . . 30 110 12.1. Attester and Attestation Key Protection . . . . . . . . 31 111 12.1.1. On-Device Attester and Key Protection . . . . . . . 31 112 12.1.2. Attestation Key Provisioning Processes . . . . . . . 32 113 12.2. Integrity Protection . . . . . . . . . . . . . . . . . . 32 114 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 33 115 14. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 33 116 15. Notable Contributions . . . . . . . . . . . . . . . . . . . . 34 117 16. Appendix A: Time Considerations . . . . . . . . . . . . . . . 34 118 16.1. Example 1: Timestamp-based Passport Model Example . . . 35 119 16.2. Example 2: Nonce-based Passport Model Example . . . . . 37 120 16.3. Example 3: Handle-based Passport Model Example . . . . . 38 121 16.4. Example 4: Timestamp-based Background-Check Model 122 Example . . . . . . . . . . . . . . . . . . . . . . . . 40 123 16.5. Example 5: Nonce-based Background-Check Model Example . 41 124 17. References . . . . . . . . . . . . . . . . . . . . . . . . . 42 125 17.1. 
Normative References . . . . . . . . . . . . . . . . . . 42 126 17.2. Informative References . . . . . . . . . . . . . . . . . 42 127 Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . 43 128 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 44 130 1. Introduction 132 In Remote Attestation Procedures (RATS), one peer (the "Attester") 133 produces believable information about itself - Evidence - to enable a 134 remote peer (the "Relying Party") to decide whether to consider that 135 Attester a trustworthy peer or not. RATS are facilitated by an 136 additional vital party, the Verifier. 138 The Verifier appraises Evidence via appraisal policies and creates 139 Attestation Results to support Relying Parties in their decision 140 process. This document defines a flexible architecture consisting of 141 attestation roles and their interactions via conceptual messages. 142 Additionally, this document defines a universal set of terms that can 143 be mapped to various existing and emerging Remote Attestation 144 Procedures. Common topological models and the data flows associated 145 with them, such as the "Passport Model" and the "Background-Check 146 Model", are illustrated. The purpose is to define useful terminology 147 for attestation and enable readers to map their solution architecture 148 to the canonical attestation architecture provided here. Having a 149 common terminology that provides well-understood meanings for common 150 themes such as roles, device composition, topological models, and 151 appraisal is vital for semantic interoperability across solutions and 152 platforms involving multiple vendors and providers. 154 Amongst other things, this document is about trust and 155 trustworthiness. Trust is a choice one makes about another system. 156 Trustworthiness is a quality of the other system that can be used 157 in making one's decision to trust it or not.
This is a subtle 158 difference, and being familiar with it is crucial for 159 using this document. Additionally, the concepts of freshness and 160 trust relationships with respect to RATS are elaborated on to enable 161 implementers to choose appropriate solutions to compose their Remote 162 Attestation Procedures. 164 2. Reference Use Cases 166 This section covers a number of representative use cases for remote 167 attestation, independent of specific solutions. The purpose is to 168 provide motivation for various aspects of the architecture presented 169 in this draft. Many other use cases exist, and this document does 170 not intend to have a complete list, only to have a set of use cases 171 that collectively cover all the functionality required in the 172 architecture. 174 Each use case includes a description followed by a summary of the 175 Attester and Relying Party roles. 177 2.1. Network Endpoint Assessment 179 Network operators want a trustworthy report that includes identity 180 and version information of the hardware and software on the 181 machines attached to their network, for purposes such as inventory, 182 audit, anomaly detection, record maintenance and/or trending reports 183 (logging). The network operator may also want a policy by which full 184 access is only granted to devices that meet some definition of 185 hygiene, and so wants to get claims about such information and verify 186 its validity. Remote attestation is desired to prevent vulnerable or 187 compromised devices from getting access to the network and 188 potentially harming others. 190 Typically, solutions start with a specific component (called a "root 191 of trust") that provides device identity and protected storage for 192 measurements. The system components perform a series of measurements 193 that may be signed by the root of trust, considered as Evidence about 194 the hardware, firmware, BIOS, software, etc. that is running.
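The measurement-and-signing flow described in the paragraph above can be sketched roughly as follows. This is an illustrative Python sketch only: the claim format, the key handling, and the use of an HMAC as a stand-in for the root of trust's signature are all assumptions of this sketch, not details prescribed by this architecture.

```python
import hashlib
import hmac
import json

# Hypothetical device-unique key; a real root of trust keeps an
# asymmetric attestation key in protected hardware.  The HMAC below
# merely stands in for the root of trust's signature.
ROT_KEY = b"device-unique-protected-key"

def measure(component: bytes) -> str:
    """Measure a system component (here, a hash of its code/config)."""
    return hashlib.sha256(component).hexdigest()

def make_evidence(components: dict, nonce: str) -> dict:
    """Collect measurements as Claims and bind them into signed Evidence."""
    claims = {name: measure(blob) for name, blob in components.items()}
    claims["nonce"] = nonce  # freshness value supplied by the Verifier
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ROT_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

evidence = make_evidence(
    {"bios": b"bios-v7", "firmware": b"fw-v1.2", "os": b"kernel-5.10"},
    nonce="abc123",
)
```

Including a Verifier-supplied nonce in the signed Claims is one way to provide freshness, a topic treated in more detail in Section 10.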
196 Attester: A device desiring access to a network 198 Relying Party: A network infrastructure device such as a router, 199 switch, or access point 201 2.2. Confidential Machine Learning (ML) Model Protection 203 A device manufacturer wants to protect its intellectual property. 204 This is primarily the ML model it developed and runs in the devices 205 purchased by its customers. The goals for the protection include 206 preventing attackers, potentially the customer themselves, from 207 seeing the details of the model. 209 This typically works by having some protected environment in the 210 device go through a remote attestation with some manufacturer service 211 that can assess its trustworthiness. If remote attestation succeeds, 212 then the manufacturer service releases either the model, or a key to 213 decrypt a model the Attester already has in encrypted form, to the 214 requester. 216 Attester: A device desiring to run an ML model 218 Relying Party: A server or service holding ML models it desires to 219 protect 221 2.3. Confidential Data Retrieval 223 This is a generalization of the ML model use case above, where the 224 data can be any highly confidential data, such as health data about 225 customers, payroll data about employees, future business plans, etc. 226 An assessment of system state is made against a set of policies to 227 evaluate the state of a system using attestations for the system 228 requesting data. Attestation is desired to prevent leaking data to 229 compromised devices. 231 Attester: An entity desiring to retrieve confidential data 233 Relying Party: An entity that holds confidential data for release to 234 authorized entities 236 2.4. Critical Infrastructure Control 238 In this use case, potentially dangerous physical equipment (e.g., 239 power grid, traffic control, hazardous chemical processing, etc.) is 240 connected to a network. 
The organization managing such 241 infrastructure needs to ensure that only authorized code and users 242 can control such processes, and that they are protected from malware or 243 other threats. When a protocol operation can affect some critical 244 system, the device attached to the critical equipment thus wants some 245 assurance that the requester has not been compromised. As such, 246 remote attestation can be used to only accept commands from 247 requesters that are within policy. 249 Attester: A device or application wishing to control physical 250 equipment 252 Relying Party: A device or application connected to potentially 253 dangerous physical equipment (hazardous chemical processing, 254 traffic control, power grid, etc.) 256 2.5. Trusted Execution Environment (TEE) Provisioning 258 A "Trusted Application Manager (TAM)" server is responsible for 259 managing the applications running in the TEE of a client device. To 260 do this, the TAM wants to assess the state of a TEE, or of 261 applications in the TEE, of a client device. The TEE conducts a 262 remote attestation procedure with the TAM, which can then decide 263 whether the TEE is already in compliance with the TAM's latest 264 policy, or if the TAM needs to uninstall, update, or install approved 265 applications in the TEE to bring it back into compliance with the 266 TAM's policy. 268 Attester: A device with a trusted execution environment capable of 269 running trusted applications that can be updated 271 Relying Party: A Trusted Application Manager 273 2.6. Hardware Watchdog 275 There is a class of malware that holds a device hostage and does not 276 allow it to reboot to prevent updates from being applied. This can 277 be a significant problem, because it allows a fleet of devices to be 278 held hostage for ransom. 280 In this case, the Relying Party is the watchdog timer in the TPM/ 281 secure enclave itself, as described in [TCGarch] section 43.3.
The 282 Attestation Results are returned to the device, and provided to the 283 enclave. 285 If the watchdog does not receive regular and fresh Attestation 286 Results as to the system's health, then it forces a reboot. 288 Attester: The device that should be protected from being held 289 hostage for a long period of time 291 Relying Party: A remote server that will securely grant the Attester 292 permission to continue operating (i.e., not reboot) for a period 293 of time 295 2.7. FIDO Biometric Authentication 297 In the Fast IDentity Online (FIDO) protocol [WebAuthN], [CTAP], the 298 device in the user's hand authenticates the human user, whether by 299 biometrics (such as fingerprints), or by PIN and password. FIDO 300 authentication puts a large amount of trust in the device compared to 301 typical password authentication because it is the device that 302 verifies the biometric, PIN and password inputs from the user, not 303 the server. For the Relying Party to know that the authentication is 304 trustworthy, the Relying Party needs to know that the Authenticator 305 part of the device is trustworthy. The FIDO protocol employs remote 306 attestation for this. 308 The FIDO protocol supports several remote attestation protocols and a 309 mechanism by which new ones can be registered and added. Remote 310 attestation defined by RATS is thus a candidate for use in the FIDO 311 protocol. 313 Other biometric authentication protocols such as the Chinese IFAA 314 standard and WeChat Pay as well as Google Pay make use of attestation 315 in one form or another. 317 Attester: Every FIDO Authenticator contains an Attester. 319 Relying Party: Any web site, mobile application back end or service 320 that does biometric authentication. 322 3. Architectural Overview 324 Figure 1 depicts the data that flows between different roles, 325 independent of protocol or use case.
327 ************ ************* ************ ***************** 328 * Endorser * * Reference * * Verifier * * Relying Party * 329 ************ * Value * * Owner * * Owner * 330 | * Provider * ************ ***************** 331 | ************* | | 332 | | | | 333 |Endorsements |Reference |Appraisal |Appraisal 334 | |Values |Policy |Policy for 335 | | |for |Attestation 336 .-----------. | |Evidence |Results 337 | | | | 338 | | | | 339 v v v | 340 .---------------------------. | 341 .----->| Verifier |------. | 342 | '---------------------------' | | 343 | | | 344 | Attestation| | 345 | Results | | 346 | Evidence | | 347 | | | 348 | v v 349 .----------. .---------------. 350 | Attester | | Relying Party | 351 '----------' '---------------' 353 Figure 1: Conceptual Data Flow 355 An Attester creates Evidence that is conveyed to a Verifier. 357 The Verifier uses the Evidence, and any Endorsements from Endorsers, 358 by applying an Appraisal Policy for Evidence to assess the 359 trustworthiness of the Attester, and generates Attestation Results 360 for use by Relying Parties. The Appraisal Policy for Evidence might 361 be obtained from an Endorser along with the Endorsements, and/or 362 might be obtained via some other mechanism such as being configured 363 in the Verifier by the Verifier Owner. 365 The Relying Party uses Attestation Results by applying its own 366 appraisal policy to make application-specific decisions such as 367 authorization decisions. The Appraisal Policy for Attestation 368 Results is configured in the Relying Party by the Relying Party 369 Owner, and/or is programmed into the Relying Party. 371 3.1. Appraisal Policies 373 The Verifier, when appraising Evidence, or the Relying Party, when 374 appraising Attestation Results, checks the values of some claims 375 against constraints specified in its appraisal policy. 
Such 376 constraints might involve a comparison for equality against a 377 Reference Value, or a check for being in a range bounded by Reference 378 Values, or membership in a set of Reference Values, or a check 379 against values in other claims, or any other test.

381 3.2. Reference Values

383 Reference Values used in appraisal come from a Reference Value 384 Provider and are then used by the appraisal policy. They might be 385 conveyed in any number of ways, including:

*  as part of the appraisal 386 policy itself, if the Verifier Owner either acquires Reference 387 Values from a Reference Value Provider or is itself a Reference Value 388 Provider;

*  as part of an Endorsement, if the Endorser either 389 acquires Reference Values from a Reference Value Provider or is 390 itself a Reference Value Provider; or

*  via separate communication.

392 The actual data format and semantics of any Reference Values are 393 specific to claims and implementations. This architecture document 394 does not define any general purpose format for them or general means 395 for comparison.

397 3.3. Two Types of Environments of an Attester

399 An Attester consists of at least one Attesting Environment and at 400 least one Target Environment. In some implementations, the Attesting 401 and Target Environments might be combined. Other implementations 402 might have multiple Attesting and Target Environments, such as in the 403 examples described in more detail in Section 3.4 and Section 3.5. 404 Other examples may exist, and the examples discussed could even be 405 combined into more complex implementations. 407 Claims are collected from Target Environments, as shown in Figure 2. 408 That is, Attesting Environments collect the values and the 409 information to be represented in Claims, by reading system registers 410 and variables, calling into subsystems, taking measurements on code 411 or memory of the Target Environment, and so on.
Attesting 412 Environments then format the claims appropriately, and typically use 413 key material and cryptographic functions, such as signing or cipher 414 algorithms, to create Evidence. There is no limit to or requirement 415 on the places that an Attesting Environment can exist, but they 416 typically are in Trusted Execution Environments (TEE), embedded 417 Secure Elements (eSE), and BIOS firmware. An execution environment 418 may not, by default, be capable of claims collection for a given 419 Target Environment. Execution environments that are designed to be 420 capable of claims collection are referred to in this document as 421 Attesting Environments. 423 .--------------------------------. 424 | | 425 | Verifier | 426 | | 427 '--------------------------------' 428 ^ 429 | 430 .-------------------------|----------. 431 | | | 432 | .----------------. | | 433 | | Target | | | 434 | | Environment | | | 435 | | | | Evidence | 436 | '----------------' | | 437 | | | | 438 | | | | 439 | Collect | | | 440 | Claims | | | 441 | | | | 442 | v | | 443 | .-------------. | 444 | | Attesting | | 445 | | Environment | | 446 | | | | 447 | '-------------' | 448 | Attester | 449 '------------------------------------' 451 Figure 2: Two Types of Environments 453 3.4. Layered Attestation Environments 455 By definition, the Attester role creates Evidence. An Attester may 456 consist of one or more nested or staged environments, adding 457 complexity to the architectural structure. The unifying component is 458 the root of trust and the nested, staged, or chained attestation 459 Evidence produced. The nested or chained structure includes Claims, 460 collected by the Attester to aid in the assurance or believability of 461 the attestation Evidence. 463 Figure 3 depicts an example of a device that includes (A) a BIOS 464 stored in read-only memory in this example, (B) an updatable 465 bootloader, and (C) an operating system kernel. 467 .----------. .----------. 
468 | | | | 469 | Endorser |------------------->| Verifier | 470 | | Endorsements | | 471 '----------' for A, B, and C '----------' 472 ^ 473 .------------------------------------. | 474 | | | 475 | .---------------------------. | | 476 | | Target | | | Layered 477 | | Environment | | | Evidence 478 | | C | | | for 479 | '---------------------------' | | B and C 480 | Collect | | | 481 | claims | | | 482 | .---------------|-----------. | | 483 | | Target v | | | 484 | | Environment .-----------. | | | 485 | | B | Attesting | | | | 486 | | |Environment|-----------' 487 | | | B | | | 488 | | '-----------' | | 489 | | ^ | | 490 | '---------------------|-----' | 491 | Collect | | Evidence | 492 | claims v | for B | 493 | .-----------. | 494 | | Attesting | | 495 | |Environment| | 496 | | A | | 497 | '-----------' | 498 | | 499 '------------------------------------' 501 Figure 3: Layered Attester 503 Attesting Environment A, the read-only BIOS in this example, has to 504 ensure the integrity of the bootloader (Target Environment B). There 505 are potentially multiple kernels to boot, and the decision is up to 506 the bootloader. Only a bootloader with intact integrity will make an 507 appropriate decision. Therefore, these Claims have to be measured 508 securely. At this stage of the boot-cycle of the device, the Claims 509 collected typically cannot be composed into Evidence. 511 After the boot sequence is started, the BIOS conducts the most 512 important and defining feature of layered attestation, which is that 513 the successfully measured Target Environment B now becomes (or 514 contains) an Attesting Environment for the next layer. This 515 procedure in Layered Attestation is sometimes called "staging". It 516 is important that the new Attesting Environment B not be able to 517 alter any Claims about its own Target Environment B. 
This can be 518 ensured by having those Claims be either signed by Attesting Environment 519 A or stored in an untamperable manner by Attesting Environment A. 521 Continuing with this example, the bootloader's Attesting Environment 522 B is now in charge of collecting Claims about Target Environment C, 523 which in this example is the kernel to be booted. The final Evidence 524 thus contains two sets of Claims: one set about the bootloader as 525 measured and signed by the BIOS, plus a set of Claims about the 526 kernel as measured and signed by the bootloader. 528 This example could be extended further by making the kernel become 529 another Attesting Environment for an application as another Target 530 Environment. This would result in a third set of Claims in the 531 Evidence pertaining to that application. 533 The essence of this example is a cascade of staged environments. 534 Each environment has the responsibility of measuring the next 535 environment before the next environment is started. In general, the 536 number of layers may vary by device or implementation, and an 537 Attesting Environment might even have multiple Target Environments 538 that it measures, rather than only one as shown in Figure 3. 540 3.5. Composite Device 542 A Composite Device is an entity composed of multiple sub-entities 543 such that its trustworthiness has to be determined by the appraisal 544 of all these sub-entities. 546 Each sub-entity has at least one Attesting Environment collecting the 547 claims from at least one Target Environment; this sub-entity then 548 generates Evidence about its trustworthiness. Therefore each 549 sub-entity can be called an Attester. Among all the Attesters, only some may 550 have the ability to communicate with the Verifier, 551 while others do not. 553 For example, a carrier-grade router consists of a chassis and 554 multiple slots. The trustworthiness of the router depends on all its 555 slots' trustworthiness.
Each slot has an Attesting Environment such 556 as a TEE collecting the claims of its boot process, after which it 557 generates Evidence from the claims. Among these slots, only a main 558 slot can communicate with the Verifier while other slots cannot. But 559 other slots can communicate with the main slot by the links between 560 them inside the router. So the main slot collects the Evidence of 561 other slots, produces the final Evidence of the whole router and 562 conveys the final Evidence to the Verifier. Therefore the router is 563 a Composite Device, each slot is an Attester, and the main slot is 564 the lead Attester. 566 Another example is a multi-chassis router composed of multiple single 567 carrier-grade routers. The multi-chassis router provides higher 568 throughput by interconnecting multiple routers and can be logically 569 treated as one router for simpler management. A multi-chassis router 570 provides a management point that connects to the Verifier. Other 571 routers are only connected to the main router by the network cables, 572 and therefore they are managed and appraised via this main router's 573 help. So, in this case, the multi-chassis router is the Composite 574 Device, each router is an Attester and the main router is the lead 575 Attester. 577 Figure 4 depicts the conceptual data flow for a Composite Device. 579 .-----------------------------. 580 | Verifier | 581 '-----------------------------' 582 ^ 583 | 584 | Evidence of 585 | Composite Device 586 | 587 .----------------------------------|-------------------------------. 588 | .--------------------------------|-----. .------------. | 589 | | Collect .------------. | | | | 590 | | Claims .--------->| Attesting |<--------| Attester B |-. | 591 | | | |Environment | | '------------. | | 592 | | .----------------. | |<----------| Attester C |-. | 593 | | | Target | | | | '------------' | | 594 | | | Environment(s) | | |<------------| ... 
| | 595 | | | | '------------' | Evidence '------------' | 596 | | '----------------' | of | 597 | | | Attesters | 598 | | lead Attester A | (via Internal Links or | 599 | '--------------------------------------' Network Connections) | 600 | | 601 | Composite Device | 602 '------------------------------------------------------------------' 604 Figure 4: Composite Device 606 In the Composite Device, each Attester generates its own Evidence by 607 its Attesting Environment(s) collecting the claims from its Target 608 Environment(s). The lead Attester collects the Evidence of all other 609 Attesters and then generates the Evidence of the whole Composite 610 Attester. 612 An entity can take on multiple RATS roles (e.g., Attester, Verifier, 613 Relying Party, etc.) at the same time. The combination of roles can 614 be arbitrary. For example, in this Composite Device scenario, the 615 entity inside the lead Attester can also take on the role of a 616 Verifier, and the outside entity of Verifier can take on the role of 617 a Relying Party. After collecting the Evidence of other Attesters, 618 this inside Verifier uses Endorsements and appraisal policies 619 (obtained the same way as any other Verifier) in the verification 620 process to generate Attestation Results. The inside Verifier then 621 conveys the Attestation Results of other Attesters, whether in the 622 same conveyance protocol as the Evidence or not, to the outside 623 Verifier. 625 In this situation, the trust model described in Section 7 is also 626 suitable for this inside Verifier. 628 4. Terminology 630 This document uses the following terms. 632 4.1. Roles 634 Attester: A role performed by an entity (typically a device) whose 635 Evidence must be appraised in order to infer the extent to which 636 the Attester is considered trustworthy, such as when deciding 637 whether it is authorized to perform some operation. Produces: 638 Evidence. 
640 Relying Party: A role performed by an entity that depends on the 641 validity of information about an Attester, for purposes of 642 reliably applying application-specific actions. Compare /relying 643 party/ in [RFC4949]. Consumes: Attestation Results. 645 Verifier: A role performed by an entity that appraises the validity 646 of Evidence about an Attester and produces Attestation Results to 647 be used by a Relying Party. Consumes: Evidence, Reference Values, 648 Endorsements, Appraisal Policy for Evidence; Produces: Attestation 649 Results. 651 Relying Party Owner: An entity (typically an administrator) that is 652 authorized to configure Appraisal Policy for Attestation Results 653 in a Relying Party. Produces: Appraisal Policy for Attestation 654 Results. 656 Verifier Owner: An entity (typically an administrator) that is 657 authorized to configure Appraisal Policy for Evidence in a 658 Verifier. Produces: Appraisal Policy for Evidence. 660 Endorser: An entity (typically a manufacturer) whose Endorsements 661 help Verifiers appraise the authenticity of Evidence. Produces: 662 Endorsements. 664 Reference Value Provider: An entity (typically a manufacturer) whose 665 Reference Values help Verifiers appraise Evidence to determine if 666 acceptable known claims have been recorded by the Attester. 667 Produces: Reference Values. 669 4.2. Artifacts 671 Claim: A piece of asserted information, often in the form of a name/ 672 value pair. Claims make up the usual structure of Evidence. 673 Compare /claim/ in [RFC7519]. 675 Endorsement: A secure statement that an Endorser vouches for the 676 integrity of an Attester's various capabilities such as Claims 677 collection and Evidence signing. Used By: Verifier; Produced By: 678 Endorser. 680 Evidence: A set of Claims generated by an Attester to be appraised 681 by a Verifier. Evidence may include configuration data, 682 measurements, telemetry, or inferences. Used By: Verifier; 683 Produced By: Attester.
685 Attestation Result: The output generated by a Verifier, typically 686 including information about an Attester, where the Verifier 687 vouches for the validity of the results. Used By: Relying Party; 688 Produced By: Verifier. 690 Appraisal Policy for Evidence: A set of rules that informs how a 691 Verifier evaluates the validity of information about an Attester. 692 Compare /security policy/ in [RFC4949]. Used by: Verifier; 693 Produced by: Verifier Owner. 695 Appraisal Policy for Attestation Results: A set of rules that directs 696 how a Relying Party uses the Attestation Results about an 697 Attester that were generated by Verifiers. Compare /security policy/ in 698 [RFC4949]. Used by: Relying Party; Produced by: Relying Party 699 Owner. 701 Reference Values: A set of values against which values of Claims can 702 be compared as part of applying an Appraisal Policy for Evidence. 703 Reference Values are sometimes referred to in other documents as 704 known-good values, golden measurements, or nominal values, 705 although those terms typically assume comparison for equality, 706 whereas here Reference Values might be more general and be used in 707 any sort of comparison. Used By: Verifier; Produced By: Reference 708 Value Provider. 710 5. Topological Models 712 Figure 1 shows a basic model for communication between an Attester, a 713 Verifier, and a Relying Party. The Attester conveys its Evidence to 714 the Verifier for appraisal, and the Relying Party gets the 715 Attestation Result from the Verifier. There are multiple other 716 possible models. This section includes some reference models. This 717 is not intended to be a restrictive list, and other variations may 718 exist. 720 5.1. Passport Model 722 The passport model is so named because of its resemblance to how 723 nations issue passports to their citizens. The nature of the 724 Evidence that an individual needs to provide to its local authority 725 is specific to the country involved.
The citizen retains control of 726 the resulting passport document and presents it to other entities 727 when it needs to assert a citizenship or identity claim, such as to an 728 airport immigration desk. The passport is considered sufficient 729 because it vouches for the citizenship and identity claims, and it is 730 issued by a trusted authority. Thus, in this immigration desk 731 analogy, the passport issuing agency is a Verifier, the passport is 732 an Attestation Result, and the immigration desk is a Relying Party. 734 In this model, an Attester conveys Evidence to a Verifier, which 735 compares the Evidence against its appraisal policy. The Verifier 736 then gives back an Attestation Result. If the Attestation Result is 737 a successful one, the Attester can then present the Attestation 738 Result (and possibly additional Claims) to a Relying Party, which 739 then compares this information against its own appraisal policy. 741 There are three ways in which the process may fail. First, the 742 Verifier may refuse to issue the Attestation Result due to some error 743 in processing, or some missing input to the Verifier. The second way 744 in which the process may fail is when the Relying Party examines the 745 Attestation Result and, based upon its appraisal policy, 746 the result does not pass the policy. The third way is when the 747 Verifier is unreachable. 749 Since the resource access protocol between the Attester and Relying 750 Party includes an Attestation Result, in this model the details of 751 that protocol constrain the serialization format of the Attestation 752 Result. The format of the Evidence, on the other hand, is only 753 constrained by the Attester-Verifier remote attestation protocol.
755 +-------------+ 756 | | Compare Evidence 757 | Verifier | against appraisal policy 758 | | 759 +-------------+ 760 ^ | 761 Evidence| |Attestation 762 | | Result 763 | v 764 +----------+ +---------+ 765 | |------------->| |Compare Attestation 766 | Attester | Attestation | Relying | Result against 767 | | Result | Party | appraisal 768 +----------+ +---------+ policy 770 Figure 5: Passport Model 772 5.2. Background-Check Model 774 The background-check model is so named because of its resemblance to 775 how employers and volunteer organizations perform background checks. 776 When a prospective employee provides claims about education or 777 previous experience, the employer will contact the respective 778 institutions or former employers to validate the claim. Volunteer 779 organizations often perform police background checks on volunteers in 780 order to determine the volunteer's trustworthiness. Thus, in this 781 analogy, a prospective volunteer is an Attester, the organization is 782 the Relying Party, and the organization that issues a report is a 783 Verifier. 785 In this model, an Attester conveys Evidence to a Relying Party, which 786 simply passes it on to a Verifier. The Verifier then compares the 787 Evidence against its appraisal policy, and returns an Attestation 788 Result to the Relying Party. The Relying Party then compares the 789 Attestation Result against its own appraisal policy. 791 The resource access protocol between the Attester and Relying Party 792 includes Evidence rather than an Attestation Result, but that 793 Evidence is not processed by the Relying Party. Since the Evidence 794 is merely forwarded on to a trusted Verifier, any serialization 795 format can be used for Evidence because the Relying Party does not 796 need a parser for it. The only requirement is that the Evidence can 797 be _encapsulated in_ the format required by the resource access 798 protocol between the Attester and Relying Party.
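The encapsulation property described above can be sketched in code. This is a minimal illustration, not part of the architecture: the function names (forward_evidence, toy_verifier) and the base64-in-JSON wrapping are hypothetical choices standing in for whatever the resource access protocol actually uses. The point is that the Relying Party treats Evidence as opaque bytes and never parses it.

```python
import base64
import json

def forward_evidence(evidence: bytes, verifier_send) -> dict:
    """Relying Party in the background-check model: wrap the Attester's
    Evidence opaquely and pass it to a trusted Verifier.

    The Relying Party never parses the Evidence; it only encapsulates
    the bytes in the format its protocol expects (base64-in-JSON here,
    purely for illustration) and returns the Attestation Result.
    """
    request = {"evidence": base64.b64encode(evidence).decode("ascii")}
    return verifier_send(json.dumps(request))

# A stand-in Verifier: decodes the encapsulated Evidence and returns a
# minimal Attestation Result (a real Verifier would appraise the
# Evidence against its Appraisal Policy for Evidence).
def toy_verifier(request_json: str) -> dict:
    evidence = base64.b64decode(json.loads(request_json)["evidence"])
    return {"compliant": evidence == b"good-claims"}

print(forward_evidence(b"good-claims", toy_verifier))
```

Because the Evidence is only ever encapsulated, the Attester could switch its Evidence serialization (CBOR, TPM structures, etc.) without any change to the Relying Party code above.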
800 However, like in the Passport model, an Attestation Result is still 801 consumed by the Relying Party and so the serialization format of the 802 Attestation Result is still important. If the Relying Party is a 803 constrained node whose purpose is to serve a given type of resource 804 using a standard resource access protocol, it already needs the 805 parser(s) required by that existing protocol. Hence, the ability to 806 let the Relying Party obtain an Attestation Result in the same 807 serialization format allows minimizing the code footprint and attack 808 surface area of the Relying Party, especially if the Relying Party is 809 a constrained node. 811 +-------------+ 812 | | Compare Evidence 813 | Verifier | against appraisal 814 | | policy 815 +-------------+ 816 ^ | 817 Evidence| |Attestation 818 | | Result 819 | v 820 +------------+ +-------------+ 821 | |-------------->| | Compare Attestation 822 | Attester | Evidence | Relying | Result against 823 | | | Party | appraisal policy 824 +------------+ +-------------+ 826 Figure 6: Background-Check Model 828 5.3. Combinations 830 One variation of the background-check model is where the Relying 831 Party and the Verifier are on the same machine, performing both 832 functions together. In this case, there is no need for a protocol 833 between the two. 835 It is also worth pointing out that the choice of model is generally 836 up to the Relying Party. The same device may need to create Evidence 837 for different Relying Parties and/or different use cases. For 838 instance, it would provide Evidence to a network infrastructure 839 device to gain access to the network, and to a server holding 840 confidential data to gain access to that data. As such, both models 841 may simultaneously be in use by the same device. 843 Figure 7 shows another example of a combination where Relying Party 1 844 uses the passport model, whereas Relying Party 2 uses an extension of 845 the background-check model.
Specifically, in addition to the basic 846 functionality shown in Figure 6, Relying Party 2 actually provides 847 the Attestation Result back to the Attester, allowing the Attester to 848 use it with other Relying Parties. This is the model that the 849 Trusted Application Manager plans to support in the TEEP architecture 850 [I-D.ietf-teep-architecture]. 852 +-------------+ 853 | | Compare Evidence 854 | Verifier | against appraisal policy 855 | | 856 +-------------+ 857 ^ | 858 Evidence| |Attestation 859 | | Result 860 | v 861 +-------------+ 862 | | Compare 863 | Relying | Attestation Result 864 | Party 2 | against appraisal policy 865 +-------------+ 866 ^ | 867 Evidence| |Attestation 868 | | Result 869 | v 870 +----------+ +----------+ 871 | |-------------->| | Compare Attestation 872 | Attester | Attestation | Relying | Result against 873 | | Result | Party 1 | appraisal policy 874 +----------+ +----------+ 876 Figure 7: Example Combination 878 6. Roles and Entities 880 An entity in the RATS architecture includes at least one of the roles 881 defined in this document. An entity can aggregate more than one role 882 into itself. These collapsed roles combine the duties of multiple 883 roles. 885 In these cases, interaction between these roles does not necessarily 886 use the Internet Protocol. They can use a loopback device or 887 other IP-based communication between separate environments, but they 888 do not have to. Alternative channels to convey conceptual messages 889 include function calls, sockets, GPIO interfaces, local busses, or 890 hypervisor calls. This type of conveyance is typically found in 891 Composite Devices. Most importantly, these conveyance methods are 892 out-of-scope of RATS, but they are presumed to exist in order to 893 convey conceptual messages appropriately between roles. 895 For example, an entity that connects both to a wide-area network and 896 to a system bus is taking on both the Attester and Verifier roles.
897 As a system bus entity, a Verifier consumes Evidence from other 898 devices connected to the system bus that implement Attester roles. 899 As a wide-area network connected entity, it may implement an Attester 900 role. The entity, as a system bus Verifier, may choose to fully 901 isolate its role as a wide-area network Attester. 903 In essence, an entity that combines more than one role creates and 904 consumes the corresponding conceptual messages as defined in this 905 document. 907 7. Trust Model 909 7.1. Relying Party 911 The scope of this document is scenarios for which a Relying Party 912 trusts a Verifier that can appraise the trustworthiness of 913 information about an Attester. Such trust might come from the Relying 914 Party trusting the Verifier (or its public key) directly, or might 915 come from trusting an entity (e.g., a Certificate Authority) that is in 916 the Verifier's certificate chain. 918 The Relying Party might implicitly trust a Verifier, such as in a 919 Verifier/Relying Party combination where the Verifier and Relying 920 Party roles are combined. Or, for a stronger level of security, the 921 Relying Party might require that the Verifier first provide 922 information about itself that the Relying Party can use to assess the 923 trustworthiness of the Verifier before accepting its Attestation 924 Results. 926 For example, one explicit way for a Relying Party "A" to establish 927 such trust in a Verifier "B" would be for B to first act as an 928 Attester where A acts as a combined Verifier/Relying Party. If A 929 then accepts B as trustworthy, it can choose to accept B as a 930 Verifier for other Attesters. 932 As another example, the Relying Party can establish trust in the 933 Verifier by out-of-band establishment of key material, combined with 934 a protocol like TLS to communicate.
There is an assumption that, 935 between the establishment of the trusted key material and the 936 creation of the Evidence, the Verifier has not been compromised. 938 Similarly, the Relying Party also needs to trust the Relying Party 939 Owner to provide its Appraisal Policy for Attestation Results, and 940 in some scenarios the Relying Party might even require that the 941 Relying Party Owner go through a remote attestation procedure with it 942 before the Relying Party will accept an updated policy. This can be 943 done similarly to how a Relying Party could establish trust in a 944 Verifier as discussed above. 946 7.2. Attester 948 In some scenarios, Evidence might contain sensitive information such 949 as Personally Identifiable Information. Thus, an Attester must trust 950 entities to which it conveys Evidence not to reveal sensitive data 951 to unauthorized parties. The Verifier might share this information 952 with other authorized parties, according to rules that it controls. 953 In the background-check model, this Evidence may also be revealed to 954 Relying Party(s). 956 In some cases where Evidence contains sensitive information, an 957 Attester might even require that a Verifier first go through a TLS 958 authentication or a remote attestation procedure with it before the 959 Attester will send the sensitive Evidence. This can be done by 960 having the Attester first act as a Verifier/Relying Party, and the 961 Verifier act as its own Attester, as discussed above. 963 7.3. Relying Party Owner 965 The Relying Party Owner might also require that the Relying Party 966 first act as an Attester, providing Evidence that the Owner can 967 appraise, before the Owner would give the Relying Party an updated 968 policy that might contain sensitive information.
In such a case, 969 mutual authentication or attestation might be needed, in which case 970 typically one side's Evidence must be considered safe to share with 971 an untrusted entity, in order to bootstrap the sequence. 973 7.4. Verifier 975 The Verifier trusts (or more specifically, the Verifier's security 976 policy is written in a way that configures the Verifier to trust) a 977 manufacturer, or the manufacturer's hardware, so as to be able to 978 appraise the trustworthiness of that manufacturer's devices. In a 979 typical solution, a Verifier comes to trust an Attester indirectly by 980 having an Endorser (such as a manufacturer) vouch for the Attester's 981 ability to securely generate Evidence. 983 In some solutions, a Verifier might be configured to directly trust 984 an Attester by having the Verifier have the Attester's key material 985 (rather than the Endorser's) in its trust anchor store. 987 Such direct trust must first be established at the time of trust 988 anchor store configuration either by checking with an Endorser at 989 that time, or by conducting a security analysis of the specific 990 device. Having the Attester directly in the trust anchor store 991 narrows the Verifier's trust to only specific devices rather than all 992 devices the Endorser might vouch for, such as all devices 993 manufactured by the same manufacturer in the case that the Endorser 994 is a manufacturer. 996 Such narrowing is often important since physical possession of a 997 device can also be used to conduct a number of attacks, and so a 998 device in a physically secure environment (such as one's own 999 premises) may be considered trusted whereas devices owned by others 1000 would not be. This often results in a desire either to have the 1001 owner run their own Endorser that would Endorse only the devices they 1002 own, or to use Attesters directly in the trust anchor store. When 1003 an owner has many Attesters, the use of an Endorser becomes more 1004 scalable.
1006 That is, a Verifier might appraise the trustworthiness of an application 1007 component, operating system component, or service under the 1008 assumption that information provided about it by the lower-layer 1009 firmware or software is true. A stronger level of assurance of 1010 security comes when information can be vouched for by hardware or by 1011 ROM code, especially if such hardware is physically resistant to 1012 hardware tampering. In most cases, components that have to be 1013 vouched for via Endorsements because no Evidence is generated about 1014 them are referred to as roots of trust. 1016 The manufacturer of the Attester arranges for its Attesting 1017 Environment to be provisioned with key material. The key material is 1018 typically in the form of an asymmetric key pair (e.g., an RSA or 1019 ECDSA private key and a manufacturer-signed IDevID certificate) 1020 secured in the Attester. 1022 The Verifier is provided with an appropriate trust anchor, or 1023 provided with a database of public keys (rather than certificates), 1024 or even with carefully secured lists of symmetric keys. The nature of how 1025 the Verifier manages to validate the signatures produced by the 1026 Attester is critical to the secure operation of an Attestation system, 1027 but is not the subject of standardization within this architecture. 1029 A conveyance protocol that provides authentication and integrity 1030 protection can be used to convey unprotected Evidence, assuming the 1031 following properties exist: 1033 1. The key material used to authenticate and integrity protect the 1034 conveyance channel is trusted by the Verifier to speak for the 1035 Attesting Environment(s) that collected claims about the Target 1036 Environment(s). 1038 2. All unprotected Evidence that is conveyed is supplied exclusively 1039 by the Attesting Environment that has the key material that 1040 protects the conveyance channel. 1042 3.
The root of trust protects both the conveyance channel key 1043 material and the Attesting Environment with equivalent strength 1044 protections. 1046 See Section 12 for discussion on security strength. 1048 7.5. Endorser, Reference Value Provider, and Verifier Owner 1050 In some scenarios, the Endorser, Reference Value Provider, and 1051 Verifier Owner may need to trust the Verifier before giving the 1052 Endorsement, Reference Values, or appraisal policy to it. This can 1053 be done similarly to how a Relying Party might establish trust in a 1054 Verifier as discussed above, and in such a case, mutual 1055 authentication or attestation might even be needed as discussed in 1056 Section 7.3. 1058 8. Conceptual Messages 1060 8.1. Evidence 1062 Evidence is a set of claims about the target environment that reveal 1063 security-relevant operational status, health, configuration, or 1064 construction. Evidence is evaluated by a Verifier to establish 1065 its relevance, compliance, and timeliness. Claims need to be 1066 collected in a manner that is reliable. Evidence needs to be 1067 securely associated with the target environment so that the Verifier 1068 cannot be tricked into accepting claims originating from a different 1069 environment (that may be more trustworthy). Evidence also must be 1070 protected from man-in-the-middle attackers who may observe, change or 1071 misdirect Evidence as it travels from Attester to Verifier. The 1072 timeliness of Evidence can be captured using claims that pinpoint the 1073 time or interval when changes in operational status, health, and so 1074 forth occur. 1076 8.2. Endorsements 1078 An Endorsement is a secure statement that some entity (e.g., a 1079 manufacturer) vouches for the integrity of the device's signing 1080 capability.
For example, if the signing capability is in hardware, 1081 then an Endorsement might be a manufacturer certificate that signs a 1082 public key whose corresponding private key is only known inside the 1083 device's hardware. Thus, when Evidence and such an Endorsement are 1084 used together, an appraisal procedure can be conducted based on 1085 appraisal policies that may not be specific to the device instance, 1086 but merely specific to the manufacturer providing the Endorsement. 1087 For example, an appraisal policy might simply check that devices from 1088 a given manufacturer have information matching a set of Reference 1089 Values, or an appraisal policy might have a set of more complex logic 1090 on how to appraise the validity of information. 1092 However, while an appraisal policy that treats all devices from a 1093 given manufacturer the same may be appropriate for some use cases, it 1094 would be inappropriate to use such an appraisal policy as the sole 1095 means of authorization for use cases that wish to constrain _which_ 1096 compliant devices are considered authorized for some purpose. For 1097 example, an enterprise using remote attestation for Network Endpoint 1098 Assessment may not wish to let every healthy laptop from the same 1099 manufacturer onto the network, but instead only want to let devices 1100 that it legally owns onto the network. Thus, an Endorsement may be 1101 helpful in authenticating information about a device, but 1102 is not necessarily sufficient to authorize access to resources, which 1103 may need device-specific information such as a public key for the 1104 device or component or user on the device. 1106 8.3. Attestation Results 1108 Attestation Results are the input used by the Relying Party to decide 1109 the extent to which it will trust a particular Attester, and allow it 1110 to access some data or perform some operation.
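A Relying Party's use of Attestation Results can be sketched as follows. This is a hypothetical illustration only: the claim names ("compliant", "hw-model", "secure-boot"), the policy structure, and the function name authorize are invented for the example, and real Appraisal Policies for Attestation Results may apply far richer logic.

```python
def authorize(attestation_result: dict, policy: dict) -> bool:
    """Apply a (hypothetical) Appraisal Policy for Attestation Results.

    The Attestation Result may carry a simple compliance flag or a
    richer set of Claims; this toy policy requires the compliance flag
    plus an exact match on each claim the policy lists.
    """
    if not attestation_result.get("compliant", False):
        return False
    for claim, required in policy.get("required_claims", {}).items():
        if attestation_result.get(claim) != required:
            return False
    return True

# Hypothetical policy: only compliant devices of one model with
# secure boot enabled are authorized.
policy = {"required_claims": {"hw-model": "X1", "secure-boot": True}}
print(authorize({"compliant": True, "hw-model": "X1", "secure-boot": True}, policy))  # True
print(authorize({"compliant": True, "hw-model": "X2", "secure-boot": True}, policy))  # False
```

The same function handles both styles of result mentioned below: a bare Boolean result maps to a policy with no required claims, while a claims-rich result supports the exact-match (Reference-Value-style) checks.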
1112 Attestation Results may be a Boolean simply indicating compliance or 1113 non-compliance with a Verifier's appraisal policy, or a rich set of 1114 Claims about the Attester, against which the Relying Party applies 1115 its Appraisal Policy for Attestation Results. 1117 The quality of the Attestation Results depends upon the ability of the 1118 Verifier to evaluate the Attester. Different Attesters have a 1119 different _Strength of Function_ [strengthoffunction], which results 1120 in the Attestation Results being qualitatively different in strength. 1122 A result that indicates non-compliance can be used by an Attester (in 1123 the passport model) or a Relying Party (in the background-check 1124 model) to indicate that the Attester should not be treated as 1125 authorized and may be in need of remediation. In some cases, it may 1126 even indicate that the Evidence itself cannot be authenticated as 1127 being correct. 1129 An Attestation Result that indicates compliance can be used by a 1130 Relying Party to make authorization decisions based on the Relying 1131 Party's appraisal policy. The simplest such policy might be to 1132 simply authorize any party supplying a compliant Attestation Result 1133 signed by a trusted Verifier. A more complex policy might also 1134 entail comparing information provided in the result against Reference 1135 Values, or applying more complex logic on such information. 1137 Thus, Attestation Results often need to include detailed information 1138 about the Attester, for use by Relying Parties, much like physical 1139 passports and driver's licenses include personal information such as 1140 name and date of birth. Unlike Evidence, which is often very device- 1141 and vendor-specific, Attestation Results can be vendor-neutral if the 1142 Verifier has a way to generate vendor-agnostic information based on 1143 the appraisal of vendor-specific information in Evidence.
This 1144 allows a Relying Party's appraisal policy to be simpler, potentially 1145 based on standard ways of expressing the information, while still 1146 allowing interoperability with heterogeneous devices. 1148 Finally, whereas Evidence is signed by the device (or indirectly by a 1149 manufacturer, if Endorsements are used), Attestation Results are 1150 signed by a Verifier, allowing a Relying Party to only need a trust 1151 relationship with one entity, rather than a larger set of entities, 1152 for purposes of its appraisal policy. 1154 9. Claims Encoding Formats 1156 The following diagram illustrates a relationship to which remote 1157 attestation is desired to be added: 1159 +-------------+ +------------+ Evaluate 1160 | |-------------->| | request 1161 | Attester | Access some | Relying | against 1162 | | resource | Party | security 1163 +-------------+ +------------+ policy 1165 Figure 8: Typical Resource Access 1167 In this diagram, the protocol between Attester and a Relying Party 1168 can be any new or existing protocol (e.g., HTTP(S), CoAP(S), ROLIE 1169 [RFC8322], 802.1x, OPC UA, etc.), depending on the use case. Such 1170 protocols typically already have mechanisms for passing security 1171 information for purposes of authentication and authorization. Common 1172 formats include JWTs [RFC7519], CWTs [RFC8392], and X.509 1173 certificates. 1175 To enable remote attestation to be added to existing protocols, for 1176 example to enable a higher level of assurance against malware, it 1177 is important that information needed for appraising the Attester be 1178 usable with existing protocols that have constraints around what 1179 formats they can transport. For example, OPC UA [OPCUA] (probably 1180 the most common protocol in industrial IoT environments) is defined 1181 to carry X.509 certificates and so security information must be 1182 embedded into an X.509 certificate to be passed in the protocol.
1183 Thus, remote attestation related information could be natively 1184 encoded in X.509 certificate extensions, or could be natively encoded 1185 in some other format (e.g., a CWT) which in turn is then encoded in 1186 an X.509 certificate extension. 1188 Especially for constrained nodes, however, there is a desire to 1189 minimize the amount of parsing code needed in a Relying Party, in 1190 order to both minimize footprint and to minimize the attack surface 1191 area. So while it would be possible to embed a CWT inside a JWT, or 1192 a JWT inside an X.509 extension, etc., there is a desire to encode 1193 the information natively in the format that is natural for the 1194 Relying Party. 1196 This motivates having a common "information model" that describes the 1197 set of remote attestation related information in an encoding-agnostic 1198 way, and allowing multiple encoding formats (CWT, JWT, X.509, etc.) 1199 that encode the same information into the claims format needed by the 1200 Relying Party. 1202 The following diagram illustrates that Evidence and Attestation 1203 Results might each have multiple possible encoding formats, so that 1204 they can be conveyed by various existing protocols. It also 1205 motivates why the Verifier might also be responsible for accepting 1206 Evidence that encodes claims in one format, while issuing Attestation 1207 Results that encode claims in a different format. 1209 Evidence Attestation Results 1210 .--------------. CWT CWT .-------------------. 1211 | Attester-A |------------. .----------->| Relying Party V | 1212 '--------------' v | `-------------------' 1213 .--------------. JWT .------------. JWT .-------------------. 1214 | Attester-B |-------->| Verifier |-------->| Relying Party W | 1215 '--------------' | | `-------------------' 1216 .--------------. X.509 | | X.509 .-------------------. 1217 | Attester-C |-------->| |-------->| Relying Party X | 1218 '--------------' | | `-------------------' 1219 .--------------. 
TPM | | TPM .-------------------. 1220 | Attester-D |-------->| |-------->| Relying Party Y | 1221 '--------------' '------------' `-------------------' 1222 .--------------. other ^ | other .-------------------. 1223 | Attester-E |------------' '----------->| Relying Party Z | 1224 '--------------' `-------------------' 1226 Figure 9: Multiple Attesters and Relying Parties with Different 1227 Formats 1229 10. Freshness 1231 A Verifier or Relying Party may need to learn the point in time 1232 (i.e., the "epoch") at which Evidence or an Attestation Result was 1233 produced. This is essential in deciding whether the included Claims 1234 and their values can be considered fresh, meaning they still reflect 1235 the latest state of the Attester, and that any Attestation Result was 1236 generated using the latest Appraisal Policy for Evidence. 1238 Freshness is assessed based on the Appraisal Policy for Evidence or 1239 Attestation Results, which compares the estimated epoch against an 1240 "expiry" threshold defined locally to that policy. There is, 1241 however, always a race condition possible in that the state of the 1242 Attester and the appraisal policies might change immediately after 1243 the Evidence or Attestation Result was generated. The goal is merely 1244 to narrow their recentness to something the Verifier (for Evidence) 1245 or Relying Party (for Attestation Result) is willing to accept. 1246 Freshness is a key component for enabling caching and reuse of both 1247 Evidence and Attestation Results, which is especially valuable in 1248 cases where their computation uses a substantial part of the resource 1249 budget (e.g., energy in constrained devices). 1251 There are two common approaches for determining the epoch of 1252 Evidence or an Attestation Result.
1254 The first approach is to rely on synchronized and trustworthy clocks, 1255 and include a signed timestamp (see [I-D.birkholz-rats-tuda]) along 1256 with the Claims in the Evidence or Attestation Result. Timestamps 1257 can be added on a per-Claim basis, to distinguish the time of 1258 creation of Evidence or Attestation Result from the time that a 1259 specific Claim was generated. The clock's trustworthiness typically 1260 requires additional Claims about the signer's time synchronization 1261 mechanism. 1263 A second approach places the onus of timekeeping solely on the 1264 Verifier (for Evidence), or the Relying Party (for Attestation 1265 Results), and might be suitable, for example, when the Attester 1266 does not have a reliable clock or time synchronization is otherwise 1267 impaired. In this approach, a non-predictable nonce is sent by the 1268 appraising entity, and the nonce is then signed and included along 1269 with the Claims in the Evidence or Attestation Result. After 1270 checking that the sent and received nonces are the same, the 1271 appraising entity knows that the Claims were signed after the nonce 1272 was generated. This allows associating a "rough" epoch to the 1273 Evidence or Attestation Result. In this case the epoch is said to be 1274 rough because: 1276 * The epoch applies to the entire claim set instead of a more 1277 granular association, and 1279 * The time between the creation of Claims and the collection of 1280 Claims is indistinguishable. 1282 Implicit and explicit timekeeping can be combined into hybrid 1283 mechanisms. For example, if clocks exist and are considered 1284 trustworthy but are not synchronized, a nonce-based exchange may be 1285 used to determine the (relative) time offset between the involved 1286 peers, followed by any number of timestamp-based exchanges.
In 1287 another setup where all Roles (Attesters, Verifiers and Relying 1288 Parties) share the same broadcast channel, the nonce-based approach 1289 may be used to anchor all parties to the same (relative) timeline, 1290 without requiring synchronized clocks, by having a central entity 1291 emit nonces at regular intervals and having the "current" nonce 1292 included in the produced Evidence or Attestation Result. 1294 It is important to note that the actual values in Claims might have 1295 been generated long before the Claims are signed. If so, it is the 1296 signer's responsibility to ensure that the values are still correct 1297 when they are signed. For example, values generated at boot time 1298 might have been saved to secure storage until network connectivity is 1299 established to the remote Verifier and a nonce is obtained. 1301 A more detailed discussion with examples appears in Section 16. 1303 11. Privacy Considerations 1305 The conveyance of Evidence and the resulting Attestation Results 1306 reveal a great deal of information about the internal state of a 1307 device as well as potentially any users of the device. In many 1308 cases, the whole point of the Attestation process is to provide 1309 reliable information about the type of the device and the firmware/ 1310 software that the device is running. This information might be 1311 particularly interesting to many attackers. For example, knowing 1312 that a device is running a weak version of firmware provides a way to 1313 better aim attacks. 1315 Many claims in Attestation Evidence and Attestation Results are 1316 potentially PII (Personally Identifiable Information) depending on the 1317 end-to-end use case of the attestation. Attestation that goes up to 1318 include containers and applications may further reveal details about 1319 a specific system or user. 1321 In some cases, an attacker may be able to make inferences about 1322 attestations from the results or timing of the processing.
For example, an attacker might be able to infer the value of specific 1324 claims if it knew that only certain values were accepted by the 1325 Relying Party. 1327 Evidence and Attestation Results data structures are expected to 1328 support integrity protection encoding (e.g., COSE, JOSE, X.509) and 1329 optionally might support confidentiality protection (e.g., COSE, 1330 JOSE). Therefore, if confidentiality protection is omitted or 1331 unavailable, the protocols that convey Evidence or Attestation 1332 Results are responsible for detailing what kinds of information are 1333 disclosed, and to whom they are exposed. 1335 Furthermore, because Evidence might contain sensitive information, 1336 Attesters are responsible for only sending such Evidence to trusted 1337 Verifiers. Some Attesters might want a stronger level of assurance 1338 of the trustworthiness of a Verifier before sending Evidence to it. 1339 In such cases, an Attester can first act as a Relying Party and ask 1340 for the Verifier's own Attestation Result, and appraise it just as 1341 a Relying Party would appraise an Attestation Result for any other 1342 purpose. 1344 12. Security Considerations 1345 12.1. Attester and Attestation Key Protection 1347 Implementers need to pay close attention to the isolation and 1348 protection of the Attester and the factory processes for provisioning 1349 the Attestation key material. When either of these is compromised, 1350 remote attestation becomes worthless because the attacker can 1351 forge Evidence. 1353 Remote attestation applies to use cases with a range of security 1354 requirements, so the protections discussed here range from low to 1355 high security, where low security may be only application or process 1356 isolation by the device's operating system and high security involves 1357 specialized hardware to defend against physical attacks on a chip. 1359 12.1.1.
On-Device Attester and Key Protection 1361 It is assumed that the Attester is located in an isolated environment 1362 of a device, such as a process, a dedicated chip, or a TEE, that 1363 collects the Claims, formats them, and signs them with an Attestation 1364 Key. The Attester must be protected from unauthorized modification to 1365 ensure it behaves correctly. There must also be confidentiality so 1366 that the signing key is not captured and used elsewhere to forge 1367 Evidence. 1369 In many cases, the user or owner of the device must not be able to 1370 modify or exfiltrate keys from the Attesting Environment of the 1371 Attester. For example, the owner or user of a mobile phone or FIDO 1372 authenticator is not trusted. The point of remote attestation is for 1373 the Relying Party to be able to trust the Attester even though they 1374 don't trust the user or owner. 1376 Some of the measures for low-level security include process or 1377 application isolation by a high-level operating system, and perhaps 1378 restricting access to root or system privilege. For extremely simple 1379 single-use devices that don't use a protected-mode operating system, 1380 like a Bluetooth speaker, the isolation might only be the plastic 1381 housing for the device. 1383 At a medium level of security, a special restricted operating environment 1384 such as a Trusted Execution Environment (TEE) might be used. In this 1385 case, only security-oriented software has access to the Attester and 1386 key material. 1388 For high-level security, specialized hardware will likely be used, 1389 providing protection against chip decapping attacks, power supply and 1390 clock glitching, fault injection, and RF and power side-channel 1391 attacks. 1393 12.1.2. Attestation Key Provisioning Processes 1395 Attestation key provisioning is the process, occurring in the 1396 factory or elsewhere, that establishes the signing key material on the 1397 device and the verification key material off the device.
Sometimes 1398 this is referred to as "personalization". 1400 One way to provision a key is to first generate it externally to the 1401 device and then copy the key onto the device. In this case, 1402 confidentiality of the generator, as well as the path over which the 1403 key is provisioned, is necessary. This can be achieved in a number 1404 of ways. 1406 Confidentiality can be achieved entirely with physical provisioning 1407 facility security involving no encryption at all. For low-security 1408 use cases, this might simply mean locking doors and limiting the personnel 1409 who can enter the facility. For high-security use cases, this might 1410 involve a special area of the facility accessible only to select 1411 security-trained personnel. 1413 Cryptography can also be used to support confidentiality, but keys 1414 that are then used to provision attestation keys must themselves have 1415 been provisioned securely beforehand (a recursive problem). 1417 In many cases, both some physical security and some cryptography will 1418 be necessary and useful to establish confidentiality. 1420 Another way to provision the key material is to generate it on the 1421 device and export the verification key. If public key cryptography 1422 is being used, then only integrity of the exported verification key is 1423 necessary; confidentiality is not. 1425 In all cases, the Attestation Key provisioning process must ensure 1426 that only attestation key material that is generated by a valid 1427 Endorser is established in Attesters and then configured correctly. 1428 For many use cases, this will involve physical security at the 1429 facility, to prevent unauthorized devices from being manufactured 1430 that may be counterfeit or incorrectly configured. 1432 12.2.
Integrity Protection 1434 Any solution that conveys information used for security purposes, 1435 whether such information is in the form of Evidence, Attestation 1436 Results, Endorsements, or appraisal policy, must support end-to-end 1437 integrity protection and replay attack prevention, and often also 1438 needs to support additional security properties, including: 1440 * end-to-end encryption, 1441 * denial of service protection, 1443 * authentication, 1445 * auditing, 1447 * fine-grained access controls, and 1449 * logging. 1451 Section 10 discusses ways in which freshness can be used in this 1452 architecture to protect against replay attacks. 1454 To assess the security provided by a particular appraisal policy, it 1455 is important to understand the strength of the root of trust, e.g., 1456 whether it is mutable software, or firmware that is read-only after 1457 boot, or immutable hardware/ROM. 1459 It is also important that the appraisal policy was itself obtained 1460 securely. As such, if appraisal policies for a Relying Party or for 1461 a Verifier can be configured via a network protocol, the ability to 1462 create Evidence about the integrity of the entity providing the 1463 appraisal policy needs to be considered. 1465 The security of conveyed information may be applied at different 1466 layers, whether by a conveyance protocol, or an information encoding 1467 format. This architecture expects attestation messages (i.e., 1468 Evidence, Attestation Results, Endorsements, and Policies) to be end-to-end 1469 protected based on the role interaction context. For example, if 1470 an Attester produces Evidence that is relayed through some other 1471 entity that doesn't implement the Attester or the intended Verifier 1472 roles, then the relaying entity should not expect to have access to 1473 the Evidence. 1475 13. IANA Considerations 1477 This document does not require any actions by IANA. 1479 14.
Acknowledgments 1481 Special thanks go to Joerg Borchert, Nancy Cam-Winget, Jessica 1482 Fitzgerald-McKay, Thomas Fossati, Diego Lopez, Laurence Lundblade, 1483 Paul Rowe, Hannes Tschofenig, Frank Xia, and David Wooten. 1485 15. Notable Contributions 1487 Thomas Hardjono created older versions of the terminology section in 1488 collaboration with Ned Smith. Eric Voit provided the conceptual 1489 separation between Attestation Provision Flows and Attestation 1490 Evidence Flows. Monty Wiseman created the content structure of the 1491 first three architecture drafts. Carsten Bormann provided many of 1492 the motivational building blocks with respect to the Internet Threat 1493 Model. 1495 16. Appendix A: Time Considerations 1497 The table below defines a number of relevant events, with an ID that 1498 is used in subsequent diagrams. The times of said events might be 1499 defined in terms of an absolute clock time such as Coordinated 1500 Universal Time, or might be defined relative to some other timestamp 1501 or timeticks counter. 1503 +====+==============+=============================================+ 1504 | ID | Event | Explanation of event | 1505 +====+==============+=============================================+ 1506 | VG | Value | A value to appear in a Claim was created. | 1507 | | generated | In some cases, a value may have technically | 1508 | | | existed before an Attester became aware of | 1509 | | | it, but the Attester might have no idea how | 1510 | | | long it has had that value. In such a | 1511 | | | case, the Value created time is the time at | 1512 | | | which the Claim containing the copy of the | 1513 | | | value was created. | 1514 +----+--------------+---------------------------------------------+ 1515 | HD | Handle | A centrally generated identifier for time- | 1516 | | distribution | bound recentness across a domain of devices | 1517 | | | is successfully distributed to Attesters.
| 1518 +----+--------------+---------------------------------------------+ 1519 | NS | Nonce sent | A nonce not predictable to an Attester | 1520 | | | (recentness & uniqueness) is sent to an | 1521 | | | Attester. | 1522 +----+--------------+---------------------------------------------+ 1523 | NR | Nonce | A nonce is relayed to an Attester by | 1524 | | relayed | another entity. | 1525 +----+--------------+---------------------------------------------+ 1526 | HR | Handle | A handle distributed by a Handle | 1527 | | received | Distributor was received. | 1528 +----+--------------+---------------------------------------------+ 1529 | EG | Evidence | An Attester creates Evidence from collected | 1530 | | generation | Claims. | 1531 +----+--------------+---------------------------------------------+ 1532 | ER | Evidence | A Relying Party relays Evidence to a | 1533 | | relayed | Verifier. | 1534 +----+--------------+---------------------------------------------+ 1535 | RG | Result | A Verifier appraises Evidence and generates | 1536 | | generation | an Attestation Result. | 1537 +----+--------------+---------------------------------------------+ 1538 | RR | Result | An Attester relays an Attestation | 1539 | | relayed | Result to a Relying Party. | 1540 +----+--------------+---------------------------------------------+ 1541 | RA | Result | The Relying Party appraises Attestation | 1542 | | appraised | Results. | 1543 +----+--------------+---------------------------------------------+ 1544 | OP | Operation | The Relying Party performs some operation | 1545 | | performed | requested by the Attester. For example, | 1546 | | | acting upon some message just received | 1547 | | | across a session created earlier at | 1548 | | | time(RA). | 1549 +----+--------------+---------------------------------------------+ 1550 | RX | Result | An Attestation Result should no longer be | 1551 | | expiry | accepted, according to the Verifier that | 1552 | | | generated it.
| 1553 +----+--------------+---------------------------------------------+ 1555 Table 1 1557 Using the table above, a number of hypothetical examples of how a 1558 solution might be built are illustrated below. 1559 This list is not intended to be complete, but is just 1560 representative enough to highlight various timing considerations. 1562 All times are relative to the local clocks, indicated by an "a" 1563 (Attester), "v" (Verifier), or "r" (Relying Party) suffix. 1565 Times with an appended Prime (') indicate a second instance of the 1566 same event. 1568 How and whether clocks are synchronized depends upon the model. 1570 16.1. Example 1: Timestamp-based Passport Model Example 1572 The following example illustrates a hypothetical Passport Model 1573 solution that uses timestamps and requires roughly synchronized 1574 clocks between the Attester, Verifier, and Relying Party, which 1575 depends on using a secure clock synchronization mechanism. As a 1576 result, the receiver of a conceptual message containing a timestamp 1577 can directly compare it to its own clock and timestamps. 1579 .----------. .----------. .---------------. 1580 | Attester | | Verifier | | Relying Party | 1581 '----------' '----------' '---------------' 1582 time(VG_a) | | 1583 | | | 1584 ~ ~ ~ 1585 | | | 1586 time(EG_a) | | 1587 |------Evidence{time(EG_a)}------>| | 1588 | time(RG_v) | 1589 |<-----Attestation Result---------| | 1590 | {time(RG_v),time(RX_v)} | | 1591 ~ ~ 1592 | | 1593 |----Attestation Result{time(RG_v),time(RX_v)}-->time(RA_r) 1594 | | 1595 ~ ~ 1596 | | 1597 | time(OP_r) 1598 | | 1600 The Verifier can check whether the Evidence is fresh when appraising 1601 it at time(RG_v) by checking "time(RG_v) - time(EG_a) < Threshold", 1602 where the Verifier's threshold is large enough to account for the 1603 maximum permitted clock skew between the Verifier and the Attester.
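These comparisons can be sketched in Python; the 60-second threshold and all time values are arbitrary illustrations, and each function mirrors one of the inequalities used in this example:

```python
# Sketch of the threshold checks in the timestamp-based Passport Model.
# All times are seconds on roughly synchronized clocks; THRESHOLD must
# be large enough to cover the maximum permitted clock skew plus
# transit time (the value here is illustrative).
THRESHOLD = 60.0

def evidence_is_fresh(time_RG_v: float, time_EG_a: float) -> bool:
    """Verifier at time(RG_v): "time(RG_v) - time(EG_a) < Threshold"."""
    return time_RG_v - time_EG_a < THRESHOLD

def result_is_fresh(time_RA_r: float, time_RG_v: float) -> bool:
    """Relying Party at time(RA_r): "time(RA_r) - time(RG_v) < Threshold"."""
    return time_RA_r - time_RG_v < THRESHOLD

def use_allowed(time_OP_r: float, time_RG_v: float, time_RX_v=None) -> bool:
    """Relying Party at time(OP_r): honor an explicit expiry time(RX_v)
    if present, else fall back to the freshness threshold."""
    if time_RX_v is not None:
        return time_OP_r < time_RX_v
    return time_OP_r - time_RG_v < THRESHOLD

assert evidence_is_fresh(100.0, 70.0)        # 30 s old Evidence: fresh
assert not evidence_is_fresh(200.0, 100.0)   # 100 s old Evidence: stale
assert use_allowed(150.0, 100.0, time_RX_v=180.0)
assert not use_allowed(190.0, 100.0, time_RX_v=180.0)
```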
1605 If time(VG_a) is also included in the Evidence along with the claim 1606 value generated at that time, and the Verifier decides that it can 1607 trust the time(VG_a) value, the Verifier can also determine whether 1608 the claim value is recent by checking "time(RG_v) - time(VG_a) < Threshold". 1609 The threshold is decided by the Appraisal Policy for Evidence, and again 1610 needs to take into account the maximum permitted clock skew between the 1611 Verifier and the Attester. 1613 The Relying Party can check whether the Attestation Result is fresh 1614 when appraising it at time(RA_r) by checking "time(RA_r) - time(RG_v) 1615 < Threshold", where the Relying Party's threshold is large enough to 1616 account for the maximum permitted clock skew between the Relying 1617 Party and the Verifier. The result might then be used for some time 1618 (e.g., throughout the lifetime of a connection established at 1619 time(RA_r)). The Relying Party must be careful, however, to not 1620 allow continued use beyond the period for which it deems the 1621 Attestation Result to remain fresh enough. Thus, it might allow use 1622 (at time(OP_r)) as long as "time(OP_r) - time(RG_v) < Threshold". 1623 However, if the Attestation Result contains an expiry time time(RX_v), 1624 then it could explicitly check "time(OP_r) < time(RX_v)". 1626 16.2. Example 2: Nonce-based Passport Model Example 1628 The following example illustrates a hypothetical Passport Model 1629 solution that uses nonces instead of timestamps. Compared to the 1630 timestamp-based example, it requires an extra round trip to retrieve 1631 a nonce, and requires that the Verifier and Relying Party track state 1632 to remember the nonce for some period of time. 1634 The advantage is that it does not require that any clocks are 1635 synchronized. As a result, the receiver of a conceptual message 1636 containing a timestamp cannot directly compare it to its own clock or 1637 timestamps.
Thus we use a suffix ("a" for Attester, "v" for 1638 Verifier, and "r" for Relying Party) on the IDs below indicating 1639 which clock generated them, since times from different clocks cannot 1640 be compared. Only the delta between two events from the sender can 1641 be used by the receiver. 1643 .----------. .----------. .---------------. 1644 | Attester | | Verifier | | Relying Party | 1645 '----------' '----------' '---------------' 1646 time(VG_a) | | 1647 | | | 1648 ~ ~ ~ 1649 | | | 1650 |<--Nonce1---------------------time(NS_v) | 1651 time(EG_a) | | 1652 |---Evidence--------------------->| | 1653 | {Nonce1, time(EG_a)-time(VG_a)} | | 1654 | time(RG_v) | 1655 |<--Attestation Result------------| | 1656 | {time(RX_v)-time(RG_v)} | | 1657 ~ ~ 1658 | | 1659 |<--Nonce2-------------------------------------time(NS_r) 1660 time(RR_a) | 1661 |--[Attestation Result{time(RX_v)-time(RG_v)}, -->|time(RA_r) 1662 | Nonce2, time(RR_a)-time(EG_a)] | 1663 ~ ~ 1664 | | 1665 | time(OP_r) 1667 In this example solution, the Verifier can check whether the Evidence 1668 is fresh at "time(RG_v)" by verifying that "time(RG_v)-time(NS_v) < 1669 Threshold". 1671 The Verifier cannot, however, simply rely on a Nonce to determine 1672 whether the value of a claim is recent, since the claim value might 1673 have been generated long before the nonce was sent by the Verifier. 1675 However, if the Verifier decides that the Attester can be trusted to 1676 correctly provide the delta "time(EG_a)-time(VG_a)", then it can 1677 determine recency by checking "time(RG_v)-time(NS_v) + time(EG_a)- 1678 time(VG_a) < Threshold". 1680 Similarly if, based on an Attestation Result from a Verifier it 1681 trusts, the Relying Party decides that the Attester can be trusted to 1682 correctly provide time deltas, then it can determine whether the 1683 Attestation Result is fresh by checking "time(OP_r)-time(NS_r) + 1684 time(RR_a)-time(EG_a) < Threshold". 
Although the Nonce2 and 1685 "time(RR_a)-time(EG_a)" values cannot be inside the Attestation 1686 Result, they might be signed by the Attester such that the 1687 Attestation Result vouches for the Attester's signing capability. 1689 The Relying Party must still be careful, however, to not allow 1690 continued use beyond the period for which it deems the Attestation 1691 Result to remain valid. Thus, if the Attestation Result conveys a 1692 validity lifetime in terms of "time(RX_v)-time(RG_v)", then the 1693 Relying Party can check "time(OP_r)-time(NS_r) < time(RX_v)- 1694 time(RG_v)". 1696 16.3. Example 3: Handle-based Passport Model Example 1698 Handles are a third option, next to nonces and timestamps, for 1699 establishing time-keeping. Handles are opaque data intended to be available to 1700 all RATS roles that interact with each other, such as the Attester or 1701 Verifier, in specified intervals. To enable this availability, 1702 handles are distributed centrally by the Handle Distributor role over 1703 the network. As with any other role, the Handle Distributor role can be 1704 taken on by a dedicated entity or collapsed with other roles, such as 1705 a Verifier. The use of handles can compensate for a lack of clocks 1706 or other sources of time on entities taking on RATS roles. The only 1707 entity that requires access to a source of time is the entity taking 1708 on the role of Handle Distributor. 1710 Handles are different from nonces in that they can be used more than once 1711 and can be used by more than one entity at the same time. Handles 1712 are different from timestamps in that they do not have to convey 1713 information about a point in time; rather, their reception creates that 1714 information. The reception of a handle is similar to the event that 1715 increments a relative tickcounter. Receipt of a new handle 1716 invalidates a previously received handle.
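The handle lifecycle can be sketched as follows; the distribution interval and overlap offset values are illustrative assumptions, not prescribed by this architecture:

```python
import secrets

# Sketch of handle-based timekeeping: a Handle Distributor emits opaque
# handles at regular intervals; receiving a new handle invalidates the
# previous one, except within a small overlap window that absorbs
# distribution jitter. INTERVAL and OFFSET are illustrative values.
INTERVAL = 10.0   # seconds between handle distributions
OFFSET = 1.0      # overlap window compensating for network jitter

class HandleDistributor:
    def __init__(self):
        self.current = None
        self.previous = None
        self.issued_at = None

    def distribute(self, now: float) -> bytes:
        """Emit a fresh opaque handle; the former one stays valid only
        briefly."""
        self.previous, self.current = self.current, secrets.token_bytes(16)
        self.issued_at = now
        return self.current

    def handle_is_valid(self, handle: bytes, now: float) -> bool:
        """Appraisal accepts the current handle, or the former handle
        within the overlap offset after a new distribution."""
        if handle == self.current:
            return True
        return handle == self.previous and now - self.issued_at < OFFSET

hd = HandleDistributor()
h1 = hd.distribute(now=0.0)
assert hd.handle_is_valid(h1, now=5.0)       # current handle: valid
h2 = hd.distribute(now=INTERVAL)
assert hd.handle_is_valid(h1, now=10.5)      # former handle, inside overlap
assert not hd.handle_is_valid(h1, now=12.0)  # former handle, too old
assert hd.handle_is_valid(h2, now=12.0)
```

Because every entity holding the current handle can reuse it, many pieces of Evidence can share one handle, which is what gives this scheme its coarser freshness resolution.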
1718 In this example, Evidence generation based on received handles always 1719 uses the current (most recent) handle. As handles are distributed 1720 over the network, all involved entities receive a fresh handle at 1721 roughly the same time. Due to distribution over the network, there 1722 is some jitter with respect to the time the Handle is received, 1723 time(HR), for each involved entity. To compensate for this jitter, 1724 there is a small period of overlap (a specified offset) in which both 1725 a current handle and the corresponding former handle are valid in 1726 Evidence appraisal: "validity-duration = time(HR'_v) + offset - 1727 time(HR_v)". The offset is typically based on a network's round-trip 1728 time. Analogously, the generation of valid Evidence is only 1729 possible if the age of the handle used is lower than the 1730 validity-duration: "time(HR_v) - time(EG_a) < validity-duration". 1732 From the point of view of a Verifier, the generation of valid 1733 Evidence is only possible if the age of the handle used in the 1734 Evidence generation is less than the duration of the distribution 1735 interval: "(time(HR'_v)-time(HR_v)) - (time(HR_a)-time(EG_a)) < 1736 validity-duration". 1738 Due to the validity-duration of handles, multiple different pieces of 1739 Evidence can be generated based on the same handle. The resulting 1740 granularity (time resolution) of Evidence freshness is typically 1741 lower than the resolution of clock-based tickcounters. 1743 The following example illustrates a hypothetical Background-Check 1744 Model solution that uses handles and requires a trustworthy time 1745 source available to the Handle Distributor role. 1747 .-------------. 1748 .----------. | Handle | .----------. .---------------.
1749 | Attester | | Distributor | | Verifier | | Relying Party | 1750 '----------' '-------------' '----------' '---------------' 1751 time(VG_a) | | | 1752 | | | | 1753 ~ ~ ~ ~ 1754 | | | | 1755 time(HR_a)<---------+-------------time(HR_v)------>time(HR_r) 1756 | | | | 1757 time(EG_a) | | | 1758 |----Evidence{time(EG_a)}-------->| | 1759 | {Handle1,time(EG_a)-time(VG_a)}| | 1760 | | time(RG_v) | 1761 |<-----Attestation Result---------| | 1762 | {time(RG_v),time(RX_v)} | | 1763 | | | 1764 ~ ~ ~ 1765 | | | 1766 time(HR_a')<--------'---------------------------->time(HR_r') 1767 | | 1768 time(RR_a) / 1769 |--Attestation Result{time(RX_v)-time(RG_v)}-->time(RA_r) 1770 | {Handle2, time(RR_a)-time(EG_a)} | 1771 ~ ~ 1772 | | 1773 | time(OP_r) 1774 | | 1776 16.4. Example 4: Timestamp-based Background-Check Model Example 1778 The following example illustrates a hypothetical Background-Check 1779 Model solution that uses timestamps and requires roughly synchronized 1780 clocks between the Attester, Verifier, and Relying Party. 1782 .----------. .---------------. .----------. 1783 | Attester | | Relying Party | | Verifier | 1784 '----------' '---------------' '----------' 1785 time(VG_a) | | 1786 | | | 1787 ~ ~ ~ 1788 | | | 1789 time(EG_a) | | 1790 |----Evidence------->| | 1791 | {time(EG_a)} time(ER_r)--Evidence{time(EG_a)}->| 1792 | | time(RG_v) 1793 | time(RA_r)<-Attestation Result---| 1794 | | {time(RX_v)} | 1795 ~ ~ ~ 1796 | | | 1797 | time(OP_r) | 1799 The time considerations in this example are equivalent to those 1800 discussed under Example 1 above. 1802 16.5. Example 5: Nonce-based Background-Check Model Example 1804 The following example illustrates a hypothetical Background-Check 1805 Model solution that uses nonces and thus does not require that any 1806 clocks are synchronized. In this example solution, a nonce is 1807 generated by a Verifier at the request of a Relying Party, when the 1808 Relying Party needs to send one to an Attester. 1810 .----------. 
.---------------. .----------. 1811 | Attester | | Relying Party | | Verifier | 1812 '----------' '---------------' '----------' 1813 time(VG_a) | | 1814 | | | 1815 ~ ~ ~ 1816 | | | 1817 | |<-------Nonce-----------time(NS_v) 1818 |<---Nonce-----------time(NR_r) | 1819 time(EG_a) | | 1820 |----Evidence{Nonce}--->| | 1821 | time(ER_r)--Evidence{Nonce}--->| 1822 | | time(RG_v) 1823 | time(RA_r)<-Attestation Result-| 1824 | | {time(RX_v)-time(RG_v)} | 1825 ~ ~ ~ 1826 | | | 1827 | time(OP_r) 1829 The Verifier can check whether the Evidence is fresh, and whether a 1830 claim value is recent, in the same way as in Example 2 above. 1832 However, unlike in Example 2, the Relying Party can use the Nonce to 1833 determine whether the Attestation Result is fresh, by verifying that 1834 "time(OP_r)-time(NR_r) < Threshold". 1836 The Relying Party must still be careful, however, to not allow 1837 continued use beyond the period for which it deems the Attestation 1838 Result to remain valid. Thus, if the Attestation Result conveys a 1839 validity lifetime in terms of "time(RX_v)-time(RG_v)", then the 1840 Relying Party can check "time(OP_r)-time(ER_r) < time(RX_v)- 1841 time(RG_v)". 1843 17. References 1845 17.1. Normative References 1847 [RFC7519] Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token 1848 (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015, 1849 <https://www.rfc-editor.org/info/rfc7519>. 1851 [RFC8392] Jones, M., Wahlstroem, E., Erdtman, S., and H. Tschofenig, 1852 "CBOR Web Token (CWT)", RFC 8392, DOI 10.17487/RFC8392, 1853 May 2018, <https://www.rfc-editor.org/info/rfc8392>. 1855 17.2. Informative References 1857 [CTAP] FIDO Alliance, "Client to Authenticator Protocol", n.d. 1862 [I-D.birkholz-rats-tuda] 1863 Fuchs, A., Birkholz, H., McDonald, I., and C. Bormann, 1864 "Time-Based Uni-Directional Attestation", Work in 1865 Progress, Internet-Draft, draft-birkholz-rats-tuda-03, 13 1866 July 2020, <https://datatracker.ietf.org/doc/html/draft-birkholz-rats-tuda-03>. 1869 [I-D.ietf-teep-architecture] 1870 Pei, M., Tschofenig, H., Thaler, D., and D.
Wheeler, 1871 "Trusted Execution Environment Provisioning (TEEP) 1872 Architecture", Work in Progress, Internet-Draft, 1873 draft-ietf-teep-architecture-13, 2 November 2020, 1874 <https://datatracker.ietf.org/doc/html/draft-ietf-teep-architecture-13>. 1877 [OPCUA] OPC Foundation, "OPC Unified Architecture Specification, 1878 Part 2: Security Model, Release 1.03", OPC 10000-2, 25 1879 November 2015. 1883 [RFC4949] Shirey, R., "Internet Security Glossary, Version 2", 1884 FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007, 1885 <https://www.rfc-editor.org/info/rfc4949>. 1887 [RFC8322] Field, J., Banghart, S., and D. Waltermire, "Resource-Oriented 1888 Lightweight Information Exchange (ROLIE)", 1889 RFC 8322, DOI 10.17487/RFC8322, February 2018, 1890 <https://www.rfc-editor.org/info/rfc8322>. 1892 [strengthoffunction] 1893 NISC, "Strength of Function", n.d. 1897 [TCGarch] Trusted Computing Group, "Trusted Platform Module Library - 1898 Part 1: Architecture", n.d. 1902 [WebAuthN] W3C, "Web Authentication: An API for accessing Public Key 1903 Credentials", n.d. 1905 Contributors 1907 Monty Wiseman 1909 Email: montywiseman32@gmail.com 1911 Liang Xia 1913 Email: frank.xialiang@huawei.com 1915 Laurence Lundblade 1917 Email: lgl@island-resort.com 1919 Eliot Lear 1921 Email: elear@cisco.com 1922 Jessica Fitzgerald-McKay 1924 Sarah C.
Helble 1926 Andrew Guinn 1928 Peter Loscocco 1930 Email: pete.loscocco@gmail.com 1932 Eric Voit 1934 Thomas Fossati 1936 Email: thomas.fossati@arm.com 1938 Paul Rowe 1940 Carsten Bormann 1942 Email: cabo@tzi.org 1944 Giri Mandyam 1946 Email: mandyam@qti.qualcomm.com 1948 Authors' Addresses 1950 Henk Birkholz 1951 Fraunhofer SIT 1952 Rheinstrasse 75 1953 64295 Darmstadt 1954 Germany 1956 Email: henk.birkholz@sit.fraunhofer.de 1957 Dave Thaler 1958 Microsoft 1959 United States of America 1961 Email: dthaler@microsoft.com 1963 Michael Richardson 1964 Sandelman Software Works 1965 Canada 1967 Email: mcr+ietf@sandelman.ca 1969 Ned Smith 1970 Intel Corporation 1971 United States of America 1973 Email: ned.smith@intel.com 1975 Wei Pan 1976 Huawei Technologies 1978 Email: william.panwei@huawei.com