RATS Working Group                                           H. Birkholz
Internet-Draft                                            Fraunhofer SIT
Intended status: Informational                                 D. Thaler
Expires: 25 October 2021                                       Microsoft
                                                           M. Richardson
                                                Sandelman Software Works
                                                                N. Smith
                                                                   Intel
                                                                  W. Pan
                                                     Huawei Technologies
                                                           23 April 2021

Remote Attestation Procedures Architecture
draft-ietf-rats-architecture-12

Abstract

In network protocol exchanges it is often useful for one end of a communication to know whether the other end is in an intended operating state. This document provides an architectural overview of the entities involved that make such tests possible through the process of generating, conveying, and evaluating evidentiary claims. An attempt is made to provide for a model that is neutral toward processor architectures, the content of claims, and protocols.

Note to Readers

Discussion of this document takes place on the RATS Working Group mailing list (rats@ietf.org), which is archived at https://mailarchive.ietf.org/arch/browse/rats/.

Source for this draft and an issue tracker can be found at https://github.com/ietf-rats-wg/architecture.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 25 October 2021.

Copyright Notice

Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
2. Reference Use Cases
   2.1. Network Endpoint Assessment
   2.2. Confidential Machine Learning Model Protection
   2.3. Confidential Data Protection
   2.4. Critical Infrastructure Control
   2.5. Trusted Execution Environment Provisioning
   2.6. Hardware Watchdog
   2.7. FIDO Biometric Authentication
3. Architectural Overview
   3.1. Layered Attestation Environments
   3.2. Composite Device
   3.3. Implementation Considerations
4. Terminology
   4.1. Roles
   4.2. Artifacts
5. Topological Patterns
   5.1. Passport Model
   5.2. Background-Check Model
   5.3. Combinations
6. Roles and Entities
7. Trust Model
   7.1. Relying Party
   7.2. Attester
   7.3. Relying Party Owner
   7.4. Verifier
   7.5. Endorser, Reference Value Provider, and Verifier Owner
8. Conceptual Messages
   8.1. Evidence
   8.2. Endorsements
   8.3. Reference Values
   8.4. Attestation Results
   8.5. Appraisal Policies
9. Claims Encoding Formats
10. Freshness
   10.1. Explicit Timekeeping using Synchronized Clocks
   10.2. Implicit Timekeeping using Nonces
   10.3. Implicit Timekeeping using Epoch IDs
   10.4. Discussion
11. Privacy Considerations
12. Security Considerations
   12.1. Attester and Attestation Key Protection
      12.1.1. On-Device Attester and Key Protection
      12.1.2. Attestation Key Provisioning Processes
   12.2. Integrity Protection
   12.3. Epoch ID-based Attestation
   12.4. Trust Anchor Protection
13. IANA Considerations
14. Acknowledgments
15. Notable Contributions
16. Appendix A: Time Considerations
   16.1. Example 1: Timestamp-based Passport Model Example
   16.2. Example 2: Nonce-based Passport Model Example
   16.3. Example 3: Epoch ID-based Passport Model Example
   16.4. Example 4: Timestamp-based Background-Check Model Example
   16.5. Example 5: Nonce-based Background-Check Model Example
17. References
   17.1. Normative References
   17.2. Informative References
Contributors
Authors' Addresses

1. Introduction

The question of how one system can know that another system can be trusted has found new interest and relevance in a world where trusted computing elements are maturing in processor architectures.

Systems that have been attested and verified to be in a good state (for some value of "good") can improve overall system posture. Conversely, systems that cannot be attested and verified to be in a good state can be taken out of service, or otherwise flagged for repair.

For example:

* A bank back-end system might refuse to transact with another system that is not known to be in a good state.

* A healthcare system might refuse to transmit electronic healthcare records to a system that is not known to be in a good state.

In Remote Attestation Procedures (RATS), one peer (the "Attester") produces believable information about itself (Evidence) to enable a remote peer (the "Relying Party") to decide whether to consider that Attester a trustworthy peer or not.
RATS are facilitated by an additional vital party, the Verifier.

The Verifier appraises Evidence via appraisal policies and creates Attestation Results to support Relying Parties in their decision process. This document defines a flexible architecture consisting of attestation roles and their interactions via conceptual messages. Additionally, this document defines a universal set of terms that can be mapped to various existing and emerging Remote Attestation Procedures. Common topological patterns and the sequences of data flows associated with them, such as the "Passport Model" and the "Background-Check Model", are illustrated. The purpose is to define useful terminology for remote attestation and enable readers to map their solution architecture to the canonical attestation architecture provided here. Having a common terminology that provides well-understood meanings for common themes such as roles, device composition, topological patterns, and appraisal procedures is vital for semantic interoperability across solutions and platforms involving multiple vendors and providers.

Amongst other things, this document is about trust and trustworthiness. Trust is a choice one makes about another system. Trustworthiness is a quality of the other system that can be used in making one's decision to trust it or not. This is a subtle difference, and being familiar with it is crucial for using this document. Additionally, the concepts of freshness and trust relationships with respect to RATS are elaborated on to enable implementers to choose appropriate solutions to compose their Remote Attestation Procedures.

2. Reference Use Cases

This section covers a number of representative and generic use cases for remote attestation, independent of specific solutions.
The purpose is to provide motivation for various aspects of the architecture presented in this document. Many other use cases exist; this document does not intend to provide a complete list, only to illustrate a set of use cases that collectively cover all the functionality required in the architecture.

Each use case includes a description followed by a summary of the Attester and Relying Party roles derived from the use case.

2.1. Network Endpoint Assessment

Network operators want a trustworthy report that includes identity and version information about the hardware and software on the machines attached to their network, for purposes such as inventory, audit, anomaly detection, record maintenance and/or trending reports (logging). The network operator may also want a policy by which full access is only granted to devices that meet some definition of hygiene, and so wants to get Claims about such information and verify their validity. Remote attestation is desired to prevent vulnerable or compromised devices from getting access to the network and potentially harming others.

Typically, solutions start with a specific component (called a root of trust) that is intended to provide trustworthy device identity and protected storage for measurements. The system components perform a series of measurements that may be signed via functions provided by a root of trust; these measurements are considered Evidence about present system components, such as hardware, firmware, BIOS, software, etc.

Attester: A device desiring access to a network.

Relying Party: Network equipment such as a router, switch, or access point, responsible for admission of the device into the network.

2.2. Confidential Machine Learning Model Protection

A device manufacturer wants to protect its intellectual property.
The intellectual property's scope primarily encompasses the machine learning (ML) model that is deployed in the devices purchased by its customers. The protection goals include preventing attackers, potentially the customers themselves, from seeing the details of the model.

This typically works by having some protected environment in the device go through a remote attestation with some manufacturer service that can assess its trustworthiness. If remote attestation succeeds, then the manufacturer service releases either the model, or a key to decrypt a model already deployed on the Attester in encrypted form, to the requester.

Attester: A device desiring to run an ML model.

Relying Party: A server or service holding ML models it desires to protect.

2.3. Confidential Data Protection

This is a generalization of the ML model use case above, where the data can be any highly confidential data, such as health data about customers, payroll data about employees, future business plans, etc. As part of the attestation procedure, an assessment is made against a set of policies to evaluate the state of the system that is requesting the confidential data. Attestation is desired to prevent leaking data via compromised devices.

Attester: An entity desiring to retrieve confidential data.

Relying Party: An entity that holds confidential data for release to authorized entities.

2.4. Critical Infrastructure Control

Potentially harmful physical equipment (e.g., power grid, traffic control, hazardous chemical processing, etc.) is connected to a network in support of critical infrastructure. The organization managing such infrastructure needs to ensure that only authorized code and users can control corresponding critical processes, and that these processes are protected from unauthorized manipulation or other threats.
When a protocol operation can affect a critical system component of the infrastructure, devices attached to that critical component require some assurances depending on the security context, including that: a requesting device or application has not been compromised, and the requesters and actors act on applicable policies. As such, remote attestation can be used to only accept commands from requesters that are within policy.

Attester: A device or application wishing to control physical equipment.

Relying Party: A device or application connected to potentially dangerous physical equipment (hazardous chemical processing, traffic control, power grid, etc.).

2.5. Trusted Execution Environment Provisioning

A Trusted Application Manager (TAM) server is responsible for managing the applications running in a Trusted Execution Environment (TEE) of a client device, as described in [I-D.ietf-teep-architecture]. To achieve its purpose, the TAM needs to assess the state of a TEE, or of applications in the TEE, of a client device. The TEE conducts Remote Attestation Procedures with the TAM, which can then decide whether the TEE is already in compliance with the TAM's latest policy. If not, the TAM has to uninstall, update, or install approved applications in the TEE to bring it back into compliance with the TAM's policy.

Attester: A device with a TEE capable of running trusted applications that can be updated.

Relying Party: A TAM.

2.6. Hardware Watchdog

There is a class of malware that holds a device hostage and does not allow it to reboot to prevent updates from being applied. This can be a significant problem, because it allows a fleet of devices to be held hostage for ransom.

A solution to this problem is a watchdog timer implemented in a protected environment such as a Trusted Platform Module (TPM), as described in [TCGarch] section 43.3.
If the watchdog does not receive regular, fresh Attestation Results as to the system's health, it forces a reboot.

Attester: The device that should be protected from being held hostage for a long period of time.

Relying Party: A watchdog capable of triggering a procedure that resets a device into a known, good operational state.

2.7. FIDO Biometric Authentication

In the Fast IDentity Online (FIDO) protocol [WebAuthN] [CTAP], the device in the user's hand authenticates the human user, whether by biometrics (such as fingerprints) or by PIN and password. FIDO authentication puts a large amount of trust in the device compared to typical password authentication, because it is the device that verifies the biometric, PIN, and password inputs from the user, not the server. For the Relying Party to know that the authentication is trustworthy, the Relying Party needs to know that the Authenticator part of the device is trustworthy. The FIDO protocol employs remote attestation for this.

The FIDO protocol supports several remote attestation protocols and a mechanism by which new ones can be registered and added. Remote attestation defined by RATS is thus a candidate for use in the FIDO protocol.

Other biometric authentication protocols, such as the Chinese IFAA standard, WeChat Pay, and Google Pay, make use of remote attestation in one form or another.

Attester: Every FIDO Authenticator contains an Attester.

Relying Party: Any web site, mobile application back-end, or service that relies on authentication data based on biometric information.

3. Architectural Overview

Figure 1 depicts the data that flows between different roles, independent of protocol or use case.
************   *************    ************    *****************
* Endorser *   * Reference *    * Verifier *    * Relying Party *
************   * Value     *    * Owner    *    * Owner         *
      |        * Provider  *    ************    *****************
      |        *************         |                  |
      |              |               |                  |
 Endorsements   Reference        Appraisal          Appraisal
      |         Values           Policy for         Policy for
      |              |           Evidence           Attestation
      |              |               |              Results
      '---------.    |               |                  |
                |    |               |                  |
                v    v               v                  |
            .---------------------------.               |
   .------->|          Verifier         |------.        |
   |        '---------------------------'      |        |
   |                                           |        |
   |                             Attestation   |        |
   |                             Results       |        |
   |  Evidence                                 |        |
   |                                           |        |
   |                                           v        v
.----------.                          .-------------------.
| Attester |                          |   Relying Party   |
'----------'                          '-------------------'

Figure 1: Conceptual Data Flow

The text below summarizes the activities conducted by the roles illustrated in Figure 1.

An Attester creates Evidence that is conveyed to a Verifier.

A Verifier uses the Evidence, any Reference Values from Reference Value Providers, and any Endorsements from Endorsers, by applying an Appraisal Policy for Evidence, to assess the trustworthiness of the Attester. This procedure is called the appraisal of Evidence.

Subsequently, the Verifier generates Attestation Results for use by Relying Parties.

The Appraisal Policy for Evidence might be obtained from the Verifier Owner via some protocol mechanism, or might be configured into the Verifier by the Verifier Owner, or might be programmed into the Verifier, or might be obtained via some other mechanism.

A Relying Party uses Attestation Results by applying its own appraisal policy to make application-specific decisions, such as authorization decisions. This procedure is called the appraisal of Attestation Results.
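The role interactions and the two appraisal procedures described above can be sketched in code. The following is a minimal illustrative sketch, not a protocol or API defined by this architecture: all type names, function names, and policy functions are invented for illustration, and plain dictionaries stand in for the signed conceptual messages a real system would convey.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the conceptual messages of Figure 1.
@dataclass
class Evidence:
    claims: dict  # Claims collected and asserted by the Attester

@dataclass
class AttestationResult:
    trustworthy: bool
    details: dict = field(default_factory=dict)

def attester_create_evidence() -> Evidence:
    # An Attester creates Evidence that is conveyed to a Verifier.
    return Evidence(claims={"bootloader": "hash-abc", "kernel": "hash-def"})

def verifier_appraise(evidence: Evidence,
                      reference_values: dict,
                      endorsements: list,
                      policy_for_evidence) -> AttestationResult:
    # Appraisal of Evidence: the Verifier applies its Appraisal Policy
    # for Evidence, using Reference Values and Endorsements as inputs,
    # and generates an Attestation Result for Relying Parties.
    ok = policy_for_evidence(evidence.claims, reference_values, endorsements)
    return AttestationResult(trustworthy=ok, details=dict(evidence.claims))

def relying_party_decide(result: AttestationResult, policy_for_results) -> bool:
    # Appraisal of Attestation Results: the Relying Party applies its own
    # policy to make an application-specific (e.g., authorization) decision.
    return policy_for_results(result)

# Toy appraisal policies (pure stand-ins for real policy mechanisms):
match_references = lambda claims, refs, _end: all(
    claims.get(k) == v for k, v in refs.items())
accept_if_trustworthy = lambda res: res.trustworthy

evidence = attester_create_evidence()
result = verifier_appraise(evidence,
                           {"bootloader": "hash-abc", "kernel": "hash-def"},
                           endorsements=[],
                           policy_for_evidence=match_references)
authorized = relying_party_decide(result, accept_if_trustworthy)
```

Note that the Relying Party never appraises Evidence directly; it only consumes the Attestation Result, which is the separation of duties the architecture relies on.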
The Appraisal Policy for Attestation Results might be obtained from the Relying Party Owner via some protocol mechanism, or might be configured into the Relying Party by the Relying Party Owner, or might be programmed into the Relying Party, or might be obtained via some other mechanism.

See Section 8 for further discussion of the conceptual messages shown in Figure 1.

Two Types of Environments of an Attester

As shown in Figure 2, an Attester consists of at least one Attesting Environment and at least one Target Environment. In some implementations, the Attesting and Target Environments might be combined. Other implementations might have multiple Attesting and Target Environments, such as in the examples described in more detail in Section 3.1 and Section 3.2. Other examples may exist. All compositions of Attesting and Target Environments discussed in this architecture can be combined into more complex implementations.

.--------------------------------.
|                                |
|            Verifier            |
|                                |
'--------------------------------'
                       ^
                       |
.----------------------|-------------.
|                      |             |
|  .----------------.  |             |
|  |     Target     |  |             |
|  |  Environment   |  |             |
|  |                |  |  Evidence   |
|  '----------------'  |             |
|          |           |             |
|          |           |             |
|  Collect |           |             |
|  Claims  |           |             |
|          |           |             |
|          v           |             |
|   .-------------.    |             |
|   |  Attesting  |----'             |
|   | Environment |                  |
|   |             |                  |
|   '-------------'                  |
|                        Attester    |
'------------------------------------'

Figure 2: Two Types of Environments

Claims are collected from Target Environments. That is, Attesting Environments collect the values and the information to be represented in Claims by reading system registers and variables, calling into subsystems, and taking measurements on code, memory, or other security-related assets of the Target Environment.
Attesting Environments then format the Claims appropriately, and typically use key material and cryptographic functions, such as signing or cipher algorithms, to generate Evidence. There is no limit to or requirement on the types of hardware or software environments that can be used to implement an Attesting Environment, for example: Trusted Execution Environments (TEEs), embedded Secure Elements (eSEs), Trusted Platform Modules (TPMs) [TCGarch], or BIOS firmware.

An arbitrary execution environment may not, by default, be capable of Claims collection for a given Target Environment. Execution environments that are designed specifically to be capable of Claims collection are referred to in this document as Attesting Environments. For example, a TPM doesn't actively collect Claims itself; instead, it requires another component to feed various values to the TPM. Thus, an Attesting Environment in such a case would be the combination of the TPM together with whatever component is feeding it the measurements.

3.1. Layered Attestation Environments

By definition, the Attester role generates Evidence. An Attester may consist of one or more nested environments (layers). The root layer of an Attester includes at least one root of trust. In order to appraise Evidence generated by an Attester, the Verifier needs to trust the Attester's root of trust. Trust in the Attester's root of trust can be established in various ways, as discussed in Section 7.4.

In layered attestation, a root of trust is the initial Attesting Environment. Claims can be collected from or about each layer. The corresponding Claims can be structured in a nested fashion that reflects the nesting of the Attester's layers. Normally, Claims are not self-asserted; rather, a previous layer acts as the Attesting Environment for the next layer.
Claims about a root of trust typically are asserted by an Endorser.

The example device illustrated in Figure 3 includes (A) a BIOS stored in read-only memory, (B) a bootloader, and (C) an operating system kernel.

.-------------.  Endorsement for ROM
|  Endorser   |------------------------.
'-------------'                        |
                                       v
.-------------.  Reference        .----------.
| Reference   |  Values for       |          |
| Value       |------------------>| Verifier |
| Provider(s) |  ROM, bootloader, |          |
'-------------'  and kernel       '----------'
                                       ^
.-----------------------------------.  |
|                                   |  |
|  .----------------------------.   |  |
|  |           Kernel           |   |  |
|  |                            |   |  |  Layered
|  |           Target           |   |  |  Evidence
|  |         Environment        |   |  |  for
|  '----------------------------'   |  |  bootloader
|        Collect |                  |  |  and
|         Claims |                  |  |  kernel
|  .-------------|--------------.   |  |
|  | Bootloader  v              |   |  |
|  |             .-----------.  |   |  |
|  |   Target    | Attesting |  |   |  |
|  | Environment |Environment|---------'
|  |             '-----------'  |   |
|  |                   ^        |   |
|  '-------------------|--------'   |
|        Collect |     | Evidence   |
|         Claims v     | for        |
|  .-------------------|--------.   |
|  |        ROM        |        |   |
|  |                            |   |
|  |          Attesting         |   |
|  |         Environment        |   |
|  '----------------------------'   |
|                                   |
'-----------------------------------'

Figure 3: Layered Attester

The first Attesting Environment, the read-only BIOS in this example, has to ensure the integrity of the bootloader (the first Target Environment). There are potentially multiple kernels to boot, and the decision is up to the bootloader. Only a bootloader with intact integrity will make an appropriate decision. Therefore, the Claims relating to the integrity of the bootloader have to be measured securely. At this stage of the boot cycle of the device, the Claims collected typically cannot be composed into Evidence.
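The measurement chain of this example can be sketched as follows. This is a minimal illustration under stated assumptions: the image contents, key values, and serialization are invented, and an HMAC stands in for the hardware-protected signing that a real root of trust would perform.

```python
import hashlib
import hmac

# Invented keys; a real Attester keeps these in hardware, not in memory.
BIOS_KEY = b"rom-protected-key"        # usable only by the ROM/BIOS layer
BOOTLOADER_KEY = b"bootloader-key"     # available once the bootloader runs

def measure(image: bytes) -> str:
    # An Attesting Environment measures the next layer before launching it.
    return hashlib.sha256(image).hexdigest()

def sign(claims: dict, key: bytes) -> str:
    # Stand-in for the Attesting Environment's signing function.
    payload = repr(sorted(claims.items())).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

bootloader_image = b"bootloader code"
kernel_image = b"kernel code"

# Layer 0: the ROM/BIOS measures the bootloader and signs that Claim,
# so the bootloader cannot later alter Claims about itself.
claims_about_bootloader = {"bootloader_digest": measure(bootloader_image)}
evidence_bootloader = {
    "claims": claims_about_bootloader,
    "sig": sign(claims_about_bootloader, BIOS_KEY),
}

# Layer 1: the measured bootloader in turn acts as the Attesting
# Environment for the kernel, producing the second set of Claims.
claims_about_kernel = {"kernel_digest": measure(kernel_image)}
evidence_kernel = {
    "claims": claims_about_kernel,
    "sig": sign(claims_about_kernel, BOOTLOADER_KEY),
}

# The final Evidence nests both Claim sets, reflecting the layering.
layered_evidence = {"bootloader": evidence_bootloader,
                    "kernel": evidence_kernel}
```

A Verifier holding reference digests for both images, plus Endorsements for the ROM's key, could appraise each layer of this structure in turn.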
After the boot sequence is started, the BIOS conducts the most important and defining feature of layered attestation, which is that the successfully measured bootloader now becomes (or contains) an Attesting Environment for the next layer. This procedure in layered attestation is sometimes called "staging". It is important that the bootloader not be able to alter any Claims about itself that were collected by the BIOS. This can be ensured by having those Claims be either signed by the BIOS or stored in a tamper-proof manner by the BIOS.

Continuing with this example, the bootloader's Attesting Environment is now in charge of collecting Claims about the next Target Environment, which in this example is the kernel to be booted. The final Evidence thus contains two sets of Claims: one set about the bootloader as measured and signed by the BIOS, plus a set of Claims about the kernel as measured and signed by the bootloader.

This example could be extended further by making the kernel become another Attesting Environment for an application as another Target Environment. This would result in a third set of Claims in the Evidence pertaining to that application.

The essence of this example is a cascade of staged environments. Each environment has the responsibility of measuring the next environment before the next environment is started. In general, the number of layers may vary by device or implementation, and an Attesting Environment might even have multiple Target Environments that it measures, rather than only one as shown by example in Figure 3.

3.2. Composite Device

A composite device is an entity composed of multiple sub-entities such that its trustworthiness has to be determined by the appraisal of all these sub-entities.
Each sub-entity has at least one Attesting Environment collecting the Claims from at least one Target Environment; with those Claims, the sub-entity generates Evidence about its trustworthiness. Therefore, each sub-entity can be called an Attester. Among all the Attesters, there may be only some which have the ability to communicate with the Verifier while others do not.

For example, a carrier-grade router consists of a chassis and multiple slots. The trustworthiness of the router depends on all its slots' trustworthiness. Each slot has an Attesting Environment, such as a TEE, collecting the Claims of its boot process, after which it generates Evidence from the Claims.

Among these slots, only a "main" slot can communicate with the Verifier while other slots cannot. But other slots can communicate with the main slot by the links between them inside the router. So the main slot collects the Evidence of the other slots, produces the final Evidence of the whole router, and conveys the final Evidence to the Verifier. Therefore the router is a composite device, each slot is an Attester, and the main slot is the lead Attester.

Another example is a multi-chassis router composed of multiple single carrier-grade routers. Multi-chassis router setups create redundancy groups that provide higher throughput by interconnecting multiple routers in these groups, which can be treated as one logical router for simpler management. A multi-chassis router setup provides a management point that connects to the Verifier. Typically one router in the group is designated as the main router. Other routers in the multi-chassis setup are connected to the main router only via physical network links and are therefore managed and appraised via the main router's help.
In consequence, a multi-chassis router setup 615 is a composite device, each router is an Attester, and the main 616 router is the lead Attester. 618 Figure 4 depicts the conceptual data flow for a composite device. 620 .-----------------------------. 621 | Verifier | 622 '-----------------------------' 623 ^ 624 | 625 | Evidence of 626 | Composite Device 627 | 628 .----------------------------------|-------------------------------. 629 | .--------------------------------|-----. .------------. | 630 | | Collect .------------. | | | | 631 | | Claims .--------->| Attesting |<--------| Attester B |-. | 632 | | | |Environment | | '------------. | | 633 | | .----------------. | |<----------| Attester C |-. | 634 | | | Target | | | | '------------' | | 635 | | | Environment(s) | | |<------------| ... | | 636 | | | | '------------' | Evidence '------------' | 637 | | '----------------' | of | 638 | | | Attesters | 639 | | lead Attester A | (via Internal Links or | 640 | '--------------------------------------' Network Connections) | 641 | | 642 | Composite Device | 643 '------------------------------------------------------------------' 645 Figure 4: Composite Device 647 In a composite device, each Attester generates its own Evidence by 648 its Attesting Environment(s) collecting the Claims from its Target 649 Environment(s). The lead Attester collects Evidence from other 650 Attesters and conveys it to a Verifier. Collection of Evidence from 651 sub-entities may itself be a form of Claims collection that results 652 in Evidence asserted by the lead Attester. The lead Attester 653 generates Evidence about the layout of the whole composite device, 654 while sub-Attesters generate Evidence about their respective 655 (sub-)modules. 657 In this scenario, the trust model described in Section 7 can also be 658 applied to an inside Verifier. 660 3.3. Implementation Considerations 662 An entity can take on multiple RATS roles (e.g., Attester, Verifier, 663 Relying Party, etc.) 
at the same time. Multiple entities can 664 cooperate to implement a single RATS role as well. In essence, the 665 combination of roles and entities can be arbitrary. For example, in 666 the composite device scenario, the entity inside the lead Attester 667 can also take on the role of a Verifier, and the outer 668 Verifier entity can take on the role of a Relying Party. After collecting 669 the Evidence of other Attesters, this inside Verifier uses 670 Endorsements and appraisal policies (obtained the same way as by any 671 other Verifier) as part of the appraisal procedures that generate 672 Attestation Results. The inside Verifier then conveys the 673 Attestation Results of other Attesters to the outside Verifier, 674 whether or not in the same conveyance protocol as the 675 Evidence. 677 4. Terminology 679 This document uses the following terms. 681 4.1. Roles 683 Attester: A role performed by an entity (typically a device) whose 684 Evidence must be appraised in order to infer the extent to which 685 the Attester is considered trustworthy, such as when deciding 686 whether it is authorized to perform some operation. 688 Produces: Evidence 690 Relying Party: A role performed by an entity that depends on the 691 validity of information about an Attester, for purposes of 692 reliably applying application-specific actions. Compare /relying 693 party/ in [RFC4949]. 695 Consumes: Attestation Results 697 Verifier: A role performed by an entity that appraises the validity 698 of Evidence about an Attester and produces Attestation Results to 699 be used by a Relying Party. 701 Consumes: Evidence, Reference Values, Endorsements, Appraisal 702 Policy for Evidence 704 Produces: Attestation Results 706 Relying Party Owner: A role performed by an entity (typically an 707 administrator) that is authorized to configure Appraisal Policy 708 for Attestation Results in a Relying Party.
710 Produces: Appraisal Policy for Attestation Results 712 Verifier Owner: A role performed by an entity (typically an 713 administrator) that is authorized to configure Appraisal Policy 714 for Evidence in a Verifier. 716 Produces: Appraisal Policy for Evidence 718 Endorser: A role performed by an entity (typically a manufacturer) 719 whose Endorsements help Verifiers appraise the authenticity of 720 Evidence. 722 Produces: Endorsements 724 Reference Value Provider: A role performed by an entity (typically a 725 manufacturer) whose Reference Values help Verifiers appraise 726 Evidence to determine whether acceptable known Claims have been 727 recorded by the Attester. 729 Produces: Reference Values 731 4.2. Artifacts 733 Claim: A piece of asserted information, often in the form of a name/ 734 value pair. Claims make up the usual structure of Evidence and 735 other RATS artifacts. Compare /claim/ in [RFC7519]. 737 Endorsement: A secure statement that an Endorser vouches for the 738 integrity of an Attester's various capabilities such as Claims 739 collection and Evidence signing. 741 Consumed By: Verifier 743 Produced By: Endorser 745 Evidence: A set of Claims generated by an Attester to be appraised 746 by a Verifier. Evidence may include configuration data, 747 measurements, telemetry, or inferences. 749 Consumed By: Verifier 751 Produced By: Attester 753 Attestation Result: The output generated by a Verifier, typically 754 including information about an Attester, where the Verifier 755 vouches for the validity of the results. 757 Consumed By: Relying Party 759 Produced By: Verifier 761 Appraisal Policy for Evidence: A set of rules that informs how a 762 Verifier evaluates the validity of information about an Attester. 763 Compare /security policy/ in [RFC4949].
765 Consumed By: Verifier 767 Produced By: Verifier Owner 769 Appraisal Policy for Attestation Results: A set of rules that directs 770 how a Relying Party uses the Attestation Results regarding an 771 Attester that were generated by the Verifiers. Compare /security policy/ in 772 [RFC4949]. 774 Consumed By: Relying Party 776 Produced By: Relying Party Owner 778 Reference Values: A set of values against which values of Claims can 779 be compared as part of applying an Appraisal Policy for Evidence. 780 Reference Values are sometimes referred to in other documents as 781 known-good values, golden measurements, or nominal values, 782 although those terms typically assume comparison for equality, 783 whereas here Reference Values might be more general and be used in 784 any sort of comparison. 786 Consumed By: Verifier 788 Produced By: Reference Value Provider 790 5. Topological Patterns 792 Figure 1 shows a data-flow diagram for communication between an 793 Attester, a Verifier, and a Relying Party. The Attester conveys its 794 Evidence to the Verifier for appraisal, and the Relying Party 795 receives the Attestation Result from the Verifier. This section 796 refines the data-flow diagram by describing two reference models, as 797 well as one example composition thereof. The discussion that follows 798 is for illustrative purposes only and does not constrain the 799 interactions between RATS roles to the presented patterns. 801 5.1. Passport Model 803 The passport model is so named because of its resemblance to how 804 nations issue passports to their citizens. The nature of the 805 Evidence that an individual needs to provide to its local authority 806 is specific to the country involved. The citizen retains control of 807 the resulting passport document and presents it to other entities, 808 such as an airport immigration desk, when it needs to assert a 809 citizenship or identity Claim.
The passport is considered sufficient 810 because it vouches for the citizenship and identity Claims, and it is 811 issued by a trusted authority. Thus, in this immigration desk 812 analogy, the passport-issuing agency is a Verifier, the passport is 813 an Attestation Result, and the immigration desk is a Relying Party. 815 In this model, an Attester conveys Evidence to a Verifier, which 816 compares the Evidence against its appraisal policy. The Verifier 817 then gives back an Attestation Result. If the Attestation Result 818 is successful, the Attester can then present the Attestation 819 Result (and possibly additional Claims) to a Relying Party, which 820 then compares this information against its own appraisal policy. 822 The process may fail in three ways: 824 * First, the Verifier may not issue a positive Attestation Result 825 because the Evidence does not pass the Appraisal Policy for Evidence. 827 * Second, the Relying Party may examine the 828 Attestation Result and find that, based 829 upon the Appraisal Policy for Attestation Results, the result does 830 not pass the policy. 832 * Third, the Verifier may be unreachable or unavailable. 834 Since the resource access protocol between the Attester and Relying 835 Party includes an Attestation Result, in this model the details of 836 that protocol constrain the serialization format of the Attestation 837 Result. The format of the Evidence, on the other hand, is only 838 constrained by the Attester-Verifier remote attestation protocol. 839 This implies that interoperability and standardization are more 840 relevant for Attestation Results than they are for Evidence.
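As an illustration only, the passport model's round trip can be sketched in a few lines of Python. All names and the toy appraisal policy below are hypothetical, and for brevity a shared HMAC key stands in for the Verifier's real signing key pair:

```python
# Illustrative sketch of the passport model (all names hypothetical).
# The Attester first obtains an Attestation Result from the Verifier,
# then presents that result to the Relying Party.

import hashlib
import hmac
import json
import time

VERIFIER_KEY = b"verifier-signing-key"  # stands in for the Verifier's real key

def verifier_appraise(evidence: dict) -> dict:
    """Verifier: compare Evidence against its Appraisal Policy for Evidence."""
    compliant = evidence.get("firmware_version", 0) >= 2  # toy policy
    result = {"compliant": compliant, "issued_at": time.time()}
    # The Verifier signs the Attestation Result so that the Relying Party
    # needs a trust relationship only with the Verifier.
    payload = json.dumps(result, sort_keys=True).encode()
    result["sig"] = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return result

def relying_party_accept(result: dict) -> bool:
    """Relying Party: verify the Verifier's signature, then apply its
    Appraisal Policy for Attestation Results (here: just the compliance bit)."""
    sig = result.pop("sig")
    payload = json.dumps(result, sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return result["compliant"]

# The Attester conveys Evidence to the Verifier ...
attestation_result = verifier_appraise({"firmware_version": 3})
# ... and later presents the Attestation Result to the Relying Party.
print(relying_party_accept(attestation_result))  # True
```

Note that only the Attestation Result, not the Evidence, crosses the Attester-to-Relying-Party protocol, which is why that protocol constrains the Attestation Result's serialization format.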
842 +------------+ 843 | | Compare Evidence 844 | Verifier | against appraisal policy 845 | | 846 +------------+ 847 ^ | 848 Evidence | | Attestation 849 | | Result 850 | v 851 +------------+ +-------------+ 852 | |------------->| | Compare Attestation 853 | Attester | Attestation | Relying | Result against 854 | | Result | Party | appraisal policy 855 +------------+ +-------------+ 857 Figure 5: Passport Model 859 5.2. Background-Check Model 861 The background-check model is so named because of its resemblance to 862 how employers and volunteer organizations perform background checks. 863 When a prospective employee provides Claims about education or 864 previous experience, the employer will contact the respective 865 institutions or former employers to validate the Claim. Volunteer 866 organizations often perform police background checks on volunteers in 867 order to determine the volunteer's trustworthiness. Thus, in this 868 analogy, a prospective volunteer is an Attester, the organization is 869 the Relying Party, and the organization that issues a report is a 870 Verifier. 872 In this model, an Attester conveys Evidence to a Relying Party, which 873 simply passes it on to a Verifier. The Verifier then compares the 874 Evidence against its appraisal policy, and returns an Attestation 875 Result to the Relying Party. The Relying Party then compares the 876 Attestation Result against its own appraisal policy. 878 The resource access protocol between the Attester and Relying Party 879 includes Evidence rather than an Attestation Result, but that 880 Evidence is not processed by the Relying Party. Since the Evidence 881 is merely forwarded on to a trusted Verifier, any serialization 882 format can be used for Evidence because the Relying Party does not 883 need a parser for it. The only requirement is that the Evidence can 884 be _encapsulated in_ the format required by the resource access 885 protocol between the Attester and Relying Party.
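The encapsulation property described above can be illustrated with a short sketch (hypothetical names and formats): the Relying Party treats Evidence as opaque bytes, needing no parser for the Evidence format itself, and merely wraps it in its own protocol's native envelope:

```python
# Sketch of the background-check model's forwarding step (hypothetical
# names and formats). The Relying Party never inspects the Evidence.

import base64
import json

def attester_make_evidence() -> bytes:
    # Any serialization the Attester-Verifier pair agrees on; opaque to
    # the Relying Party.
    return json.dumps({"pcr0": "a1b2", "boot": "measured"}).encode()

def relying_party_forward(evidence: bytes) -> dict:
    # Encapsulate the opaque Evidence in the resource access protocol's
    # native format (here: a JSON envelope carrying base64 bytes).
    return {"evidence_b64": base64.b64encode(evidence).decode()}

def verifier_parse_and_appraise(envelope: dict) -> dict:
    # Only the Verifier needs a parser for the Evidence format.
    evidence = json.loads(base64.b64decode(envelope["evidence_b64"]))
    return {"compliant": evidence.get("boot") == "measured"}

envelope = relying_party_forward(attester_make_evidence())
print(verifier_parse_and_appraise(envelope))  # {'compliant': True}
```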
887 However, as in the passport model, an Attestation Result is still 888 consumed by the Relying Party. Code footprint and attack surface 889 area can be minimized by using a serialization format for which the 890 Relying Party already needs a parser to support the protocol between 891 the Attester and Relying Party, which may be an existing standard or 892 widely deployed resource access protocol. Such minimization is 893 especially important if the Relying Party is a constrained node. 895 +-------------+ 896 | | Compare Evidence 897 | Verifier | against appraisal 898 | | policy 899 +-------------+ 900 ^ | 901 Evidence | | Attestation 902 | | Result 903 | v 904 +------------+ +-------------+ 905 | |-------------->| | Compare Attestation 906 | Attester | Evidence | Relying | Result against 907 | | | Party | appraisal policy 908 +------------+ +-------------+ 910 Figure 6: Background-Check Model 912 5.3. Combinations 914 One variation of the background-check model is where the Relying 915 Party and the Verifier are on the same machine, performing both 916 functions together. In this case, there is no need for a protocol 917 between the two. 919 It is also worth pointing out that the choice of model depends on the 920 use case, and that different Relying Parties may use different 921 topological patterns. 923 The same device may need to create Evidence for different Relying 924 Parties and/or different use cases. For instance, it might use one 925 model to provide Evidence to a network infrastructure device to gain 926 access to the network, and the other model to provide Evidence to a 927 server holding confidential data to gain access to that data. As 928 such, both models may simultaneously be in use by the same device. 930 Figure 7 shows another example of a combination where Relying Party 1 931 uses the passport model, whereas Relying Party 2 uses an extension of 932 the background-check model.
Specifically, in addition to the basic 933 functionality shown in Figure 6, Relying Party 2 actually provides 934 the Attestation Result back to the Attester, allowing the Attester to 935 use it with other Relying Parties. This is the model that the 936 Trusted Application Manager plans to support in the TEEP architecture 937 [I-D.ietf-teep-architecture]. 939 +-------------+ 940 | | Compare Evidence 941 | Verifier | against appraisal policy 942 | | 943 +-------------+ 944 ^ | 945 Evidence | | Attestation 946 | | Result 947 | v 948 +-------------+ 949 | | Compare 950 | Relying | Attestation Result 951 | Party 2 | against appraisal policy 952 +-------------+ 953 ^ | 954 Evidence | | Attestation 955 | | Result 956 | v 957 +-------------+ +-------------+ 958 | |-------------->| | Compare Attestation 959 | Attester | Attestation | Relying | Result against 960 | | Result | Party 1 | appraisal policy 961 +-------------+ +-------------+ 963 Figure 7: Example Combination 965 6. Roles and Entities 967 An entity in the RATS architecture includes at least one of the roles 968 defined in this document. 970 An entity can aggregate more than one role into itself, such as being 971 both a Verifier and a Relying Party, or being both a Reference Value 972 Provider and an Endorser. As such, any conceptual messages (see 973 Section 8 for more discussion) originating from such roles might also 974 be combined. For example, Reference Values might be conveyed as part 975 of an appraisal policy if the Verifier Owner and Reference Value 976 Provider roles are combined. Similarly, Reference Values might be 977 conveyed as part of an Endorsement if the Endorser and Reference 978 Value Provider roles are combined. 980 Interactions between roles aggregated into the same entity do not 981 necessarily use the Internet Protocol. Such interactions might use a 982 loopback device or other IP-based communication between separate 983 environments, but they do not have to. 
Alternative channels to 984 convey conceptual messages include function calls, sockets, GPIO 985 interfaces, local buses, or hypervisor calls. This type of 986 conveyance is typically found in composite devices. Most 987 importantly, these conveyance methods are out of scope for RATS, but 988 they are presumed to exist in order to convey conceptual messages 989 appropriately between roles. 991 In essence, an entity that combines more than one role creates and 992 consumes the corresponding conceptual messages as defined in this 993 document. 995 7. Trust Model 997 7.1. Relying Party 999 This document covers scenarios for which a Relying Party trusts a 1000 Verifier that can appraise the trustworthiness of information about 1001 an Attester. Such trust might come from the Relying Party trusting the 1002 Verifier (or its public key) directly, or from trusting an 1003 entity (e.g., a Certificate Authority) that is in the Verifier's 1004 certificate path. Such trust is expressed by storing one or more 1005 "trust anchors" in a secure location known as a trust anchor store. 1007 As defined in [RFC6024], "A trust anchor represents an authoritative 1008 entity via a public key and associated data. The public key is used 1009 to verify digital signatures, and the associated data is used to 1010 constrain the types of information for which the trust anchor is 1011 authoritative." The trust anchor may be a certificate or it may be a 1012 raw public key along with additional data, if necessary, such as its 1013 public key algorithm and parameters. 1015 The Relying Party might implicitly trust a Verifier, such as in a 1016 Verifier/Relying Party combination where the Verifier and Relying 1017 Party roles are combined.
Or, for a stronger level of security, the 1018 Relying Party might require that the Verifier first provide 1019 information about itself that the Relying Party can use to assess the 1020 trustworthiness of the Verifier before accepting its Attestation 1021 Results. 1023 For example, one explicit way for a Relying Party "A" to establish 1024 such trust in a Verifier "B" would be for B to first act as an 1025 Attester where A acts as a combined Verifier/Relying Party. If A 1026 then accepts B as trustworthy, it can choose to accept B as a 1027 Verifier for other Attesters. 1029 As another example, the Relying Party can establish trust in the 1030 Verifier by out-of-band establishment of key material, combined with 1031 a protocol like TLS to communicate. It is assumed that the Verifier 1032 has not been compromised between the establishment of the trusted key 1033 material and the creation of the Evidence. 1035 Similarly, the Relying Party also needs to trust the Relying Party 1036 Owner for providing its Appraisal Policy for Attestation Results, and 1037 in some scenarios the Relying Party might even require that the 1038 Relying Party Owner go through a remote attestation procedure with it 1039 before the Relying Party will accept an updated policy. This can be 1040 done similarly to how a Relying Party could establish trust in a 1041 Verifier as discussed above. 1043 7.2. Attester 1045 In some scenarios, Evidence might contain sensitive information such 1046 as Personally Identifiable Information (PII) or system identifiable 1047 information. Thus, an Attester must trust entities to which it 1048 conveys Evidence not to reveal sensitive data to unauthorized 1049 parties. The Verifier might share this information with other 1050 authorized parties, according to a governing policy that addresses the 1051 handling of sensitive information (potentially included in Appraisal 1052 Policies for Evidence).
In the background-check model, this Evidence 1053 may also be revealed to Relying Parties. 1055 When Evidence contains sensitive information, an Attester typically 1056 requires that a Verifier authenticate itself (e.g., at TLS session 1057 establishment) and might even request a remote attestation before the 1058 Attester sends the sensitive Evidence. This can be done by having 1059 the Attester first act as a Verifier/Relying Party, and the Verifier 1060 act as its own Attester, as discussed above. 1062 7.3. Relying Party Owner 1064 The Relying Party Owner might also require that the Relying Party 1065 first act as an Attester, providing Evidence that the Owner can 1066 appraise, before the Owner would give the Relying Party an updated 1067 policy that might contain sensitive information. In such a case, 1068 authentication or attestation in both directions might be needed, in 1069 which case typically one side's Evidence must be considered safe to 1070 share with an untrusted entity, in order to bootstrap the sequence. 1071 See Section 11 for more discussion. 1073 7.4. Verifier 1075 The Verifier trusts (or more specifically, the Verifier's security 1076 policy is written in a way that configures the Verifier to trust) a 1077 manufacturer, or the manufacturer's hardware, so as to be able to 1078 appraise the trustworthiness of that manufacturer's devices. Such 1079 trust is expressed by storing one or more trust anchors in the 1080 Verifier's trust anchor store. 1082 In a typical solution, a Verifier comes to trust an Attester 1083 indirectly by having an Endorser (such as a manufacturer) vouch for 1084 the Attester's ability to securely generate Evidence, in which case 1085 the Endorser's key material is stored in the Verifier's trust anchor 1086 store. 1088 In some solutions, a Verifier might be configured to directly trust 1089 an Attester by having the Attester's key material 1090 (rather than the Endorser's) in its trust anchor store.
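A minimal sketch of such a trust anchor store, assuming for brevity a curated list of symmetric keys rather than certificates or raw public keys (the structure and names are hypothetical, not a RATS-defined interface):

```python
# Sketch of a Verifier's trust anchor store under direct trust
# (hypothetical structure): the Attester's own key material is stored,
# so the Verifier accepts Evidence only from those specific devices.
# Symmetric keys stand in for certificates or raw public keys.

import hashlib
import hmac

trust_anchor_store = {
    # key-id -> key material the Verifier was provisioned with
    "attester-0042": b"device-specific-key",
}

def verify_evidence(key_id: str, payload: bytes, mac_hex: str) -> bool:
    key = trust_anchor_store.get(key_id)
    if key is None:
        return False  # no trust anchor: Evidence cannot be appraised
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac_hex)

payload = b'{"fw": 3}'
mac = hmac.new(b"device-specific-key", payload, hashlib.sha256).hexdigest()
print(verify_evidence("attester-0042", payload, mac))  # True
print(verify_evidence("attester-9999", payload, mac))  # False: not in store
```

Adding an Endorser's key instead of per-device keys would widen the set of accepted Attesters to everything the Endorser vouches for, which is the scalability trade-off discussed next.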
1092 Such direct trust must first be established at the time of trust 1093 anchor store configuration either by checking with an Endorser at 1094 that time, or by conducting a security analysis of the specific 1095 device. Having the Attester directly in the trust anchor store 1096 narrows the Verifier's trust to only specific devices rather than all 1097 devices the Endorser might vouch for, such as all devices 1098 manufactured by the same manufacturer in the case where the Endorser 1099 is a manufacturer. 1101 Such narrowing is often important since physical possession of a 1102 device can also be used to conduct a number of attacks, and so a 1103 device in a physically secure environment (such as one's own 1104 premises) may be considered trusted whereas devices owned by others 1105 would not be. This often results in a desire either to have the 1106 owner run their own Endorser that endorses only the devices they 1107 own, or to place Attesters directly in the trust anchor store. When 1108 the owner has many Attesters, the use of an Endorser enables better 1109 scalability. 1111 That is, a Verifier might appraise the trustworthiness of an 1112 application component, operating system component, or service under 1113 the assumption that information provided about it by the lower-layer 1114 firmware or software is true. A stronger level of assurance of 1115 security comes when information can be vouched for by hardware or by 1116 ROM code, especially if such hardware is physically resistant to 1117 hardware tampering. In most cases, components that have to be 1118 vouched for via Endorsements because no Evidence is generated about 1119 them are referred to as roots of trust. 1121 Because the manufacturer has arranged for an Attesting Environment to be 1122 provisioned with key material with which to sign Evidence, the 1123 Verifier is then provided with some way of verifying the signature on 1124 the Evidence.
This may be in the form of an appropriate trust 1125 anchor, or the Verifier may be provided with a database of public 1126 keys (rather than certificates) or even carefully curated and secured 1127 lists of symmetric keys. 1129 How the Verifier validates the signatures 1130 produced by the Attester is critical to the secure operation of a 1131 remote attestation system, but is not the subject of standardization 1132 within this architecture. 1134 A conveyance protocol that provides authentication and integrity 1135 protection can be used to convey Evidence that is otherwise 1136 unprotected (e.g., not signed). Appropriate conveyance of 1137 unprotected Evidence (e.g., [I-D.birkholz-rats-uccs]) relies on the 1138 following protection capabilities of the conveyance protocol: 1140 1. The key material used to authenticate and integrity-protect the 1141 conveyance channel is trusted by the Verifier to speak for the 1142 Attesting Environment(s) that collected Claims about the Target 1143 Environment(s). 1145 2. All unprotected Evidence that is conveyed is supplied exclusively 1146 by the Attesting Environment that has the key material that 1147 protects the conveyance channel. 1149 3. The root of trust protects both the conveyance channel key 1150 material and the Attesting Environment with equivalent strength 1151 protections. 1153 As illustrated in [I-D.birkholz-rats-uccs], an entity that receives 1154 unprotected Evidence via a trusted conveyance channel always takes on 1155 the responsibility of vouching for the Evidence's authenticity and 1156 freshness. If protected Evidence is generated, the Attester's 1157 Attesting Environments take on that responsibility. In cases where 1158 unprotected Evidence is processed by a Verifier, Relying Parties have 1159 to trust that the Verifier is capable of handling Evidence in a 1160 manner that preserves the Evidence's authenticity and freshness.
1161 Generating and conveying unprotected Evidence always creates 1162 significant risk, and the benefits of that approach have to be 1163 carefully weighed against potential drawbacks. 1165 See Section 12 for discussion on security strength. 1167 7.5. Endorser, Reference Value Provider, and Verifier Owner 1169 In some scenarios, the Endorser, Reference Value Provider, and 1170 Verifier Owner may need to trust the Verifier before giving the 1171 Endorsement, Reference Values, or appraisal policy to it. This can 1172 be done similarly to how a Relying Party might establish trust in a 1173 Verifier. 1175 As discussed in Section 7.3, authentication or attestation in both 1176 directions might be needed, in which case typically one side's 1177 identity or Evidence must be considered safe to share with an 1178 untrusted entity, in order to bootstrap the sequence. See Section 11 1179 for more discussion. 1181 8. Conceptual Messages 1183 Figure 1 illustrates the flow of conceptual messages between 1184 various roles. This section provides additional elaboration and 1185 implementation considerations. It is the responsibility of protocol 1186 specifications to define the actual data format and semantics of any 1187 relevant conceptual messages. 1189 8.1. Evidence 1191 Evidence is a set of Claims about the target environment that reveal 1192 security-relevant operational status, health, configuration, or 1193 construction. Evidence is appraised by a Verifier to establish 1194 its relevance, compliance, and timeliness. Claims need to be 1195 collected in a reliable manner. Evidence needs to be 1196 securely associated with the target environment so that the Verifier 1197 cannot be tricked into accepting Claims originating from a different 1198 environment (that may be more trustworthy). Evidence also must be 1199 protected from man-in-the-middle attackers who may observe, change, or 1200 misdirect Evidence as it travels from Attester to Verifier.
The 1201 timeliness of Evidence can be captured using Claims that pinpoint the 1202 time or interval when changes in operational status, health, and so 1203 forth occur. 1205 8.2. Endorsements 1207 An Endorsement is a secure statement that some entity (e.g., a 1208 manufacturer) vouches for the integrity of the device's signing 1209 capability. For example, if the signing capability is in hardware, 1210 then an Endorsement might be a manufacturer certificate that signs a 1211 public key whose corresponding private key is only known inside the 1212 device's hardware. Thus, when Evidence and such an Endorsement are 1213 used together, an appraisal procedure can be conducted based on 1214 appraisal policies that may not be specific to the device instance, 1215 but merely specific to the manufacturer providing the Endorsement. 1216 For example, an appraisal policy might simply check that devices from 1217 a given manufacturer have information matching a set of Reference 1218 Values, or an appraisal policy might apply more complex logic 1219 when appraising the validity of information. 1221 However, while an appraisal policy that treats all devices from a 1222 given manufacturer the same may be appropriate for some use cases, it 1223 would be inappropriate to use such an appraisal policy as the sole 1224 means of authorization for use cases that wish to constrain _which_ 1225 compliant devices are considered authorized for some purpose. For 1226 example, an enterprise using remote attestation for Network Endpoint 1227 Assessment [RFC5209] may not wish to let every healthy laptop from 1228 the same manufacturer onto the network, but instead may want to let 1229 only devices that it legally owns onto the network.
Thus, an Endorsement 1230 may be helpful in authenticating information about a 1231 device, but is not necessarily sufficient to authorize access to 1232 resources that may need device-specific information such as a public 1233 key for the device or component or user on the device. 1235 8.3. Reference Values 1237 Reference Values used in appraisal procedures come from a Reference 1238 Value Provider and are then used by the Verifier to compare against 1239 Evidence. Evidence that matches Reference Values produces 1240 acceptable Claims. Additionally, appraisal policy may play a role in 1241 determining the acceptance of Claims. 1243 8.4. Attestation Results 1245 Attestation Results are the input used by the Relying Party to decide 1246 the extent to which it will trust a particular Attester and allow it 1247 to access some data or perform some operation. 1249 Attestation Results may carry a boolean value indicating compliance 1250 or non-compliance with a Verifier's appraisal policy, or may carry a 1251 richer set of Claims about the Attester, against which the Relying 1252 Party applies its Appraisal Policy for Attestation Results. 1254 The quality of the Attestation Results depends upon the ability of 1255 the Verifier to evaluate the Attester. Different Attesters have 1256 different _Strength of Function_ [strengthoffunction], which results 1257 in the Attestation Results being qualitatively different in strength. 1259 An Attestation Result that indicates non-compliance can be used by an 1260 Attester (in the passport model) or a Relying Party (in the 1261 background-check model) to indicate that the Attester should not be 1262 treated as authorized and may be in need of remediation. In some 1263 cases, it may even indicate that the Evidence itself cannot be 1264 authenticated as being correct. 1266 By default, the Relying Party does not believe the Attester to be 1267 compliant.
Upon receipt of an authentic Attestation Result and given 1268 that the Appraisal Policy for Attestation Results is satisfied, the 1269 Attester is allowed to perform the prescribed actions or obtain the prescribed access. The 1270 simplest such appraisal policy might authorize granting the Attester 1271 full access or control over the resources guarded by the Relying 1272 Party. A more complex appraisal policy might involve using the 1273 information provided in the Attestation Result to compare against 1274 expected values, or to apply complex analysis of other information 1275 contained in the Attestation Result. 1277 Thus, Attestation Results often need to include detailed information 1278 about the Attester, for use by Relying Parties, much like physical 1279 passports and driver's licenses include personal information such as 1280 name and date of birth. Unlike Evidence, which is often very device- 1281 and vendor-specific, Attestation Results can be vendor-neutral, if 1282 the Verifier has a way to generate vendor-agnostic information based 1283 on the appraisal of vendor-specific information in Evidence. This 1284 allows a Relying Party's appraisal policy to be simpler, potentially 1285 based on standard ways of expressing the information, while still 1286 allowing interoperability with heterogeneous devices. 1288 Finally, whereas Evidence is signed by the device (or indirectly by a 1289 manufacturer, if Endorsements are used), Attestation Results are 1290 signed by a Verifier, allowing a Relying Party to need a trust 1291 relationship with only one entity, rather than a larger set of entities, 1292 for purposes of its appraisal policy. 1294 8.5. Appraisal Policies 1296 The Verifier, when appraising Evidence, or the Relying Party, when 1297 appraising Attestation Results, checks the values of matched Claims 1298 against constraints specified in its appraisal policy.
Examples of 1299 such constraint checks include: 1301 * comparison for equality against a Reference Value, or 1303 * a check for being in a range bounded by Reference Values, or 1305 * membership in a set of Reference Values, or 1307 * a check against values in other Claims. 1309 Once all appraisal policy constraints have been checked, the remaining 1310 Claims are accepted as input toward determining Attestation Results, 1311 when appraising Evidence, or as input to a Relying Party, when 1312 appraising Attestation Results. 1314 9. Claims Encoding Formats 1316 The following diagram illustrates a relationship to which remote 1317 attestation is to be added: 1319 +-------------+ +------------+ Evaluate 1320 | |-------------->| | request 1321 | Attester | Access some | Relying | against 1322 | | resource | Party | security 1323 +-------------+ +------------+ policy 1325 Figure 8: Typical Resource Access 1327 In this diagram, the protocol between an Attester and a Relying Party 1328 can be any new or existing protocol (e.g., HTTP(S), CoAP(S), ROLIE 1329 [RFC8322], 802.1X, OPC UA [OPCUA], etc.), depending on the use case. 1331 Typically, such protocols already have mechanisms for passing 1332 security information for authentication and authorization purposes. 1333 Common formats include JWTs [RFC7519], CWTs [RFC8392], and X.509 1334 certificates. 1336 Retrofitting already deployed protocols with remote attestation 1337 requires adding RATS conceptual messages to the existing data flows. 1338 This must be done in a way that does not degrade the security 1339 properties of the systems involved and should use native extension 1340 mechanisms provided by the underlying protocol. For example, if a 1341 TLS handshake is to be extended with remote attestation capabilities, 1342 attestation Evidence may be embedded in an ad-hoc X.509 certificate 1343 extension (e.g., [TCG-DICE]), or into a new TLS Certificate Type 1344 (e.g., [I-D.tschofenig-tls-cwt]).
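To illustrate how one set of Claims can be rendered into the format a given protocol already carries, the sketch below maps a single, encoding-agnostic claims set (hypothetical field names) to a JWT-style JSON object and to a CWT-style map with integer labels; the integer labels shown are illustrative, not registered values:

```python
# Sketch of an encoding-agnostic "information model" for attestation
# Claims (hypothetical field names), rendered into two Claims formats.
# Real deployments would use claim keys from the JWT/CWT registries.

import json

info_model = {"issuer": "verifier.example", "compliant": True, "nonce": "abc123"}

def to_jwt_claims(m: dict) -> str:
    # A JWT carries its claims as a JSON object (shown unsigned here).
    return json.dumps({"iss": m["issuer"], "compliant": m["compliant"],
                       "nonce": m["nonce"]}, sort_keys=True)

def to_cwt_claims(m: dict) -> dict:
    # A CWT carries its claims as a CBOR map with integer labels; the
    # labels used below are illustrative placeholders only.
    return {1: m["issuer"], -70001: m["compliant"], 10: m["nonce"]}

print(to_jwt_claims(info_model))
print(to_cwt_claims(info_model))
```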
1346 Especially for constrained nodes there is a desire to minimize the 1347 amount of parsing code needed in a Relying Party, in order to both 1348 minimize footprint and to minimize the attack surface. While it 1349 would be possible to embed a CWT inside a JWT, or a JWT inside an 1350 X.509 extension, etc., there is a desire to encode the information 1351 natively in a format that is already supported by the Relying Party. 1353 This motivates having a common "information model" that describes the 1354 set of remote attestation related information in an encoding-agnostic 1355 way, and allowing multiple encoding formats (CWT, JWT, X.509, etc.) 1356 that encode the same information into the Claims format needed by the 1357 Relying Party. 1359 The following diagram illustrates that Evidence and Attestation 1360 Results might be expressed via multiple potential encoding formats, 1361 so that they can be conveyed by various existing protocols. It also 1362 motivates why the Verifier might also be responsible for accepting 1363 Evidence that encodes Claims in one format, while issuing Attestation 1364 Results that encode Claims in a different format. 1366 Evidence Attestation Results 1367 .--------------. CWT CWT .-------------------. 1368 | Attester-A |------------. .----------->| Relying Party V | 1369 '--------------' v | `-------------------' 1370 .--------------. JWT .------------. JWT .-------------------. 1371 | Attester-B |-------->| Verifier |-------->| Relying Party W | 1372 '--------------' | | `-------------------' 1373 .--------------. X.509 | | X.509 .-------------------. 1374 | Attester-C |-------->| |-------->| Relying Party X | 1375 '--------------' | | `-------------------' 1376 .--------------. TPM | | TPM .-------------------. 1377 | Attester-D |-------->| |-------->| Relying Party Y | 1378 '--------------' '------------' `-------------------' 1379 .--------------. other ^ | other .-------------------. 
1380 | Attester-E |------------' '----------->| Relying Party Z | 1381 '--------------' `-------------------' 1383 Figure 9: Multiple Attesters and Relying Parties with Different 1384 Formats 1386 10. Freshness 1388 A Verifier or Relying Party might need to learn the point in time 1389 (i.e., the "epoch") at which Evidence or an Attestation Result was 1390 produced. This is essential in deciding whether the included Claims 1391 and their values can be considered fresh, meaning that they still reflect 1392 the latest state of the Attester and that any Attestation Result was 1393 generated using the latest Appraisal Policy for Evidence. 1395 Freshness is assessed based on the Appraisal Policy for Evidence or 1396 Attestation Results that compares the estimated epoch against an 1397 "expiry" threshold defined locally to that policy. There is, 1398 however, always a race condition possible in that the state of the 1399 Attester and the appraisal policies might change immediately after 1400 the Evidence or Attestation Result was generated. The goal is merely 1401 to narrow their recentness to something the Verifier (for Evidence) 1402 or Relying Party (for Attestation Result) is willing to accept. Some 1403 flexibility on the freshness requirement is a key component for 1404 enabling caching and reuse of both Evidence and Attestation Results, 1405 which is especially valuable in cases where their computation uses a 1406 substantial part of the resource budget (e.g., energy in constrained 1407 devices). 1409 There are three common approaches for determining the epoch of 1410 Evidence or an Attestation Result. 1412 10.1. Explicit Timekeeping using Synchronized Clocks 1414 The first approach is to rely on synchronized and trustworthy clocks, 1415 and include a signed timestamp (see [I-D.birkholz-rats-tuda]) along 1416 with the Claims in the Evidence or Attestation Result.
Timestamps 1417 can also be added on a per-Claim basis to distinguish the time of 1418 generation of Evidence or Attestation Result from the time that a 1419 specific Claim was generated. The clock's trustworthiness can 1420 generally be established via Endorsements and typically requires 1421 additional Claims about the signer's time synchronization mechanism. 1423 In some use cases, however, a trustworthy clock might not be 1424 available. For example, in many Trusted Execution Environments 1425 (TEEs) today, a clock is only available outside the TEE and so cannot 1426 be trusted by the TEE. 1428 10.2. Implicit Timekeeping using Nonces 1430 A second approach places the onus of timekeeping solely on the 1431 Verifier (for Evidence) or the Relying Party (for Attestation 1432 Results), and might be suitable, for example, in cases where the Attester 1433 does not have a trustworthy clock or where time synchronization is 1434 otherwise impaired. In this approach, a non-predictable nonce is 1435 sent by the appraising entity, and the nonce is then signed and 1436 included along with the Claims in the Evidence or Attestation Result. 1437 After checking that the sent and received nonces are the same, the 1438 appraising entity knows that the Claims were signed after the nonce 1439 was generated. This allows associating a "rough" epoch with the 1440 Evidence or Attestation Result. In this case, the epoch is said to be 1441 rough because: 1443 * The epoch applies to the entire Claim set instead of a more 1444 granular association, and 1446 * The time between the creation of Claims and the collection of 1447 Claims is indistinguishable. 1449 10.3. Implicit Timekeeping using Epoch IDs 1451 A third approach relies on having epoch identifiers (or "IDs") 1452 periodically sent to both the sender and receiver of Evidence or 1453 Attestation Results by some "Epoch ID Distributor".
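A minimal sketch of the epoch-ID scheme described in this section, including the "epoch window" of recent IDs that, as elaborated below, an appraising entity keeps to absorb distribution delays. All class and method names are invented for illustration, and direct function calls stand in for real conveyance protocols:

```python
import secrets
from collections import deque

# Hypothetical sketch of Section 10.3; not from any RATS specification.

class EpochIDDistributor:
    """Periodically emits a fresh, opaque epoch ID to all participants."""
    def new_epoch(self):
        # Epoch IDs need not be monotonically increasing integers.
        return secrets.token_hex(8)

class AppraisingEntity:
    """Keeps an "epoch window" of the most recently received epoch IDs."""
    def __init__(self, window_depth=2):
        self.window = deque(maxlen=window_depth)

    def receive_epoch_id(self, epoch_id):
        self.window.append(epoch_id)

    def is_fresh(self, epoch_id_in_evidence):
        # Accept Evidence whose epoch ID falls inside the window; the
        # window depth absorbs propagation delay between participants.
        return epoch_id_in_evidence in self.window

distributor = EpochIDDistributor()
verifier = AppraisingEntity(window_depth=2)

e1 = distributor.new_epoch()
verifier.receive_epoch_id(e1)
print(verifier.is_fresh(e1))   # True: current epoch

e2 = distributor.new_epoch()
verifier.receive_epoch_id(e2)
print(verifier.is_fresh(e1))   # True: still within the epoch window
```

Only the distributor touches a clock; the appraising entity's check is pure set membership, which is what makes this approach workable without synchronized clocks.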
1455 Epoch IDs are different from nonces as they can be used more than 1456 once and can even be used by more than one entity at the same time. 1457 Epoch IDs are different from timestamps as they do not have to convey 1458 information about a point in time, i.e., they are not necessarily 1459 monotonically increasing integers. 1461 Like the nonce approach, this allows associating a "rough" epoch 1462 without requiring a trustworthy clock or time synchronization in 1463 order to generate or appraise the freshness of Evidence or 1464 Attestation Results. Only the Epoch ID Distributor requires access 1465 to a clock so it can periodically send new epoch IDs. 1467 The most recent epoch ID is included in the produced Evidence or 1468 Attestation Results, and the appraising entity can compare the epoch 1469 ID in received Evidence or Attestation Results against the latest 1470 epoch ID it received from the Epoch ID Distributor to determine if it 1471 is within the current epoch. An actual solution also needs to take 1472 into account race conditions when transitioning to a new epoch, such 1473 as by using a counter signed by the Epoch ID Distributor as the epoch 1474 ID, by including both the current and previous epoch IDs in 1475 messages and/or checks, by requiring retries in the case of mismatching 1476 epoch IDs, or by buffering incoming messages that might be associated 1477 with an epoch ID that the receiver has not yet obtained. 1479 More generally, in order to prevent an appraising entity from 1480 generating false negatives (e.g., discarding Evidence that is deemed 1481 stale even if it is not), the appraising entity should keep an "epoch 1482 window" consisting of the most recently received epoch IDs. The 1483 depth of such an epoch window is directly proportional to the maximum 1484 network propagation delay between the first entity to receive the epoch ID 1485 and the last, and it is inversely 1486 proportional to the epoch duration.
The appraising entity shall 1487 compare the epoch ID carried in the received Evidence or Attestation 1488 Result with the epoch IDs in its epoch window to find a suitable 1489 match. 1491 Whereas the nonce approach typically requires the appraising entity 1492 to keep state for each nonce generated, the epoch ID approach 1493 keeps the required state independent of the number of Attesters 1494 or Verifiers from which the appraising entity expects to receive Evidence or Attestation 1495 Results, as long as all use the same Epoch ID Distributor. 1497 10.4. Discussion 1499 Implicit and explicit timekeeping can be combined into hybrid 1500 mechanisms. For example, if clocks exist and are considered 1501 trustworthy but are not synchronized, a nonce-based exchange may be 1502 used to determine the (relative) time offset between the involved 1503 peers, followed by any number of timestamp-based exchanges. 1505 It is important to note that the actual values in Claims might have 1506 been generated long before the Claims are signed. If so, it is the 1507 signer's responsibility to ensure that the values are still correct 1508 when they are signed. For example, values generated at boot time 1509 might have been saved to secure storage until network connectivity is 1510 established to the remote Verifier and a nonce is obtained. 1512 A more detailed discussion with examples appears in Section 16. 1514 For a discussion on the security of epoch IDs, see Section 12.3. 1516 11. Privacy Considerations 1518 The conveyance of Evidence and the resulting Attestation Results 1519 reveal a great deal of information about the internal state of a 1520 device as well as potentially any users of the device. In many 1521 cases, the whole point of attestation procedures is to provide 1522 reliable information about the type of the device and the firmware/ 1523 software that the device is running. This information might be 1524 particularly interesting to many attackers.
For example, knowing 1525 that a device is running a weak version of firmware provides a way to 1526 aim attacks better. 1528 Many Claims in Evidence and Attestation Results are potentially 1529 Personally Identifying Information (PII) depending on the end-to-end 1530 use case of the remote attestation procedure. Remote attestation 1531 that extends up to containers and applications, e.g., on a blood 1532 pressure monitor, may further reveal details about specific systems 1533 or users. 1535 In some cases, an attacker may be able to make inferences about the 1536 contents of Evidence from the resulting effects or timing of the 1537 processing. For example, an attacker might be able to infer the 1538 value of specific Claims if it knew that only certain values were 1539 accepted by the Relying Party. 1541 Evidence and Attestation Results are expected to be integrity 1542 protected (i.e., either via signing or a secure channel) and 1543 optionally might be confidentiality protected via encryption. If 1544 confidentiality protection of the conceptual messages is 1545 omitted or unavailable, the protecting protocols that convey Evidence 1546 or Attestation Results are responsible for detailing what kinds of 1547 information are disclosed, and to whom they are exposed. 1549 As Evidence might contain sensitive or confidential information, 1550 Attesters are responsible for only sending such Evidence to trusted 1551 Verifiers. Some Attesters might want a stronger level of assurance 1552 of the trustworthiness of a Verifier before sending Evidence to it. 1553 In such cases, an Attester can first act as a Relying Party and ask 1554 for the Verifier's own Attestation Result, and appraise it just as 1555 a Relying Party would appraise an Attestation Result for any other 1556 purpose. 1558 Another approach to deal with Evidence is to remove PII from the 1559 Evidence while still being able to verify that the Attester is one of 1560 a large set.
This approach is often called "Direct Anonymous 1561 Attestation". See [CCC-DeepDive] section 6.2 for more discussion. 1563 12. Security Considerations 1565 This document provides an architecture for doing remote attestation. 1566 No specific wire protocol is documented here. Without a specific 1567 proposal to compare against, it is impossible to know if the security 1568 threats listed below have been mitigated well. The security 1569 considerations below should be read as being essentially requirements 1570 against realizations of the RATS Architecture. Some threats apply to 1571 protocols, some are against implementations (code), and some threats 1572 are against physical infrastructure (such as factories). 1574 12.1. Attester and Attestation Key Protection 1576 Implementers need to pay close attention to the protection of the 1577 Attester and the manufacturing processes for provisioning attestation 1578 key material. If either of these is compromised, intended levels of 1579 assurance for RATS are compromised because attackers can forge 1580 Evidence or manipulate the Attesting Environment. For example, a 1581 Target Environment should not be able to tamper with the Attesting 1582 Environment that measures it; this can be prevented by isolating the two environments from 1583 each other in some way. 1585 Remote attestation applies to use cases with a range of security 1586 requirements, so the protections discussed here range from low to 1587 high security, where low security may be limited to application or 1588 process isolation by the device's operating system, and high security 1589 may involve specialized hardware to defend against physical attacks 1590 on a chip. 1592 12.1.1.
On-Device Attester and Key Protection 1594 It is assumed that an Attesting Environment is sufficiently isolated 1595 from the Target Environment it collects Claims about and that it 1596 signs the resulting Claims set with an attestation key, so that the 1597 Target Environment cannot forge Evidence about itself. Such an 1598 isolated environment might be provided by a process, a dedicated 1599 chip, a TEE, a virtual machine, or another secure mode of operation. 1600 The Attesting Environment must be protected from unauthorized 1601 modification to ensure it behaves correctly. Confidentiality 1602 protection of the Attesting Environment's signing key is vital so it 1603 cannot be misused to forge Evidence. 1605 In many cases, the user or owner of a device that includes the role of 1606 Attester must not be able to modify or extract keys from the 1607 Attesting Environments, to prevent creating forged Evidence. Some 1608 common examples include the user of a mobile phone or FIDO 1609 authenticator. An essential value-add provided by RATS is for the 1610 Relying Party to be able to trust the Attester even if the user or 1611 owner is not trusted. 1613 Measures for a minimally protected system might include process or 1614 application isolation provided by a high-level operating system, and 1615 restricted access to root or system privileges. In contrast, for 1616 really simple single-use devices that don't use a protected-mode 1617 operating system, like a Bluetooth speaker, the only factual 1618 isolation might be the sturdy housing of the device. 1620 Measures for a moderately protected system could include a special 1621 restricted operating environment, such as a TEE. In this case, only 1622 security-oriented software has access to the Attester and key 1623 material.
1625 Measures for a highly protected system could include specialized 1626 hardware that is used to provide protection against chip decapping 1627 attacks, power supply and clock glitching, fault injection, and RF 1628 and power side-channel attacks. 1630 12.1.2. Attestation Key Provisioning Processes 1632 Attestation key provisioning is the process that occurs in the 1633 factory or elsewhere to establish signing key material on the device 1634 and the validation key material off the device. Sometimes this 1635 procedure is referred to as personalization or customization. 1637 12.1.2.1. Off-Device Key Generation 1639 One way to provision key material is to first generate it external to 1640 the device and then copy the key onto the device. In this case, 1641 confidentiality protection of the generator, as well as for the path 1642 over which the key is provisioned, is necessary. The manufacturer 1643 needs to take care to protect corresponding key material with 1644 measures appropriate for its value. 1646 The degree of protection afforded to this key material can vary by 1647 device, based upon a cost/benefit evaluation of 1648 the intended function of the device. The confidentiality protection 1649 is fundamentally based upon some amount of physical protection: while 1650 encryption is often used to provide confidentiality while a key is 1651 conveyed across a factory, the attestation key must be available in an 1652 unencrypted form wherever it is created or applied. The physical 1653 protection can therefore vary from situations where the key is 1654 unencrypted only within carefully controlled secure enclaves within 1655 silicon, to situations where an entire facility is considered secure, 1656 by the simple means of locked doors and limited access. 1658 The cryptography that is used to enable confidentiality protection of 1659 the attestation key comes with its own requirements to be secured.
1660 This results in recursive problems, as the key material used to 1661 provision attestation keys must again somehow have been provisioned 1662 securely beforehand (requiring an additional level of protection, and 1663 so on). 1665 This is why, in general, a combination of some physical security 1666 measures and some cryptographic measures is used to establish 1667 confidentiality protection. 1669 12.1.2.2. On-Device Key Generation 1671 When key material is generated within a device and the secret part of 1672 it never leaves the device, then the problem may lessen. For public- 1673 key cryptography, it is, by definition, not necessary to maintain 1674 confidentiality of the public key; however, integrity of the chain of 1675 custody of the public key is necessary in order to avoid attacks 1676 where an attacker is able to get a key they control endorsed. 1678 To summarize: attestation key provisioning must ensure that only 1679 valid attestation key material is established in Attesters. 1681 12.2. Integrity Protection 1683 Any solution that conveys information used for security purposes, 1684 whether such information is in the form of Evidence, Attestation 1685 Results, Endorsements, or appraisal policy, must support end-to-end 1686 integrity protection and replay attack prevention, and often also 1687 needs to support additional security properties, including: 1689 * end-to-end encryption, 1691 * denial of service protection, 1693 * authentication, 1695 * auditing, 1697 * fine-grained access controls, and 1699 * logging. 1701 Section 10 discusses ways in which freshness can be used in this 1702 architecture to protect against replay attacks. 1704 To assess the security provided by a particular appraisal policy, it 1705 is important to understand the strength of the root of trust, e.g., 1706 whether it is mutable software, or firmware that is read-only after 1707 boot, or immutable hardware/ROM.
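As a toy illustration of the replay-attack prevention that Section 12.2 requires, the sketch below combines the nonce approach of Section 10.2 with an integrity check over the signed Claims. The shared HMAC key is a deliberate simplification standing in for the Attester's attestation key (a real Attester would use an asymmetric signature), and all function names are invented:

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical sketch only; a shared HMAC key stands in for the
# Attester's attestation key and its verification counterpart.
KEY = secrets.token_bytes(32)

def produce_evidence(claims, nonce):
    # The nonce is signed together with the Claims, binding the
    # Evidence to this particular challenge.
    body = json.dumps({"claims": claims, "nonce": nonce},
                      sort_keys=True).encode()
    tag = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def appraise(evidence, expected_nonce):
    # Integrity: recompute the tag over the received body.
    good_tag = hmac.new(KEY, evidence["body"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(good_tag, evidence["tag"]):
        return False
    # Replay prevention: the signed nonce must match the one sent.
    return json.loads(evidence["body"])["nonce"] == expected_nonce

nonce = secrets.token_hex(16)
ev = produce_evidence({"fw-version": 7}, nonce)
print(appraise(ev, nonce))             # True: fresh, intact Evidence
print(appraise(ev, "different-nonce"))  # False: replayed Evidence
```

Replaying `ev` against a later challenge fails because the old nonce is fixed under the signature, and any tampering with the body invalidates the tag.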
1709 It is also important that the appraisal policy was itself obtained 1710 securely. If an attacker can configure appraisal policies for a 1711 Relying Party or for a Verifier, then integrity of the process is 1712 compromised. 1714 Security protections in RATS may be applied at different layers, 1715 whether by a conveyance protocol or an information encoding format. 1716 This architecture expects conceptual messages (see Section 8) to be 1717 end-to-end protected based on the role interaction context. For 1718 example, if an Attester produces Evidence that is relayed through 1719 some other entity that doesn't implement the Attester or the intended 1720 Verifier roles, then the relaying entity should not expect to have 1721 access to the Evidence. 1723 12.3. Epoch ID-based Attestation 1725 Epoch IDs, described in Section 10.3, can be tampered with, replayed, 1726 dropped, delayed, and reordered by an attacker. 1728 An attacker could be either external or belong to the distribution 1729 group, for example, if one of the Attester entities has been 1730 compromised. 1732 An attacker who is able to tamper with epoch IDs can potentially lock 1733 all the participants into a certain epoch of its choice forever, 1734 effectively freezing time. This is problematic since it destroys the 1735 ability to ascertain freshness of Evidence and Attestation Results. 1737 To mitigate this threat, the transport should be at least integrity 1738 protected and provide origin authentication. 1740 Selective dropping of epoch IDs is equivalent to pinning the victim 1741 node to a past epoch. An attacker could drop epoch IDs to only some 1742 entities and not others, which will typically result in a denial of 1743 service due to the permanent staleness of the Attestation Result or 1744 Evidence. 1746 Delaying or reordering epoch IDs is equivalent to manipulating the 1747 victim's timeline at will.
This ability could be used by a malicious 1748 actor (e.g., a compromised router) to mount a confusion attack where, 1749 for example, a Verifier is tricked into accepting Evidence coming 1750 from a past epoch as fresh, while in the meantime the Attester has 1751 been compromised. 1753 Reordering and dropping attacks are mitigated if the transport 1754 provides the ability to detect reordering and drops. However, the 1755 delay attack described above cannot be thwarted in this manner. 1757 12.4. Trust Anchor Protection 1759 As noted in Section 7, Verifiers and Relying Parties have trust 1760 anchor stores that must be secured. Specifically, a trust anchor 1761 store must resist unauthorized insertion, 1762 deletion, and modification. 1764 If certificates are used as trust anchors, Verifiers and Relying 1765 Parties are also responsible for validating the entire certificate 1766 path up to the trust anchor, which includes checking for certificate 1767 revocation. See Section 6 of [RFC5280] for details. 1769 13. IANA Considerations 1771 This document does not require any actions by IANA. 1773 14. Acknowledgments 1775 Special thanks go to Joerg Borchert, Nancy Cam-Winget, Jessica 1776 Fitzgerald-McKay, Diego Lopez, Laurence Lundblade, Paul Rowe, Hannes 1777 Tschofenig, Frank Xia, and David Wooten. 1779 15. Notable Contributions 1781 Thomas Hardjono created initial versions of the terminology section 1782 in collaboration with Ned Smith. Eric Voit provided the conceptual 1783 separation between Attestation Provision Flows and Attestation 1784 Evidence Flows. Monty Wiseman created the content structure of the 1785 first three architecture drafts. Carsten Bormann provided many of 1786 the motivational building blocks with respect to the Internet Threat 1787 Model. 1789 16.
Appendix A: Time Considerations 1791 Section 10 discussed various issues and requirements around freshness 1792 of evidence, and summarized three approaches that might be used by 1793 different solutions to address them. This appendix provides more 1794 details with examples to help illustrate potential approaches, to 1795 inform those creating specific solutions. 1797 The table below defines a number of relevant events, with an ID that 1798 is used in subsequent diagrams. The times of said events might be 1799 defined in terms of an absolute clock time, such as the Coordinated 1800 Universal Time timescale, or might be defined relative to some other 1801 timestamp or timeticks counter, such as a clock resetting its epoch 1802 each time it is powered on. 1804 +====+============+=================================================+ 1805 | ID | Event | Explanation of event | 1806 +====+============+=================================================+ 1807 | VG | Value | A value to appear in a Claim was created. | 1808 | | generated | In some cases, a value may have technically | 1809 | | | existed before an Attester became aware of | 1810 | | | it but the Attester might have no idea how | 1811 | | | long it has had that value. In such a | 1812 | | | case, the Value created time is the time at | 1813 | | | which the Claim containing the copy of the | 1814 | | | value was created. | 1815 +----+------------+-------------------------------------------------+ 1816 | NS | Nonce sent | A nonce not predictable to an Attester | 1817 | | | (recentness & uniqueness) is sent to an | 1818 | | | Attester. | 1819 +----+------------+-------------------------------------------------+ 1820 | NR | Nonce | A nonce is relayed to an Attester by | 1821 | | relayed | another entity. | 1822 +----+------------+-------------------------------------------------+ 1823 | IR | Epoch ID | An epoch ID is successfully received and | 1824 | | received | processed by an entity. 
| 1825 +----+------------+-------------------------------------------------+ 1826 | EG | Evidence | An Attester creates Evidence from collected | 1827 | | generation | Claims. | 1828 +----+------------+-------------------------------------------------+ 1829 | ER | Evidence | A Relying Party relays Evidence to a | 1830 | | relayed | Verifier. | 1831 +----+------------+-------------------------------------------------+ 1832 | RG | Result | A Verifier appraises Evidence and generates | 1833 | | generation | an Attestation Result. | 1834 +----+------------+-------------------------------------------------+ 1835 | RR | Result | An Attestation Result is relayed to a | 1836 | | relayed | Relying Party. | 1837 +----+------------+-------------------------------------------------+ 1838 | RA | Result | The Relying Party appraises Attestation | 1839 | | appraised | Results. | 1840 +----+------------+-------------------------------------------------+ 1841 | OP | Operation | The Relying Party performs some operation | 1842 | | performed | requested by the Attester via a resource | 1843 | | | access protocol as depicted in Figure 8, | 1844 | | | e.g., across a session created earlier at | 1845 | | | time(RA). | 1846 +----+------------+-------------------------------------------------+ 1847 | RX | Result | An Attestation Result should no longer be | 1848 | | expiry | accepted, according to the Verifier that | 1849 | | | generated it. | 1850 +----+------------+-------------------------------------------------+ 1852 Table 1 1854 Using the table above, a number of hypothetical examples of how a 1855 solution might be built are illustrated below. This list is not 1856 intended to be complete, but is just representative enough to 1857 highlight various timing considerations. 1859 All times are relative to the local clocks, indicated by an "_a" 1860 (Attester), "_v" (Verifier), or "_r" (Relying Party) suffix.
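The timestamp-based examples in this appendix reduce to simple clock arithmetic over the events named in Table 1. A minimal sketch, assuming the roughly synchronized clocks of the timestamp-based models and an arbitrarily chosen threshold:

```python
# Hypothetical sketch of the freshness checks used by the
# timestamp-based examples below. Event names follow Table 1;
# the threshold value is invented for illustration.

THRESHOLD = 60.0  # seconds; set by the relevant appraisal policy

def evidence_is_fresh(time_eg_a, time_rg_v, threshold=THRESHOLD):
    # Verifier check at time(RG_v):
    #   time(RG_v) - time(EG_a) < Threshold
    return (time_rg_v - time_eg_a) < threshold

def result_is_fresh(time_rg_v, time_ra_r, threshold=THRESHOLD):
    # Relying Party check at time(RA_r):
    #   time(RA_r) - time(RG_v) < Threshold
    return (time_ra_r - time_rg_v) < threshold

print(evidence_is_fresh(time_eg_a=1000.0, time_rg_v=1030.0))  # True
print(result_is_fresh(time_rg_v=1030.0, time_ra_r=1200.0))    # False
```

In practice the threshold must also absorb the maximum permitted clock skew between the two parties, as the examples below spell out.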
1862 Times with an appended Prime (') indicate a second instance of the 1863 same event. 1865 How and if clocks are synchronized depends upon the model. 1867 In the figures below, curly braces indicate containment. For 1868 example, the notation Evidence{foo} indicates that 'foo' is contained 1869 in the Evidence and is thus covered by its signature. 1871 16.1. Example 1: Timestamp-based Passport Model Example 1873 The following example illustrates a hypothetical Passport Model 1874 solution that uses timestamps and requires roughly synchronized 1875 clocks between the Attester, Verifier, and Relying Party, which 1876 depends on using a secure clock synchronization mechanism. As a 1877 result, the receiver of a conceptual message containing a timestamp 1878 can directly compare it to its own clock and timestamps. 1880 .----------. .----------. .---------------. 1881 | Attester | | Verifier | | Relying Party | 1882 '----------' '----------' '---------------' 1883 time(VG_a) | | 1884 | | | 1885 ~ ~ ~ 1886 | | | 1887 time(EG_a) | | 1888 |------Evidence{time(EG_a)}------>| | 1889 | time(RG_v) | 1890 |<-----Attestation Result---------| | 1891 | {time(RG_v),time(RX_v)} | | 1892 ~ ~ 1893 | | 1894 |----Attestation Result{time(RG_v),time(RX_v)}-->time(RA_r) 1895 | | 1896 ~ ~ 1897 | | 1898 | time(OP_r) 1900 The Verifier can check whether the Evidence is fresh when appraising 1901 it at time(RG_v) by checking "time(RG_v) - time(EG_a) < Threshold", 1902 where the Verifier's threshold is large enough to account for the 1903 maximum permitted clock skew between the Verifier and the Attester. 1905 If time(VG_a) is also included in the Evidence along with the Claim 1906 value generated at that time, and the Verifier decides that it can 1907 trust the time(VG_a) value, the Verifier can also determine whether 1908 the Claim value is recent by checking "time(RG_v) - time(VG_a) < 1909 Threshold". 
The threshold is decided by the Appraisal Policy for 1910 Evidence, and again needs to take into account the maximum permitted 1911 clock skew between the Verifier and the Attester. 1913 The Relying Party can check whether the Attestation Result is fresh 1914 when appraising it at time(RA_r) by checking "time(RA_r) - time(RG_v) 1915 < Threshold", where the Relying Party's threshold is large enough to 1916 account for the maximum permitted clock skew between the Relying 1917 Party and the Verifier. The result might then be used for some time 1918 (e.g., throughout the lifetime of a connection established at 1919 time(RA_r)). The Relying Party must be careful, however, to not 1920 allow continued use beyond the period for which it deems the 1921 Attestation Result to remain fresh enough. Thus, it might allow use 1922 (at time(OP_r)) as long as "time(OP_r) - time(RG_v) < Threshold". 1923 However, if the Attestation Result contains an expiry time time(RX_v) 1924 then it could explicitly check "time(OP_r) < time(RX_v)". 1926 16.2. Example 2: Nonce-based Passport Model Example 1928 The following example illustrates a hypothetical Passport Model 1929 solution that uses nonces instead of timestamps. Compared to the 1930 timestamp-based example, it requires an extra round trip to retrieve 1931 a nonce, and requires that the Verifier and Relying Party track state 1932 to remember the nonce for some period of time. 1934 The advantage is that it does not require that any clocks are 1935 synchronized. As a result, the receiver of a conceptual message 1936 containing a timestamp cannot directly compare it to its own clock or 1937 timestamps. Thus we use a suffix ("a" for Attester, "v" for 1938 Verifier, and "r" for Relying Party) on the IDs below indicating 1939 which clock generated them, since times from different clocks cannot 1940 be compared. Only the delta between two events from the sender can 1941 be used by the receiver. 1943 .----------. .----------. .---------------. 
1944 | Attester | | Verifier | | Relying Party | 1945 '----------' '----------' '---------------' 1946 time(VG_a) | | 1947 | | | 1948 ~ ~ ~ 1949 | | | 1950 |<--Nonce1---------------------time(NS_v) | 1951 time(EG_a) | | 1952 |---Evidence--------------------->| | 1953 | {Nonce1, time(EG_a)-time(VG_a)} | | 1954 | time(RG_v) | 1955 |<--Attestation Result------------| | 1956 | {time(RX_v)-time(RG_v)} | | 1957 ~ ~ 1958 | | 1959 |<--Nonce2-------------------------------------time(NS_r) 1960 time(RR_a) | 1961 |--[Attestation Result{time(RX_v)-time(RG_v)}, -->|time(RA_r) 1962 | Nonce2, time(RR_a)-time(EG_a)] | 1963 ~ ~ 1964 | | 1965 | time(OP_r) 1967 In this example solution, the Verifier can check whether the Evidence 1968 is fresh at "time(RG_v)" by verifying that "time(RG_v)-time(NS_v) < 1969 Threshold". 1971 The Verifier cannot, however, simply rely on a Nonce to determine 1972 whether the value of a Claim is recent, since the Claim value might 1973 have been generated long before the nonce was sent by the Verifier. 1974 However, if the Verifier decides that the Attester can be trusted to 1975 correctly provide the delta "time(EG_a)-time(VG_a)", then it can 1976 determine recency by checking "time(RG_v)-time(NS_v) + time(EG_a)- 1977 time(VG_a) < Threshold". 1979 Similarly if, based on an Attestation Result from a Verifier it 1980 trusts, the Relying Party decides that the Attester can be trusted to 1981 correctly provide time deltas, then it can determine whether the 1982 Attestation Result is fresh by checking "time(OP_r)-time(NS_r) + 1983 time(RR_a)-time(EG_a) < Threshold". Although the Nonce2 and 1984 "time(RR_a)-time(EG_a)" values cannot be inside the Attestation 1985 Result, they might be signed by the Attester such that the 1986 Attestation Result vouches for the Attester's signing capability. 1988 The Relying Party must still be careful, however, to not allow 1989 continued use beyond the period for which it deems the Attestation 1990 Result to remain valid. 
Thus, if the Attestation Result sends a validity lifetime in terms of
"time(RX_v)-time(RG_v)", then the Relying Party can check
"time(OP_r)-time(NS_r) < time(RX_v)-time(RG_v)".

16.3.  Example 3: Epoch ID-based Passport Model Example

The example in Figure 10 illustrates a hypothetical Passport Model
solution that uses epoch IDs instead of nonces or timestamps.

The Epoch ID Distributor broadcasts epoch ID "I", which starts a new
epoch "E" for a protocol participant upon reception at "time(IR)".

The Attester generates Evidence incorporating epoch ID "I" and
conveys it to the Verifier.

The Verifier appraises that the received epoch ID "I" is "fresh"
according to the definition provided in Section 10.3, whereby retries
are required in the case of mismatching epoch IDs, and generates an
Attestation Result.  The Attestation Result is conveyed to the
Attester.

After the transmission of epoch ID "I'", a new epoch "E'" is
established when "I'" is received by each protocol participant.  The
Attester relays the Attestation Result obtained during epoch "E"
(associated with epoch ID "I") to the Relying Party using the epoch
ID for the current epoch, "I'".  If the Relying Party had not yet
received "I'", then the Attestation Result would be rejected, but in
this example, it is received.

In the illustrated scenario, the epoch ID for relaying an Attestation
Result to the Relying Party is current, while a previous epoch ID was
used to generate the Verifier-evaluated Evidence.  This indicates
that at least one epoch transition has occurred, and the Attestation
Results may only be as fresh as the previous epoch.
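The epoch-ID bookkeeping a recipient needs for this scheme might be
sketched as follows (an illustrative sketch only; the class and
method names, and the single-remembered-previous-epoch window, are
assumptions based on the epoch window discussed in Section 10.3):

```python
# Hypothetical epoch-ID tracker for a protocol participant (e.g., a
# Relying Party).  It remembers the current epoch ID and, for a
# limited window after a transition, the previous one.

class EpochTracker:
    def __init__(self, window_seconds: float):
        self.window = window_seconds  # how long the previous epoch
                                      # ID remains acceptable
        self.current = None           # most recently received epoch ID
        self.previous = None          # epoch ID before the transition
        self.transition_at = 0.0      # local time of last transition

    def receive_epoch_id(self, epoch_id: str, now: float) -> None:
        """Called when the Epoch ID Distributor's broadcast arrives."""
        self.previous = self.current
        self.current = epoch_id
        self.transition_at = now

    def is_fresh(self, epoch_id: str, now: float) -> bool:
        """An epoch ID is fresh if it is the current one, or the
        previous one while still inside the epoch window."""
        if epoch_id == self.current:
            return True
        if epoch_id == self.previous:
            return now - self.transition_at < self.window
        return False
```

With such a tracker, a message carrying the old epoch ID "I" that
arrives shortly after "I'" was received would still be accepted,
while the same message arriving after the window expires would be
rejected as stale.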
If the Relying Party remembers the previous epoch ID "I" during an
epoch window as discussed in Section 10.3, and the message is
received during that window, the Attestation Result is accepted as
fresh, and otherwise it is rejected as stale.

                       .-------------.
   .----------.        |  Epoch ID   |       .----------.  .---------------.
   | Attester |        | Distributor |       | Verifier |  | Relying Party |
   '----------'        '-------------'       '----------'  '---------------'
   time(VG_a)                 |                    |               |
        |                     |                    |               |
        ~                     ~                    ~               ~
        |                     |                    |               |
   time(IR_a)<------I--+--I--------time(IR_v)----->time(IR_r)
        |                     |                    |               |
   time(EG_a)                 |                    |               |
        |---Evidence--------------------->|                        |
        |  {I,time(EG_a)-time(VG_a)}      |                        |
        |                     |           |                        |
        |                     |      time(RG_v)                    |
        |<--Attestation Result------------|                        |
        |  {I,time(RX_v)-time(RG_v)}      |                        |
        |                     |           |                        |
   time(IR'_a)<-----I'-+--I'-------time(IR'_v)---->time(IR'_r)
        |                     |                    |               |
        |---[Attestation Result--------------------->time(RA_r)
        |    {I,time(RX_v)-time(RG_v)},I']         |               |
        |                     |                    |               |
        ~                     ~                    ~               ~
        |                     |                    |               |
        |                     |                    |           time(OP_r)

               Figure 10: Epoch ID-based Passport Model

16.4.  Example 4: Timestamp-based Background-Check Model Example

The following example illustrates a hypothetical Background-Check
Model solution that uses timestamps and requires roughly synchronized
clocks between the Attester, Verifier, and Relying Party.

   .----------.     .---------------.     .----------.
   | Attester |     | Relying Party |     | Verifier |
   '----------'     '---------------'     '----------'
   time(VG_a)              |                   |
        |                  |                   |
        ~                  ~                   ~
        |                  |                   |
   time(EG_a)              |                   |
        |----Evidence------->|                 |
        |    {time(EG_a)} time(ER_r)--Evidence{time(EG_a)}->|
        |                  |               time(RG_v)
        |            time(RA_r)<-Attestation Result---|
        |                  |     {time(RX_v)}         |
        ~                  ~                   ~
        |                  |                   |
        |             time(OP_r)               |

The time considerations in this example are equivalent to those
discussed under Example 1 above.

16.5.  Example 5: Nonce-based Background-Check Model Example

The following example illustrates a hypothetical Background-Check
Model solution that uses nonces and thus does not require that any
clocks are synchronized.  In this example solution, a nonce is
generated by a Verifier at the request of a Relying Party, when the
Relying Party needs to send one to an Attester.

   .----------.     .---------------.     .----------.
   | Attester |     | Relying Party |     | Verifier |
   '----------'     '---------------'     '----------'
   time(VG_a)              |                   |
        |                  |                   |
        ~                  ~                   ~
        |                  |                   |
        |                  |<-------Nonce-----------time(NS_v)
        |<---Nonce-----------time(NR_r)        |
   time(EG_a)              |                   |
        |----Evidence{Nonce}--->|              |
        |            time(ER_r)--Evidence{Nonce}--->|
        |                  |               time(RG_v)
        |            time(RA_r)<-Attestation Result-|
        |                  | {time(RX_v)-time(RG_v)}|
        ~                  ~                   ~
        |                  |                   |
        |             time(OP_r)               |

The Verifier can check whether the Evidence is fresh, and whether a
Claim value is recent, the same as in Example 2 above.

However, unlike in Example 2, the Relying Party can use the Nonce to
determine whether the Attestation Result is fresh, by verifying that
"time(OP_r)-time(NR_r) < Threshold".

The Relying Party must still be careful, however, to not allow
continued use beyond the period for which it deems the Attestation
Result to remain valid.  Thus, if the Attestation Result sends a
validity lifetime in terms of "time(RX_v)-time(RG_v)", then the
Relying Party can check "time(OP_r)-time(ER_r) <
time(RX_v)-time(RG_v)".

17.  References

17.1.  Normative References

   [RFC5280]  Cooper, D., Santesson, S., Farrell, S., Boeyen, S.,
              Housley, R., and W. Polk, "Internet X.509 Public Key
              Infrastructure Certificate and Certificate Revocation
              List (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280,
              May 2008, <https://www.rfc-editor.org/info/rfc5280>.

   [RFC7519]  Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token
              (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015,
              <https://www.rfc-editor.org/info/rfc7519>.

   [RFC8392]  Jones, M., Wahlstroem, E., Erdtman, S., and H.
              Tschofenig, "CBOR Web Token (CWT)", RFC 8392,
              DOI 10.17487/RFC8392, May 2018,
              <https://www.rfc-editor.org/info/rfc8392>.

17.2.  Informative References

   [CCC-DeepDive]
              Confidential Computing Consortium, "Confidential
              Computing Deep Dive", n.d.

   [CTAP]     FIDO Alliance, "Client to Authenticator Protocol", n.d.

   [I-D.birkholz-rats-tuda]
              Fuchs, A., Birkholz, H., McDonald, I. E., and C.
              Bormann, "Time-Based Uni-Directional Attestation", Work
              in Progress, Internet-Draft, draft-birkholz-rats-tuda-
              04, 13 January 2021.

   [I-D.birkholz-rats-uccs]
              Birkholz, H., O'Donoghue, J., Cam-Winget, N., and C.
              Bormann, "A CBOR Tag for Unprotected CWT Claims Sets",
              Work in Progress, Internet-Draft, draft-birkholz-rats-
              uccs-03, 8 March 2021.

   [I-D.ietf-teep-architecture]
              Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler,
              "Trusted Execution Environment Provisioning (TEEP)
              Architecture", Work in Progress, Internet-Draft, draft-
              ietf-teep-architecture-14, 22 February 2021.

   [I-D.tschofenig-tls-cwt]
              Tschofenig, H. and M. Brossard, "Using CBOR Web Tokens
              (CWTs) in Transport Layer Security (TLS) and Datagram
              Transport Layer Security (DTLS)", Work in Progress,
              Internet-Draft, draft-tschofenig-tls-cwt-02, 13 July
              2020.

   [OPCUA]    OPC Foundation, "OPC Unified Architecture Specification,
              Part 2: Security Model, Release 1.03", OPC 10000-2,
              25 November 2015.

   [RFC4949]  Shirey, R., "Internet Security Glossary, Version 2",
              FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007,
              <https://www.rfc-editor.org/info/rfc4949>.

   [RFC5209]  Sangster, P., Khosravi, H., Mani, M., Narayan, K., and
              J. Tardo, "Network Endpoint Assessment (NEA): Overview
              and Requirements", RFC 5209, DOI 10.17487/RFC5209,
              June 2008, <https://www.rfc-editor.org/info/rfc5209>.
   [RFC6024]  Reddy, R. and C. Wallace, "Trust Anchor Management
              Requirements", RFC 6024, DOI 10.17487/RFC6024, October
              2010, <https://www.rfc-editor.org/info/rfc6024>.

   [RFC8322]  Field, J., Banghart, S., and D. Waltermire, "Resource-
              Oriented Lightweight Information Exchange (ROLIE)",
              RFC 8322, DOI 10.17487/RFC8322, February 2018,
              <https://www.rfc-editor.org/info/rfc8322>.

   [strengthoffunction]
              NISC, "Strength of Function", n.d.

   [TCG-DICE] Trusted Computing Group, "DICE Certificate Profiles",
              n.d.

   [TCGarch]  Trusted Computing Group, "Trusted Platform Module
              Library - Part 1: Architecture", 8 November 2019.

   [WebAuthN] W3C, "Web Authentication: An API for accessing Public
              Key Credentials", n.d.

Contributors

   Monty Wiseman
   Email: montywiseman32@gmail.com

   Liang Xia
   Email: frank.xialiang@huawei.com

   Laurence Lundblade
   Email: lgl@island-resort.com

   Eliot Lear
   Email: elear@cisco.com

   Jessica Fitzgerald-McKay

   Sarah C. Helbe

   Andrew Guinn

   Peter Loscocco
   Email: pete.loscocco@gmail.com

   Eric Voit

   Thomas Fossati
   Email: thomas.fossati@arm.com

   Paul Rowe

   Carsten Bormann
   Email: cabo@tzi.org

   Giri Mandyam
   Email: mandyam@qti.qualcomm.com

   Kathleen Moriarty
   Email: kathleen.moriarty.ietf@gmail.com

   Guy Fedorkow
   Email: gfedorkow@juniper.net

   Simon Frost
   Email: Simon.Frost@arm.com

Authors' Addresses

   Henk Birkholz
   Fraunhofer SIT
   Rheinstrasse 75
   64295 Darmstadt
   Germany

   Email: henk.birkholz@sit.fraunhofer.de

   Dave Thaler
   Microsoft
   United States of America

   Email: dthaler@microsoft.com

   Michael Richardson
   Sandelman Software Works
   Canada

   Email: mcr+ietf@sandelman.ca

   Ned Smith
   Intel Corporation
   United States of America

   Email: ned.smith@intel.com

   Wei Pan
   Huawei Technologies

   Email: william.panwei@huawei.com