2 RATS Working Group H. Birkholz 3 Internet-Draft Fraunhofer SIT 4 Intended status: Informational D. Thaler 5 Expires: 11 January 2021 Microsoft 6 M. Richardson 7 Sandelman Software Works 8 N. Smith 9 Intel 10 W.
Pan 11 Huawei Technologies 12 10 July 2020 14 Remote Attestation Procedures Architecture 15 draft-ietf-rats-architecture-05 17 Abstract 19 In network protocol exchanges, it is often the case that one entity 20 (a Relying Party) requires evidence about a remote peer to assess the 21 peer's trustworthiness, and a way to appraise such evidence. The 22 evidence is typically a set of claims about its software and hardware 23 platform. This document describes an architecture for such remote 24 attestation procedures (RATS). 26 Note to Readers 28 Discussion of this document takes place on the RATS Working Group 29 mailing list (rats@ietf.org), which is archived at 30 https://mailarchive.ietf.org/arch/browse/rats/ 31 (https://mailarchive.ietf.org/arch/browse/rats/). 33 Source for this draft and an issue tracker can be found at 34 https://github.com/ietf-rats-wg/architecture (https://github.com/ 35 ietf-rats-wg/architecture). 37 Status of This Memo 39 This Internet-Draft is submitted in full conformance with the 40 provisions of BCP 78 and BCP 79. 42 Internet-Drafts are working documents of the Internet Engineering 43 Task Force (IETF). Note that other groups may also distribute 44 working documents as Internet-Drafts. The list of current Internet- 45 Drafts is at https://datatracker.ietf.org/drafts/current/. 47 Internet-Drafts are draft documents valid for a maximum of six months 48 and may be updated, replaced, or obsoleted by other documents at any 49 time. It is inappropriate to use Internet-Drafts as reference 50 material or to cite them other than as "work in progress." 52 This Internet-Draft will expire on 11 January 2021. 54 Copyright Notice 56 Copyright (c) 2020 IETF Trust and the persons identified as the 57 document authors. All rights reserved. 59 This document is subject to BCP 78 and the IETF Trust's Legal 60 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 61 license-info) in effect on the date of publication of this document. 
62 Please review these documents carefully, as they describe your rights 63 and restrictions with respect to this document. Code Components 64 extracted from this document must include Simplified BSD License text 65 as described in Section 4.e of the Trust Legal Provisions and are 66 provided without warranty as described in the Simplified BSD License. 68 Table of Contents 70 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 71 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 72 3. Reference Use Cases . . . . . . . . . . . . . . . . . . . . . 5 73 3.1. Network Endpoint Assessment . . . . . . . . . . . . . . . 5 74 3.2. Confidential Machine Learning (ML) Model Protection . . . 6 75 3.3. Confidential Data Retrieval . . . . . . . . . . . . . . . 6 76 3.4. Critical Infrastructure Control . . . . . . . . . . . . . 6 77 3.5. Trusted Execution Environment (TEE) Provisioning . . . . 7 78 3.6. Hardware Watchdog . . . . . . . . . . . . . . . . . . . . 7 79 4. Architectural Overview . . . . . . . . . . . . . . . . . . . 8 80 4.1. Appraisal Policies . . . . . . . . . . . . . . . . . . . 9 81 4.2. Two Types of Environments of an Attester . . . . . . . . 9 82 4.3. Layered Attestation Environments . . . . . . . . . . . . 10 83 4.4. Composite Device . . . . . . . . . . . . . . . . . . . . 12 84 5. Topological Models . . . . . . . . . . . . . . . . . . . . . 15 85 5.1. Passport Model . . . . . . . . . . . . . . . . . . . . . 15 86 5.2. Background-Check Model . . . . . . . . . . . . . . . . . 16 87 5.3. Combinations . . . . . . . . . . . . . . . . . . . . . . 17 88 6. Roles and Entities . . . . . . . . . . . . . . . . . . . . . 18 89 7. Trust Model . . . . . . . . . . . . . . . . . . . . . . . . . 19 90 7.1. Relying Party . . . . . . . . . . . . . . . . . . . . . . 19 91 7.2. Attester . . . . . . . . . . . . . . . . . . . . . . . . 20 92 7.3. Relying Party Owner . . . . . . . . . . . . . . . . . . . 20 93 7.4. Verifier . . . . . . . . . . . . . . . 
. . . . . . . . . 20 94 7.5. Endorser and Verifier Owner . . . . . . . . . . . . . . . 21 96 8. Conceptual Messages . . . . . . . . . . . . . . . . . . . . . 21 97 8.1. Evidence . . . . . . . . . . . . . . . . . . . . . . . . 21 98 8.2. Endorsements . . . . . . . . . . . . . . . . . . . . . . 21 99 8.3. Attestation Results . . . . . . . . . . . . . . . . . . . 22 100 9. Claims Encoding Formats . . . . . . . . . . . . . . . . . . . 23 101 10. Freshness . . . . . . . . . . . . . . . . . . . . . . . . . . 25 102 11. Privacy Considerations . . . . . . . . . . . . . . . . . . . 27 103 12. Security Considerations . . . . . . . . . . . . . . . . . . . 27 104 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 28 105 14. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 28 106 15. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 29 107 16. Appendix A: Time Considerations . . . . . . . . . . . . . . . 29 108 16.1. Example 1: Timestamp-based Passport Model Example . . . 30 109 16.2. Example 2: Nonce-based Passport Model Example . . . . . 32 110 16.3. Example 3: Timestamp-based Background-Check Model 111 Example . . . . . . . . . . . . . . . . . . . . . . . . 33 112 16.4. Example 4: Nonce-based Background-Check Model Example . 33 113 17. References . . . . . . . . . . . . . . . . . . . . . . . . . 34 114 17.1. Normative References . . . . . . . . . . . . . . . . . . 34 115 17.2. Informative References . . . . . . . . . . . . . . . . . 34 116 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 35 118 1. Introduction 120 In Remote Attestation Procedures (RATS), one peer (the "Attester") 121 produces believable information about itself - Evidence - to enable a 122 remote peer (the "Relying Party") to decide whether to consider that 123 Attester a trustworthy peer or not. RATS are facilitated by an 124 additional vital party, the Verifier. 
126 The Verifier appraises Evidence via Appraisal Policies and creates 127 the Attestation Results to support Relying Parties in their decision 128 process. This document defines a flexible architecture consisting 129 of attestation roles and their interactions via conceptual messages. 130 Additionally, this document defines a universal set of terms that can 131 be mapped to various existing and emerging Remote Attestation 132 Procedures. Common topological models and the data flows associated 133 with them, such as the "Passport Model" and the "Background-Check 134 Model", are illustrated. The purpose is to define useful terminology 135 for attestation and enable readers to map their solution architecture 136 to the canonical attestation architecture provided here. Having a 137 common terminology that provides well-understood meanings for common 138 themes such as roles, device composition, topological models, and 139 appraisal is vital for semantic interoperability across solutions and 140 platforms involving multiple vendors and providers. 142 Amongst other things, this document is about trust and 143 trustworthiness. Trust is a choice one makes about another system. 144 Trustworthiness is a quality of the other system that can be used 145 in making one's decision to trust it or not. This is a subtle 146 difference, and being familiar with it is crucial for 147 using this document. Additionally, the concepts of freshness and 148 trust relationships with respect to RATS are elaborated on to enable 149 implementers to choose appropriate solutions to compose 150 their Remote Attestation Procedures. 152 2. Terminology 154 This document uses the following terms. 156 Appraisal Policy for Evidence: A set of rules that informs how a 157 Verifier evaluates the validity of information about an Attester.
158 Compare /security policy/ in [RFC4949] 160 Appraisal Policy for Attestation Results: A set of rules that direct 161 how a Relying Party uses the Attestation Results generated by the 162 Verifiers regarding an Attester. Compare /security policy/ in 163 [RFC4949] 165 Attestation Result: The output generated by a Verifier, typically 166 including information about an Attester, where the Verifier 167 vouches for the validity of the results 169 Attester: A role performed by an entity (typically a device) whose 170 Evidence must be appraised in order to infer the extent to which 171 the Attester is considered trustworthy, such as when deciding 172 whether it is authorized to perform some operation 174 Claim: A piece of asserted information, often in the form of a name/ 175 value pair. (Compare /claim/ in [RFC7519]) 177 Endorsement: A secure statement that some entity (typically a 178 manufacturer) vouches for the integrity of an Attester's signing 179 capability 181 Endorser: An entity (typically a manufacturer) whose Endorsements 182 help Verifiers appraise the authenticity of Evidence 184 Evidence: A set of information about an Attester that is to be 185 appraised by a Verifier. Evidence may include configuration data, 186 measurements, telemetry, or inferences. 188 Relying Party: A role performed by an entity that depends on the 189 validity of information about an Attester, for purposes of 190 reliably applying application-specific actions. Compare /relying 191 party/ in [RFC4949] 193 Relying Party Owner: An entity (typically an administrator) that is 194 authorized to configure Appraisal Policy for Attestation Results 195 in a Relying Party 197 Verifier: A role performed by an entity that appraises the validity 198 of Evidence about an Attester and produces Attestation Results to 199 be used by a Relying Party 201 Verifier Owner: An entity (typically an administrator) that is 202 authorized to configure Appraisal Policy for Evidence in a 203 Verifier 205 3.
Reference Use Cases 207 This section covers a number of representative use cases for remote 208 attestation, independent of specific solutions. The purpose is to 209 provide motivation for various aspects of the architecture presented 210 in this draft. Many other use cases exist, and this document does 211 not intend to have a complete list, only to have a set of use cases 212 that collectively cover all the functionality required in the 213 architecture. 215 Each use case includes a description followed by a summary of the 216 Attester and Relying Party roles. 218 3.1. Network Endpoint Assessment 220 Network operators want a trustworthy report that includes identity 221 and version information of the hardware and software on the 222 machines attached to their network, for purposes such as inventory, 223 audit, anomaly detection, record maintenance and/or trending reports 224 (logging). The network operator may also want a policy by which full 225 access is only granted to devices that meet some definition of 226 hygiene, and so wants to get claims about such information and verify 227 their validity. Remote attestation is desired to prevent vulnerable 228 or compromised devices from getting access to the network and 229 potentially harming others. 231 Typically, solutions start with a specific component (called a "Root 232 of Trust") that provides device identity and protected storage for 233 measurements. The system components perform a series of measurements 234 that may be signed by the Root of Trust, considered as Evidence about 235 the hardware, firmware, BIOS, software, etc. that is running. 237 Attester: A device desiring access to a network 239 Relying Party: A network infrastructure device such as a router, 240 switch, or access point 242 3.2. Confidential Machine Learning (ML) Model Protection 244 A device manufacturer wants to protect its intellectual property.
245 This is primarily the ML model it developed and runs in the devices 246 purchased by its customers. The goals for the protection include 247 preventing attackers, potentially including the customers themselves, from 248 seeing the details of the model. 250 This typically works by having some protected environment in the 251 device go through a remote attestation with some manufacturer service 252 that can assess its trustworthiness. If remote attestation succeeds, 253 then the manufacturer service releases either the model, or a key to 254 decrypt a model the Attester already has in encrypted form, to the 255 requester. 257 Attester: A device desiring to run an ML model to do inferencing 259 Relying Party: A server or service holding ML models it desires to 260 protect 262 3.3. Confidential Data Retrieval 264 This is a generalization of the ML model use case above, where the 265 data can be any highly confidential data, such as health data about 266 customers, payroll data about employees, future business plans, etc. 267 Before the data is released, the state of the requesting system is 268 assessed against a set of policies, using attestation Evidence from 269 that system. Attestation is desired to prevent leaking data to 270 compromised devices. 272 Attester: An entity desiring to retrieve confidential data 274 Relying Party: An entity that holds confidential data for retrieval 275 by other entities 277 3.4. Critical Infrastructure Control 279 In this use case, potentially dangerous physical equipment (e.g., 280 power grid, traffic control, hazardous chemical processing, etc.) is 281 connected to a network. The organization managing such 282 infrastructure needs to ensure that only authorized code and users 283 can control such processes, and that they are protected from malware or 284 other adversaries.
When a protocol operation can affect some 285 critical system, the device attached to the critical equipment 286 wants some assurance that the requester has not been compromised. As 287 such, remote attestation can be used to only accept commands from 288 requesters that are within policy. 290 Attester: A device or application wishing to control physical 291 equipment 293 Relying Party: A device or application connected to potentially 294 dangerous physical equipment (hazardous chemical processing, 295 traffic control, power grid, etc.) 297 3.5. Trusted Execution Environment (TEE) Provisioning 299 A "Trusted Application Manager (TAM)" server is responsible for 300 managing the applications running in the TEE of a client device. To 301 do this, the TAM wants to assess the state of a TEE, or of 302 applications in the TEE, of a client device. The TEE conducts a 303 remote attestation procedure with the TAM, which can then decide 304 whether the TEE is already in compliance with the TAM's latest 305 policy, or if the TAM needs to uninstall, update, or install approved 306 applications in the TEE to bring it back into compliance with the 307 TAM's policy. 309 Attester: A device with a trusted execution environment capable of 310 running trusted applications that can be updated 312 Relying Party: A Trusted Application Manager 314 3.6. Hardware Watchdog 316 One significant problem is malware that holds a device hostage and 317 does not allow it to reboot, preventing updates from being applied. 318 This is especially serious because it allows a fleet of devices 319 to be held hostage for ransom. 321 In this case, the Relying Party is the watchdog timer in the TPM/ 322 secure enclave itself, as described in [TCGarch] section 43.3. The 323 Attestation Results are returned to the device, and provided to the 324 enclave. 326 If the watchdog does not receive regular and fresh Attestation 327 Results as to the system's health, then it forces a reboot.
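The watchdog logic just described can be sketched as follows. This is an illustrative sketch only: the class, method names, and injectable clock are hypothetical assumptions for exposition, not an interface defined by this architecture or by [TCGarch].

```python
import time

class AttestationWatchdog:
    """Illustrative sketch: a watchdog that forces a reboot unless
    fresh, valid Attestation Results keep arriving.  All names here
    are hypothetical, not part of the RATS architecture."""

    def __init__(self, max_age_seconds, now=time.monotonic):
        self.max_age = max_age_seconds
        self.now = now                  # injectable clock, for testing
        self.last_valid = now()

    def feed(self, result_valid, result_timestamp):
        # Only a *valid* and *fresh* Attestation Result resets the timer.
        if result_valid and (self.now() - result_timestamp) <= self.max_age:
            self.last_valid = self.now()

    def must_reboot(self):
        # True once no acceptable result has arrived within the window;
        # at this point the hardware would force a reboot.
        return (self.now() - self.last_valid) > self.max_age
```

Note that an invalid or stale result does not reset the timer, which is exactly what prevents hostage malware from indefinitely postponing the reboot.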
329 Attester: The device that is to be kept from being held hostage 330 for a long period of time 332 Relying Party: A remote server that will securely grant the Attester 333 permission to continue operating (i.e., not reboot) for a period 334 of time 336 4. Architectural Overview 338 Figure 1 depicts the data that flows between different roles, 339 independent of protocol or use case. 341 ************ ************ **************** 342 * Endorser * * Verifier * * Relying Party* 343 ************ * Owner * * Owner * 344 | ************ **************** 345 | | | 346 Endorsements| | | 347 | |Appraisal | 348 | |Policy | 349 | |for | Appraisal 350 | |Evidence | Policy for 351 | | | Attestation 352 | | | Results 353 v v | 354 .-----------------. | 355 .----->| Verifier |------. | 356 | '-----------------' | | 357 | | | 358 | Attestation| | 359 | Results | | 360 | Evidence | | 361 | | | 362 | v v 363 .----------. .-----------------. 364 | Attester | | Relying Party | 365 '----------' '-----------------' 367 Figure 1: Conceptual Data Flow 369 An Attester creates Evidence that is conveyed to a Verifier. 371 The Verifier uses the Evidence, and any Endorsements from Endorsers, 372 by applying an Appraisal Policy for Evidence to assess the 373 trustworthiness of the Attester, and generates Attestation Results 374 for use by Relying Parties. The Appraisal Policy for Evidence might 375 be obtained from an Endorser along with the Endorsements, or might be 376 obtained via some other mechanism such as being configured in the 377 Verifier by an administrator. 379 The Relying Party uses Attestation Results by applying its own 380 Appraisal Policy to make application-specific decisions such as 381 authorization decisions. The Appraisal Policy for Attestation 382 Results might, for example, be configured in the Relying Party by an 383 administrator. 385 4.1.
Appraisal Policies 387 The Verifier, when appraising Evidence, or the Relying Party, when 388 appraising Attestation Results, checks the values of some claims 389 against constraints specified in its Appraisal Policy. Such 390 constraints might involve a comparison for equality against a 391 reference value, or a check for being in a range bounded by reference 392 values, or membership in a set of reference values, or a check 393 against values in other claims, or any other test. 395 Such reference values might be specified as part of the Appraisal 396 Policy itself, or might be obtained from a separate source, such as 397 an Endorsement, and then used by the Appraisal Policy. 399 The actual data format and semantics of any reference values are 400 specific to claims and implementations. This architecture document 401 does not define any general-purpose format for them or general means 402 for comparison. 404 4.2. Two Types of Environments of an Attester 406 An Attester consists of at least one Attesting Environment and at 407 least one Target Environment. In some implementations, the Attesting 408 and Target Environments might be combined. Other implementations 409 might have multiple Attesting and Target Environments, such as in the 410 examples described in more detail in Section 4.3 and Section 4.4. 411 Other examples may exist, and the examples discussed could be 412 combined into even more complex implementations. 414 Claims are collected from Target Environments, as shown in Figure 2. 415 That is, Attesting Environments collect the raw values and the 416 information to be represented in claims, such as by doing some 417 measurement of a Target Environment's code, memory, and/or registers. 418 Attesting Environments then format the claims appropriately, and 419 typically use key material and cryptographic functions, such as 420 signing or cipher algorithms, to create Evidence.
Places that 421 Attesting Environments can exist include Trusted Execution 422 Environments (TEE), embedded Secure Elements (eSE), and BIOS 423 firmware. An execution environment may not, by default, be capable 424 of claims collection for a given Target Environment. Execution 425 environments that are designed to be capable of claims collection are 426 referred to in this document as Attesting Environments. 428 .--------------------------------. 429 | | 430 | Verifier | 431 | | 432 '--------------------------------' 433 ^ 434 | 435 .-------------------------|----------. 436 | | | 437 | .----------------. | | 438 | | Target | | | 439 | | Environment | | | 440 | | | | Evidence | 441 | '----------------' | | 442 | | | | 443 | | | | 444 | Collect | | | 445 | Claims | | | 446 | | | | 447 | v | | 448 | .-------------. | 449 | | Attesting | | 450 | | Environment | | 451 | | | | 452 | '-------------' | 453 | Attester | 454 '------------------------------------' 456 Figure 2: Two Types of Environments 458 4.3. Layered Attestation Environments 460 By definition, the Attester role creates Evidence. An Attester may 461 consist of one or more nested or staged environments, adding 462 complexity to the architectural structure. The unifying component is 463 the Root of Trust and the nested, staged, or chained attestation 464 Evidence produced. The nested or chained structure includes Claims, 465 collected by the Attester to aid in the assurance or believability of 466 the attestation Evidence. 468 Figure 3 depicts an example of a device that includes (A) a BIOS 469 stored in read-only memory in this example, (B) an updatable 470 bootloader, and (C) an operating system kernel. 472 .----------. .----------. 473 | | | | 474 | Endorser |------------------->| Verifier | 475 | | Endorsements | | 476 '----------' for A, B, and C '----------' 477 ^ 478 .------------------------------------. | 479 | | | 480 | .---------------------------. 
| | 481 | | Target | | | Layered 482 | | Environment | | | Evidence 483 | | C | | | for 484 | '---------------------------' | | B and C 485 | Collect | | | 486 | claims | | | 487 | .---------------|-----------. | | 488 | | Target v | | | 489 | | Environment .-----------. | | | 490 | | B | Attesting | | | | 491 | | |Environment|-----------' 492 | | | B | | | 493 | | '-----------' | | 494 | | ^ | | 495 | '---------------------|-----' | 496 | Collect | | Evidence | 497 | claims v | for B | 498 | .-----------. | 499 | | Attesting | | 500 | |Environment| | 501 | | A | | 502 | '-----------' | 503 | | 504 '------------------------------------' 506 Figure 3: Layered Attester 508 Attesting Environment A, the read-only BIOS in this example, has to 509 ensure the integrity of the bootloader (Target Environment B). There 510 are potentially multiple kernels to boot, and the decision is up to 511 the bootloader. Only a bootloader with intact integrity will make an 512 appropriate decision. Therefore, these Claims have to be measured 513 securely. At this stage of the boot-cycle of the device, the Claims 514 collected typically cannot be composed into Evidence. 516 After the boot sequence is started, the most 517 important and defining feature of layered attestation takes place: 518 the successfully measured Target Environment B now becomes (or 519 contains) an Attesting Environment for the next layer. This 520 procedure in Layered Attestation is sometimes called "staging". It 521 is important that the new Attesting Environment B not be able to 522 alter any Claims about its own Target Environment B. This can be 523 ensured by having those Claims be either signed by Attesting Environment 524 A or stored in an untamperable manner by Attesting Environment A. 526 Continuing with this example, the bootloader's Attesting Environment 527 B is now in charge of collecting Claims about Target Environment C, 528 which in this example is the kernel to be booted.
The final Evidence 529 thus contains two sets of Claims: one set about the bootloader as 530 measured and signed by the BIOS, plus a set of Claims about the 531 kernel as measured and signed by the bootloader. 533 This example could be extended further by making the kernel become 534 another Attesting Environment for an application as another Target 535 Environment. This would result in a third set of Claims in the 536 Evidence pertaining to that application. 538 The essence of this example is a cascade of staged environments. 539 Each environment has the responsibility of measuring the next 540 environment before the next environment is started. In general, the 541 number of layers may vary by device or implementation, and an 542 Attesting Environment might even have multiple Target Environments 543 that it measures, rather than only one as shown in Figure 3. 545 4.4. Composite Device 547 A Composite Device is an entity composed of multiple sub-entities 548 such that its trustworthiness has to be determined by the appraisal 549 of all these sub-entities. 551 Each sub-entity has at least one Attesting Environment collecting the 552 claims from at least one Target Environment; the sub-entity then 553 generates Evidence about its trustworthiness. Therefore, each sub- 554 entity can be called an Attester. Among all the Attesters, only 555 some may have the ability to communicate with the Verifier 556 while others do not. 558 For example, a carrier-grade router consists of a chassis and 559 multiple slots. The trustworthiness of the router depends on all its 560 slots' trustworthiness. Each slot has an Attesting Environment such 561 as a TEE collecting the claims of its boot process, after which it 562 generates Evidence from the claims. Among these slots, only a main 563 slot can communicate with the Verifier while other slots cannot. But 564 other slots can communicate with the main slot over the links between 565 them inside the router.
So the main slot collects the Evidence of 566 other slots, produces the final Evidence of the whole router and 567 conveys the final Evidence to the Verifier. Therefore the router is 568 a Composite Device, each slot is an Attester, and the main slot is 569 the lead Attester. 571 Another example is a multi-chassis router composed of multiple single 572 carrier-grade routers. The multi-chassis router provides higher 573 throughput by interconnecting multiple routers and can be logically 574 treated as one router for simpler management. A multi-chassis router 575 provides a management point that connects to the Verifier. Other 576 routers are only connected to the main router by the network cables, 577 and therefore they are managed and appraised via this main router's 578 help. So, in this case, the multi-chassis router is the Composite 579 Device, each router is an Attester and the main router is the lead 580 Attester. 582 Figure 4 depicts the conceptual data flow for a Composite Device. 584 .-----------------------------. 585 | Verifier | 586 '-----------------------------' 587 ^ 588 | 589 | Evidence of 590 | Composite Device 591 | 592 .----------------------------------|-------------------------------. 593 | .--------------------------------|-----. .------------. | 594 | | Collect .------------. | | | | 595 | | Claims .--------->| Attesting |<--------| Attester B |-. | 596 | | | |Environment | | '------------. | | 597 | | .----------------. | |<----------| Attester C |-. | 598 | | | Target | | | | '------------' | | 599 | | | Environment(s) | | |<------------| ... 
| | 600 | | | | '------------' | Evidence '------------' | 601 | | '----------------' | of | 602 | | | Attesters | 603 | | lead Attester A | (via Internal Links or | 604 | '--------------------------------------' Network Connections) | 605 | | 606 | Composite Device | 607 '------------------------------------------------------------------' 609 Figure 4: Conceptual Data Flow for a Composite Device 611 In the Composite Device, each Attester generates its own Evidence by 612 its Attesting Environment(s) collecting the claims from its Target 613 Environment(s). The lead Attester collects the Evidence of all other 614 Attesters and then generates the Evidence of the whole Composite 615 Device. 617 An entity can take on multiple RATS roles (e.g., Attester, Verifier, 618 Relying Party, etc.) at the same time. The combination of roles can 619 be arbitrary. For example, in this Composite Device scenario, the 620 entity inside the lead Attester can also take on the role of a 621 Verifier, and the entity acting as the outside Verifier can take on the role of 622 a Relying Party. After collecting the Evidence of other Attesters, 623 this inside Verifier uses Endorsements and Appraisal Policies 624 (obtained the same way as any other Verifier) in the verification 625 process to generate Attestation Results. The inside Verifier then 626 conveys the Attestation Results of other Attesters, whether in the 627 same conveyance protocol as the Evidence or not, to the outside 628 Verifier. 630 In this situation, the trust model described in Section 7 is also 631 suitable for this inside Verifier. 633 5. Topological Models 635 Figure 1 shows a basic model for communication between an Attester, a 636 Verifier, and a Relying Party. The Attester conveys its Evidence to 637 the Verifier for appraisal, and the Relying Party gets the 638 Attestation Result from the Verifier. There are multiple other 639 possible models. This section includes some reference models.
This 640 is not intended to be a restrictive list, and other variations may 641 exist. 643 5.1. Passport Model 645 The passport model is so named because of its resemblance to how 646 nations issue passports to their citizens. The nature of the 647 Evidence that an individual needs to provide to its local authority 648 is specific to the country involved. The citizen retains control of 649 the resulting passport document and presents it to other entities 650 when it needs to assert a citizenship or identity claim, such as at an 651 airport immigration desk. The passport is considered sufficient 652 because it vouches for the citizenship and identity claims, and it is 653 issued by a trusted authority. Thus, in this immigration desk 654 analogy, the passport-issuing agency is a Verifier, the passport is 655 an Attestation Result, and the immigration desk is a Relying Party. 657 In this model, an Attester conveys Evidence to a Verifier, which 658 compares the Evidence against its Appraisal Policy. The Verifier 659 then gives back an Attestation Result. If the Attestation Result is 660 a successful one, the Attester can then present the Attestation 661 Result to a Relying Party, which then compares the Attestation Result 662 against its own Appraisal Policy. 664 There are three ways in which the process may fail. First, the 665 Verifier may refuse to issue the Attestation Result due to some error 666 in processing, or some missing input to the Verifier. Second, 667 when the Attestation Result is 668 examined by the Relying Party, it may not pass the Relying Party's 669 Appraisal Policy. The third way is when the 670 Verifier is unreachable. 672 Since the resource access protocol between the Attester and Relying 673 Party includes an Attestation Result, in this model the details of 674 that protocol constrain the serialization format of the Attestation 675 Result.
The format of the Evidence, on the other hand, is constrained only 676 by the Attester-Verifier remote attestation protocol. 678 +-------------+ 679 | | Compare Evidence 680 | Verifier | against Appraisal Policy 681 | | 682 +-------------+ 683 ^ | 684 Evidence| |Attestation 685 | | Result 686 | v 687 +----------+ +---------+ 688 | |------------->| |Compare Attestation 689 | Attester | Attestation | Relying | Result against 690 | | Result | Party | Appraisal 691 +----------+ +---------+ Policy 693 Figure 5: Passport Model 695 5.2. Background-Check Model 697 The background-check model is so named because of its resemblance to 698 how employers and volunteer organizations perform background checks. 699 When a prospective employee provides claims about education or 700 previous experience, the employer will contact the respective 701 institutions or former employers to validate the claim. Volunteer 702 organizations often perform police background checks on volunteers in 703 order to determine the volunteer's trustworthiness. Thus, in this 704 analogy, a prospective volunteer is an Attester, the organization is 705 the Relying Party, and a former employer or government agency that 706 issues a report is a Verifier. 708 In this model, an Attester conveys Evidence to a Relying Party, which 709 simply passes it on to a Verifier. The Verifier then compares the 710 Evidence against its Appraisal Policy, and returns an Attestation 711 Result to the Relying Party. The Relying Party then compares the 712 Attestation Result against its own Appraisal Policy. 714 The resource access protocol between the Attester and Relying Party 715 includes Evidence rather than an Attestation Result, but that 716 Evidence is not processed by the Relying Party. Since the Evidence 717 is merely forwarded on to a trusted Verifier, any serialization 718 format can be used for Evidence because the Relying Party does not 719 need a parser for it.
The only requirement is that the Evidence can 720 be _encapsulated in_ the format required by the resource access 721 protocol between the Attester and Relying Party. 723 However, as in the passport model, an Attestation Result is still 724 consumed by the Relying Party and so the serialization format of the 725 Attestation Result is still important. If the Relying Party is a 726 constrained node whose purpose is to serve a given type of resource 727 using a standard resource access protocol, it already needs the 728 parser(s) required by that existing protocol. Hence, the ability to 729 let the Relying Party obtain an Attestation Result in the same 730 serialization format minimizes the code footprint and attack 731 surface area of the Relying Party. 734 +-------------+ 735 | | Compare Evidence 736 | Verifier | against Appraisal 737 | | Policy 738 +-------------+ 739 ^ | 740 Evidence| |Attestation 741 | | Result 742 | v 743 +------------+ +-------------+ 744 | |-------------->| | Compare Attestation 745 | Attester | Evidence | Relying | Result against 746 | | | Party | Appraisal Policy 747 +------------+ +-------------+ 749 Figure 6: Background-Check Model 751 5.3. Combinations 753 One variation of the background-check model is where the Relying 754 Party and the Verifier are on the same machine, performing both 755 functions together. In this case, there is no need for a protocol 756 between the two. 758 It is also worth pointing out that the choice of model is generally 759 up to the Relying Party. The same device may need to create Evidence 760 for different Relying Parties and/or different use cases. For 761 instance, it would provide Evidence to a network infrastructure 762 device to gain access to the network, and to a server holding 763 confidential data to gain access to that data. As such, both models 764 may simultaneously be in use by the same device.
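As a non-normative illustration, the background-check flow in Figure 6 might be sketched as follows; the function names and the trivial stand-in policies are hypothetical, and a real Verifier would apply a full Appraisal Policy rather than a byte comparison:

```python
# Hypothetical sketch of the background-check model (Figure 6).
# The Relying Party treats Evidence as an opaque blob and appraises
# only the Attestation Result returned by its trusted Verifier.

def verifier_appraise(evidence: bytes) -> dict:
    """Verifier: compare Evidence against its Appraisal Policy
    (a trivial known-good check stands in for real appraisal logic)."""
    compliant = (evidence == b"expected-claims")
    return {"compliant": compliant}

def relying_party_decide(evidence: bytes) -> bool:
    """Relying Party: forward Evidence unparsed, then compare the
    returned Attestation Result against its own Appraisal Policy."""
    # The Relying Party needs no parser for the Evidence format.
    attestation_result = verifier_appraise(evidence)
    return attestation_result["compliant"] is True
```

The point of the sketch is the division of labor: the Relying Party never interprets the Evidence bytes, so any serialization format works for Evidence as long as the Attestation Result is in a format the Relying Party understands.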
Figure 7 shows another example of a combination where Relying Party 1 767 uses the passport model, whereas Relying Party 2 uses an extension of 768 the background-check model. Specifically, in addition to the basic 769 functionality shown in Figure 6, Relying Party 2 actually provides 770 the Attestation Result back to the Attester, allowing the Attester to 771 use it with other Relying Parties. This is the model that the 772 Trusted Application Manager plans to support in the TEEP architecture 773 [I-D.ietf-teep-architecture]. 775 +-------------+ 776 | | Compare Evidence 777 | Verifier | against Appraisal Policy 778 | | 779 +-------------+ 780 ^ | 781 Evidence| |Attestation 782 | | Result 783 | v 784 +-------------+ 785 | | Compare 786 | Relying | Attestation Result 787 | Party 2 | against Appraisal Policy 788 +-------------+ 789 ^ | 790 Evidence| |Attestation 791 | | Result 792 | v 793 +----------+ +----------+ 794 | |-------------->| | Compare Attestation 795 | Attester | Attestation | Relying | Result against 796 | | Result | Party 1 | Appraisal Policy 797 +----------+ +----------+ 799 Figure 7: Example Combination 801 6. Roles and Entities 803 An entity in the RATS architecture includes at least one of the roles 804 defined in this document. An entity can aggregate more than one role 805 into itself. These collapsed roles combine the duties of multiple 806 roles. 808 In these cases, interaction between these roles does not necessarily 809 use the Internet Protocol. They can use a loopback device or 810 other IP-based communication between separate environments, but they 811 do not have to. Alternative channels to convey conceptual messages 812 include function calls, sockets, GPIO interfaces, local busses, or 813 hypervisor calls. This type of conveyance is typically found in 814 Composite Devices.
Most importantly, these conveyance methods are 815 out-of-scope of RATS, but they are presumed to exist in order to 816 convey conceptual messages appropriately between roles. 818 For example, an entity that both connects to a wide-area network and 819 to a system bus is taking on both the Attester and Verifier roles. 820 As a system bus entity, a Verifier consumes Evidence from other 821 devices connected to the system bus that implement Attester roles. 822 As a wide-area network connected entity, it may implement an Attester 823 role. The entity, as a system bus Verifier, may choose to fully 824 isolate its role as a wide-area network Attester. 826 In essence, an entity that combines more than one role creates and 827 consumes the corresponding conceptual messages as defined in this 828 document. 830 7. Trust Model 832 7.1. Relying Party 834 The scope of this document is scenarios for which a Relying Party 835 trusts a Verifier that can appraise the trustworthiness of 836 information about an Attester. Such trust might come by the Relying 837 Party trusting the Verifier (or its public key) directly, or might 838 come by trusting an entity (e.g., a Certificate Authority) that is in 839 the Verifier's certificate chain. 841 The Relying Party might implicitly trust a Verifier, such as in a 842 Verifier/Relying Party combination where the Verifier and Relying 843 Party roles are combined. Or, for a stronger level of security, the 844 Relying Party might require that the Verifier first provide 845 information about itself that the Relying Party can use to assess the 846 trustworthiness of the Verifier before accepting its Attestation 847 Results. 849 For example, one explicit way for a Relying Party "A" to establish 850 such trust in a Verifier "B", would be for B to first act as an 851 Attester where A acts as a combined Verifier/Relying Party. If A 852 then accepts B as trustworthy, it can choose to accept B as a 853 Verifier for other Attesters. 
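That bootstrapping sequence might be sketched as follows; the identifiers, the reference-value check, and the trusted_verifiers set are all hypothetical stand-ins for whatever trust store and appraisal logic a real deployment would use:

```python
# Hypothetical sketch: Relying Party "A" establishes explicit trust in
# Verifier "B" by first treating B as an Attester.

trusted_verifiers: set = set()

def appraise_as_verifier_and_relying_party(evidence: dict) -> bool:
    """A, acting as a combined Verifier/Relying Party, appraises B's
    own Evidence (a stand-in reference-value check)."""
    return evidence.get("fw_digest") == "known-good-digest"

def accept_verifier(verifier_id: str, evidence: dict) -> bool:
    """If B's Evidence passes appraisal, A accepts B as a Verifier
    for other Attesters from then on."""
    if appraise_as_verifier_and_relying_party(evidence):
        trusted_verifiers.add(verifier_id)
        return True
    return False
```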
855 Similarly, the Relying Party also needs to trust the Relying Party 856 Owner to provide its Appraisal Policy for Attestation Results, and 857 in some scenarios the Relying Party might even require that the 858 Relying Party Owner go through a remote attestation procedure with it 859 before the Relying Party will accept an updated policy. This can be 860 done similarly to how a Relying Party could establish trust in a 861 Verifier as discussed above. 863 7.2. Attester 865 In some scenarios, Evidence might contain sensitive information such 866 as Personally Identifiable Information. Thus, an Attester must trust 867 the entities to which it conveys Evidence not to reveal sensitive data 868 to unauthorized parties. The Verifier might share this information 869 with other authorized parties, according to rules that it controls. 870 In the background-check model, this Evidence may also be revealed to 871 Relying Party(s). 873 In some cases where Evidence contains sensitive information, an 874 Attester might even require that a Verifier first go through a remote 875 attestation procedure with it before the Attester will send the 876 sensitive Evidence. This can be done by having the Attester first 877 act as a Verifier/Relying Party, and the Verifier act as its own 878 Attester, as discussed above. 880 7.3. Relying Party Owner 882 The Relying Party Owner might also require that the Relying Party 883 first act as an Attester, providing Evidence that the Owner can 884 appraise, before the Owner would give the Relying Party an updated 885 policy that might contain sensitive information. In such a case, 886 mutual attestation might be needed, in which case typically one 887 side's Evidence must be considered safe to share with an untrusted 888 entity, in order to bootstrap the sequence. 890 7.4.
Verifier 892 The Verifier trusts (or more specifically, the Verifier's security 893 policy is written in a way that configures the Verifier to trust) a 894 manufacturer, or the manufacturer's hardware, so as to be able to 895 appraise the trustworthiness of that manufacturer's devices. In 896 solutions with weaker security, a Verifier might be configured to 897 implicitly trust firmware or even software (e.g., a hypervisor). 898 That is, it might appraise the trustworthiness of an application 899 component, operating system component, or service under the 900 assumption that information provided about it by the lower-layer 901 hypervisor or firmware is true. A stronger level of assurance of 902 security comes when information can be vouched for by hardware or by 903 ROM code, especially if such hardware is physically resistant to 904 tampering. The component that is implicitly trusted is 905 often referred to as a Root of Trust. 907 A conveyance protocol that provides authentication and integrity 908 protection can be used to convey unprotected Evidence, assuming the 909 following properties exist: 911 1. The key material used to authenticate and integrity protect the 912 conveyance channel is trusted by the Verifier to speak for the 913 Attesting Environment(s) that collected claims about the Target 914 Environment(s). 916 2. All unprotected Evidence that is conveyed is supplied exclusively 917 by the Attesting Environment that has the key material that 918 protects the conveyance channel. 920 3. The Root of Trust protects both the conveyance channel key 921 material and the Attesting Environment with equivalent strength 922 protections. 924 7.5. Endorser and Verifier Owner 926 In some scenarios, the Endorser and Verifier Owner may need to trust 927 the Verifier before giving the Endorsement and Appraisal Policy to 928 it.
This can be done similarly to how a Relying Party might 929 establish trust in a Verifier as discussed above, and in such a case, 930 mutual attestation might even be needed as discussed in Section 7.3. 932 8. Conceptual Messages 934 8.1. Evidence 936 Evidence is a set of claims about the target environment that reveal 937 security-relevant operational status, health, configuration, or 938 construction. Evidence is evaluated by a Verifier to establish 939 its relevance, compliance, and timeliness. Claims need to be 940 collected in a reliable manner. Evidence needs to be 941 securely associated with the target environment so that the Verifier 942 cannot be tricked into accepting claims originating from a different 943 environment (that may be more trustworthy). Evidence also must be 944 protected from man-in-the-middle attackers who may observe, change, or 945 misdirect Evidence as it travels from Attester to Verifier. The 946 timeliness of Evidence can be captured using claims that pinpoint the 947 time or interval when changes in operational status, health, and so 948 forth occur. 950 8.2. Endorsements 952 An Endorsement is a secure statement that some entity (e.g., a 953 manufacturer) vouches for the integrity of the device's signing 954 capability. For example, if the signing capability is in hardware, 955 then an Endorsement might be a manufacturer certificate that signs a 956 public key whose corresponding private key is only known inside the 957 device's hardware. Thus, when Evidence and such an Endorsement are 958 used together, an appraisal procedure can be conducted based on 959 Appraisal Policies that may not be specific to the device instance, 960 but merely specific to the manufacturer providing the Endorsement.
961 For example, an Appraisal Policy might simply check that devices from 962 a given manufacturer have information matching a set of known-good 963 reference values, or an Appraisal Policy might have a set of more 964 complex logic on how to appraise the validity of information. 966 However, while an Appraisal Policy that treats all devices from a 967 given manufacturer the same may be appropriate for some use cases, it 968 would be inappropriate to use such an Appraisal Policy as the sole 969 means of authorization for use cases that wish to constrain _which_ 970 compliant devices are considered authorized for some purpose. For 971 example, an enterprise using remote attestation for Network Endpoint 972 Assessment may not wish to let every healthy laptop from the same 973 manufacturer onto the network, but instead only want to let devices 974 that it legally owns onto the network. Thus, an Endorsement may be 975 helpful information in authenticating information about a device, but 976 is not necessarily sufficient to authorize access to resources which 977 may need device-specific information such as a public key for the 978 device or component or user on the device. 980 8.3. Attestation Results 982 Attestation Results may indicate compliance or non-compliance with a 983 Verifier's Appraisal Policy. A result that indicates non-compliance 984 can be used by an Attester (in the passport model) or a Relying Party 985 (in the background-check model) to indicate that the Attester should 986 not be treated as authorized and may be in need of remediation. In 987 some cases, it may even indicate that the Evidence itself cannot be 988 authenticated as being correct. 990 An Attestation Result that indicates compliance can be used by a 991 Relying Party to make authorization decisions based on the Relying 992 Party's Appraisal Policy. 
The simplest such policy might be 993 simply to authorize any party supplying a compliant Attestation Result 994 signed by a trusted Verifier. A more complex policy might also 995 entail comparing information provided in the result against known- 996 good reference values, or applying more complex logic on such 997 information. 999 Thus, Attestation Results often need to include detailed information 1000 about the Attester, for use by Relying Parties, much like physical 1001 passports and drivers' licenses include personal information such as 1002 name and date of birth. Unlike Evidence, which is often very device- 1003 and vendor-specific, Attestation Results can be vendor-neutral if the 1004 Verifier has a way to generate vendor-agnostic information based on 1005 the appraisal of vendor-specific information in Evidence. This 1006 allows a Relying Party's Appraisal Policy to be simpler, potentially 1007 based on standard ways of expressing the information, while still 1008 allowing interoperability with heterogeneous devices. 1010 Finally, whereas Evidence is signed by the device (or indirectly by a 1011 manufacturer, if Endorsements are used), Attestation Results are 1012 signed by a Verifier, allowing a Relying Party to need a trust 1013 relationship with only one entity, rather than a larger set of entities, 1014 for purposes of its Appraisal Policy. 1016 9. Claims Encoding Formats 1018 The following diagram illustrates a relationship to which remote 1019 attestation is desired to be added: 1021 +-------------+ +------------+ Evaluate 1022 | |-------------->| | request 1023 | Attester | Access some | Relying | against 1024 | | resource | Party | security 1025 +-------------+ +------------+ policy 1027 Figure 8: Typical Resource Access 1029 In this diagram, the protocol between an Attester and a Relying Party 1030 can be any new or existing protocol (e.g., HTTP(S), COAP(S), ROLIE 1031 [RFC8322], 802.1x, OPC UA, etc.), depending on the use case.
Such 1032 protocols typically already have mechanisms for passing security 1033 information for purposes of authentication and authorization. Common 1034 formats include JWTs [RFC7519], CWTs [RFC8392], and X.509 1035 certificates. 1037 To enable remote attestation to be added to existing protocols, 1038 enabling a higher level of assurance against malware for example, it 1039 is important that information needed for appraising the Attester be 1040 usable with existing protocols that have constraints around what 1041 formats they can transport. For example, OPC UA [OPCUA] (probably 1042 the most common protocol in industrial IoT environments) is defined 1043 to carry X.509 certificates and so security information must be 1044 embedded into an X.509 certificate to be passed in the protocol. 1045 Thus, remote attestation related information could be natively 1046 encoded in X.509 certificate extensions, or could be natively encoded 1047 in some other format (e.g., a CWT) which in turn is then encoded in 1048 an X.509 certificate extension. 1050 Especially for constrained nodes, however, there is a desire to 1051 minimize the amount of parsing code needed in a Relying Party, in 1052 order to both minimize footprint and to minimize the attack surface 1053 area. So while it would be possible to embed a CWT inside a JWT, or 1054 a JWT inside an X.509 extension, etc., there is a desire to encode 1055 the information natively in the format that is natural for the 1056 Relying Party. 1058 This motivates having a common "information model" that describes the 1059 set of remote attestation related information in an encoding-agnostic 1060 way, and allowing multiple encoding formats (CWT, JWT, X.509, etc.) 1061 that encode the same information into the claims format needed by the 1062 Relying Party. 
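As an illustration of such an information model, several standard claims have both JWT string names [RFC7519] and registered CWT integer claim keys [RFC8392], so one encoding-agnostic claim set can be rendered into either claims format. The sketch below shows only the key mapping; the JOSE/COSE signing and serialization steps are omitted:

```python
# Illustrative only: one information model, two claims formats.
# CWT integer claim keys registered in RFC 8392 for the corresponding
# JWT claim names from RFC 7519.
CWT_CLAIM_KEYS = {"iss": 1, "sub": 2, "aud": 3, "exp": 4, "nbf": 5, "iat": 6}

def to_jwt_claims(info: dict) -> dict:
    """JWT payloads carry the string claim names directly."""
    return dict(info)

def to_cwt_claims(info: dict) -> dict:
    """CWT payloads use compact integer keys for the same claims."""
    return {CWT_CLAIM_KEYS[name]: value for name, value in info.items()}
```

For example, a claim set with an issuer and an issued-at time maps to string keys for a JWT and to the integer keys 1 and 6 for a CWT, while the information itself stays the same.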
1064 The following diagram illustrates that Evidence and Attestation 1065 Results might each have multiple possible encoding formats, so that 1066 they can be conveyed by various existing protocols. It also 1067 illustrates why the Verifier might be responsible for accepting 1068 Evidence that encodes claims in one format, while issuing Attestation 1069 Results that encode claims in a different format. 1071 Evidence Attestation Results 1072 .--------------. CWT CWT .-------------------. 1073 | Attester-A |------------. .----------->| Relying Party V | 1074 '--------------' v | `-------------------' 1075 .--------------. JWT .------------. JWT .-------------------. 1076 | Attester-B |-------->| Verifier |-------->| Relying Party W | 1077 '--------------' | | `-------------------' 1078 .--------------. X.509 | | X.509 .-------------------. 1079 | Attester-C |-------->| |-------->| Relying Party X | 1080 '--------------' | | `-------------------' 1081 .--------------. TPM | | TPM .-------------------. 1082 | Attester-D |-------->| |-------->| Relying Party Y | 1083 '--------------' '------------' `-------------------' 1084 .--------------. other ^ | other .-------------------. 1085 | Attester-E |------------' '----------->| Relying Party Z | 1086 '--------------' `-------------------' 1088 Figure 9: Multiple Attesters and Relying Parties with Different 1089 Formats 1091 10. Freshness 1093 A remote entity (Verifier or Relying Party) may need to learn the 1094 point in time (i.e., the "epoch") at which Evidence or an Attestation 1095 Result was produced. This is essential in deciding whether the 1096 included Claims and their values can be considered fresh, meaning 1097 they still reflect the latest state of the Attester, and that any 1098 Attestation Result was generated using the latest Appraisal Policy 1099 for Evidence.
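A non-normative sketch of such a freshness decision, including the caching and reuse it enables on constrained devices, might look like the following; all names, and the choice of epoch-seconds floats, are hypothetical:

```python
# Hypothetical sketch: a consuming entity (Verifier or Relying Party)
# accepts an artifact only while its epoch is within a locally
# configured "expiry" threshold.

def is_fresh(epoch: float, now: float, threshold: float) -> bool:
    """Accept Evidence or an Attestation Result only if it was
    produced recently enough for this entity's policy."""
    return (now - epoch) < threshold

_result_cache: dict = {}

def get_attestation_result(attester_id: str, now: float,
                           threshold: float, appraise) -> dict:
    """Reuse a cached Attestation Result while it remains fresh;
    otherwise re-appraise (saving, e.g., energy on constrained
    devices)."""
    entry = _result_cache.get(attester_id)
    if entry and is_fresh(entry["epoch"], now, threshold):
        return entry["result"]
    result = appraise()
    _result_cache[attester_id] = {"epoch": now, "result": result}
    return result
```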
1101 Freshness is assessed based on a policy defined by the consuming 1102 entity, Verifier or Relying Party, that compares the estimated epoch 1103 against an "expiry" threshold defined locally to that policy. There 1104 is, however, always a possible race condition in that the state of 1105 the Attester, and the Appraisal Policy for Evidence, might change 1106 immediately after the Evidence or Attestation Result was generated. 1107 The goal is merely to narrow their recentness to something the 1108 Verifier (for Evidence) or Relying Party (for Attestation Result) is 1109 willing to accept. Freshness is a key component for enabling caching 1110 and reuse of both Evidence and Attestation Results, which is 1111 especially valuable in cases where their computation uses a 1112 substantial part of the resource budget (e.g., energy in constrained 1113 devices). 1115 There are two common approaches for determining the epoch of 1116 Evidence or an Attestation Result. 1118 The first approach is to rely on synchronized and trustworthy clocks, 1119 and include a signed timestamp (see [I-D.birkholz-rats-tuda]) along 1120 with the Claims in the Evidence or Attestation Result. Timestamps 1121 can be added on a per-Claim basis, to distinguish the time of 1122 creation of Evidence or Attestation Result from the time that a 1123 specific Claim was generated. The clock's trustworthiness typically 1124 requires additional Claims about the signer's time synchronization 1125 mechanism. 1127 A second approach places the onus of timekeeping solely on the 1128 appraising entity, i.e., the Verifier (for Evidence), or the Relying 1129 Party (for Attestation Results), and might be suitable, for example, 1130 in cases where the Attester does not have a reliable clock or time 1131 synchronization is otherwise impaired.
In this approach, a non- 1132 predictable nonce is sent by the appraising entity, and the nonce is 1133 then signed and included along with the Claims in the Evidence or 1134 Attestation Result. After checking that the sent and received nonces 1135 are the same, the appraising entity knows that the Claims were signed 1136 after the nonce was generated. This allows associating a "rough" 1137 epoch to the Evidence or Attestation Result. In this case the epoch 1138 is said to be rough because: 1140 * The epoch applies to the entire claim set instead of a more 1141 granular association, and 1143 * The time between the creation of Claims and the collection of 1144 Claims is indistinguishable. 1146 Implicit and explicit timekeeping can be combined into hybrid 1147 mechanisms. For example, if clocks exist and are considered 1148 trustworthy but are not synchronized, a nonce-based exchange may be 1149 used to determine the (relative) time offset between the involved 1150 peers, followed by any number of timestamp based exchanges. In 1151 another setup where all Roles (Attesters, Verifiers and Relying 1152 Parties) share the same broadcast channel, the nonce-based approach 1153 may be used to anchor all parties to the same (relative) timeline, 1154 without requiring synchronized clocks, by having a central entity 1155 emit nonces at regular intervals and have the "current" nonce 1156 included in the produced Evidence or Attestation Result. 1158 It is important to note that the actual values in Claims might have 1159 been generated long before the Claims are signed. If so, it is the 1160 signer's responsibility to ensure that the values are still correct 1161 when they are signed. For example, values generated at boot time 1162 might have been saved to secure storage until network connectivity is 1163 established to the remote Verifier and a nonce is obtained. 1165 A more detailed discussion with examples appears in Section 16. 1167 11. 
Privacy Considerations 1169 The conveyance of Evidence and the resulting Attestation Results 1170 reveal a great deal of information about the internal state of a 1171 device as well as any users the device is associated with. In many 1172 cases, the whole point of the Attestation process is to provide 1173 reliable information about the type of the device and the firmware/ 1174 software that the device is running. This information might be 1175 particularly interesting to many attackers. For example, knowing 1176 that a device is running a weak version of firmware provides a way to 1177 better target attacks. 1179 Many claims in Attestation Evidence and Attestation Results are 1180 potentially PII (Personally Identifiable Information) depending on the 1181 end-to-end use case of the attestation. Attestation that extends up 1182 the stack to include containers and applications may reveal further 1183 details about a specific system or user. 1185 In some cases, an attacker may be able to make inferences about 1186 attestations from the results or timing of the processing. For 1187 example, an attacker might be able to infer the value of specific 1188 claims if it knew that only certain values were accepted by the 1189 Relying Party. 1191 Evidence and Attestation Results data structures are expected to 1192 support integrity protection encoding (e.g., COSE, JOSE, X.509) and 1193 optionally might support confidentiality protection (e.g., COSE, 1194 JOSE). Therefore, if confidentiality protection is omitted or 1195 unavailable, the protocols that convey Evidence or Attestation 1196 Results are responsible for detailing what kinds of information are 1197 disclosed, and to whom they are exposed. 1199 Furthermore, because Evidence might contain sensitive information, 1200 Attesters are responsible for only sending such Evidence to trusted 1201 Verifiers. Some Attesters might want a stronger level of assurance 1202 of the trustworthiness of a Verifier before sending Evidence to it.
1203 In such cases, an Attester can first act as a Relying Party and ask 1204 for the Verifier's own Attestation Result, and appraise it just as 1205 a Relying Party would appraise an Attestation Result for any other 1206 purpose. 1208 12. Security Considerations 1210 Any solution that conveys information used for security purposes, 1211 whether such information is in the form of Evidence, Attestation 1212 Results, Endorsements, or Appraisal Policy, must support end-to-end 1213 integrity protection and replay attack prevention, and often also 1214 needs to support additional security properties, including: 1216 * end-to-end encryption, 1218 * denial of service protection, 1220 * authentication, 1222 * auditing, 1224 * fine-grained access controls, and 1226 * logging. 1228 Section 10 discusses ways in which freshness can be used in this 1229 architecture to protect against replay attacks. 1231 To assess the security provided by a particular Appraisal Policy, it 1232 is important to understand the strength of the Root of Trust, e.g., 1233 whether it is mutable software, or firmware that is read-only after 1234 boot, or immutable hardware/ROM. 1236 It is also important that the Appraisal Policy was itself obtained 1237 securely. As such, if Appraisal Policies for a Relying Party or for 1238 a Verifier can be configured via a network protocol, the ability to 1239 create Evidence about the integrity of the entity providing the 1240 Appraisal Policy needs to be considered. 1242 The security of conveyed information may be applied at different 1243 layers, whether by a conveyance protocol, or an information encoding 1244 format. This architecture expects that attestation messages (i.e., 1245 Evidence, Attestation Results, Endorsements and Policies) be end-to- 1246 end protected based on the role interaction context.
For example, if 1247 an Attester produces Evidence that is relayed through some other 1248 entity that doesn't implement the Attester or the intended Verifier 1249 roles, then the relaying entity should not expect to have access to 1250 the Evidence. 1252 13. IANA Considerations 1254 This document does not require any actions by IANA. 1256 14. Acknowledgments 1258 Special thanks go to Joerg Borchert, Nancy Cam-Winget, Jessica 1259 Fitzgerald-McKay, Thomas Fossati, Diego Lopez, Laurence Lundblade, 1260 Paul Rowe, Hannes Tschofenig, Frank Xia, and David Wooten. 1262 15. Contributors 1264 Thomas Hardjono created older versions of the terminology section in 1265 collaboration with Ned Smith. Eric Voit provided the conceptual 1266 separation between Attestation Provision Flows and Attestation 1267 Evidence Flows. Monty Wiseman created the content structure of the 1268 first three architecture drafts. Carsten Bormann provided many of 1269 the motivational building blocks with respect to the Internet Threat 1270 Model. 1272 16. Appendix A: Time Considerations 1274 The table below defines a number of relevant events, with an ID that 1275 is used in subsequent diagrams. The times of said events might be 1276 defined in terms of an absolute clock time such as Coordinated 1277 Universal Time, or might be defined relative to some other timestamp 1278 or timeticks counter. 1280 +====+==============+===============================================+ 1281 | ID | Event | Explanation of event | 1282 +====+==============+===============================================+ 1283 | VG | Value | A value to appear in a Claim was | 1284 | | generation | created. | 1285 +----+--------------+-----------------------------------------------+ 1286 | AA | Attester | An Attesting Environment starts to | 1287 | | awareness | be aware of a new/changed Claim | 1288 | | | value.
| 1289 +----+--------------+-----------------------------------------------+ 1290 | HD | Handle | A centrally generated identifier for | 1291 | | distribution | time-bound recentness across a | 1292 | | | domain of devices is successfully | 1293 | | | distributed to Attesters. | 1294 +----+--------------+-----------------------------------------------+ 1295 | NS | Nonce sent | A nonce not predictable to an | 1296 | | | Attester (recentness & uniqueness) | 1297 | | | is sent to an Attester. | 1298 +----+--------------+-----------------------------------------------+ 1299 | NR | Nonce | A nonce is relayed to an Attester by | 1300 | | relayed | another entity. | 1301 +----+--------------+-----------------------------------------------+ 1302 | EG | Evidence | An Attester creates Evidence from | 1303 | | generation | collected Claims. | 1304 +----+--------------+-----------------------------------------------+ 1305 | ER | Evidence | A Relying Party relays Evidence to a | 1306 | | relayed | Verifier. | 1307 +----+--------------+-----------------------------------------------+ 1308 | RG | Result | A Verifier appraises Evidence and | 1309 | | generation | generates an Attestation Result. | 1310 +----+--------------+-----------------------------------------------+ 1311 | RR | Result | An Attestation Result is relayed to | 1312 | | relayed | a Relying Party by another | 1313 | | | entity. | 1314 +----+--------------+-----------------------------------------------+ 1315 | RA | Result | The Relying Party appraises | 1316 | | appraised | Attestation Results. | 1317 +----+--------------+-----------------------------------------------+ 1318 | OP | Operation | The Relying Party performs some | 1319 | | performed | operation requested by the Attester. | 1320 | | | For example, acting upon some | 1321 | | | message just received across a | 1322 | | | session created earlier at time(RA).
| 1323 +----+--------------+-----------------------------------------------+ 1324 | RX | Result | An Attestation Result should no | 1325 | | expiry | longer be accepted, according to the | 1326 | | | Verifier that generated it. | 1327 +----+--------------+-----------------------------------------------+ 1329 Table 1 1331 Using the table above, a number of hypothetical examples of how a 1332 solution might be built are illustrated below. a solution might be 1333 built. This list is not intended to be complete, but is just 1334 representative enough to highlight various timing considerations. 1336 16.1. Example 1: Timestamp-based Passport Model Example 1338 The following example illustrates a hypothetical Passport Model 1339 solution that uses timestamps and requires roughly synchronized 1340 clocks between the Attester, Verifier, and Relying Party, which 1341 depends on using a secure clock synchronization mechanism. 1343 .----------. .----------. .---------------. 1344 | Attester | | Verifier | | Relying Party | 1345 '----------' '----------' '---------------' 1346 time(VG) | | 1347 | | | 1348 ~ ~ ~ 1349 | | | 1350 time(EG) | | 1351 |------Evidence{time(EG)}-------->| | 1352 | time(RG) | 1353 |<-----Attestation Result---------| | 1354 | {time(RG),time(RX)} | | 1355 ~ ~ 1356 | | 1357 |------Attestation Result{time(RG),time(RX)}-->time(RA) 1358 | | 1359 ~ ~ 1360 | | 1361 | time(OP) 1362 | | 1364 The Verifier can check whether the Evidence is fresh when appraising 1365 it at time(RG) by checking "time(RG) - time(EG) < Threshold", where 1366 the Verifier's threshold is large enough to account for the maximum 1367 permitted clock skew between the Verifier and the Attester. 
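As a non-normative illustration, the Verifier's check could be sketched as follows.  The function and constant names are invented for this example, and the numeric values are arbitrary policy choices, not values defined by this architecture:

```python
# Hypothetical sketch of the Verifier's freshness check in Example 1.
# All times are seconds on roughly synchronized clocks.

MAX_CLOCK_SKEW = 60.0     # assumed maximum permitted Attester/Verifier skew
MAX_EVIDENCE_AGE = 300.0  # assumed policy limit on the age of Evidence

def is_evidence_fresh(time_eg: float, time_rg: float) -> bool:
    """Accept Evidence only if time(RG) - time(EG) < Threshold, where
    the threshold is large enough to absorb the permitted clock skew."""
    threshold = MAX_EVIDENCE_AGE + MAX_CLOCK_SKEW
    return time_rg - time_eg < threshold

# Evidence generated 200 s before appraisal is fresh ...
print(is_evidence_fresh(1000.0, 1200.0))   # True
# ... but Evidence generated 400 s before appraisal is not.
print(is_evidence_fresh(1000.0, 1400.0))   # False
```

A real Verifier would derive the threshold from its appraisal policy; the point here is only that the threshold must exceed the permitted clock skew.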
1369 If time(VG) is also included in the Evidence along with the claim
1370 value generated at that time, and the Verifier decides that it can
1371 trust the time(VG) value, the Verifier can also determine whether the
1372 claim value is recent by checking "time(RG) - time(VG) < Threshold",
1373 again where the threshold is large enough to account for the maximum
1374 permitted clock skew between the Verifier and the Attester.

1376 The Relying Party can check whether the Attestation Result is fresh
1377 when appraising it at time(RA) by checking "time(RA) - time(RG) <
1378 Threshold", where the Relying Party's threshold is large enough to
1379 account for the maximum permitted clock skew between the Relying
1380 Party and the Verifier.  The result might then be used for some time
1381 (e.g., throughout the lifetime of a connection established at
1382 time(RA)).  The Relying Party must be careful, however, to not allow
1383 continued use beyond the period for which it deems the Attestation
1384 Result to remain fresh enough.  Thus, it might allow use (at
1385 time(OP)) as long as "time(OP) - time(RG) < Threshold".  However, if
1386 the Attestation Result contains an expiry time time(RX) then it could
1387 explicitly check "time(OP) < time(RX)".

1389 16.2.  Example 2: Nonce-based Passport Model Example

1391 The following example illustrates a hypothetical Passport Model
1392 solution that uses nonces and thus does not require that any clocks
1393 are synchronized.

1395 .----------.                     .----------.  .---------------.
1396 | Attester |                     | Verifier |  | Relying Party |
1397 '----------'                     '----------'  '---------------'
1398  time(VG)                             |                 |
1399      |                                |                 |
1400      ~                                ~                 ~
1401      |                                |                 |
1402      |<---Nonce1---------------time(NS)                 |
1403  time(EG)                             |                 |
1404      |----Evidence-------------------->|                |
1405      |    {Nonce1, time(EG)-time(VG)} |                 |
1406      |                            time(RG)              |
1407      |<---Attestation Result----------|                 |
1408      |    {time(RX)-time(RG)}         |                 |
1409      ~                                                  ~
1410      |                                                  |
1411      |<---Nonce2-------------------------------time(NS')
1412  time(RR)
1413      |----Attestation Result{time(RX)-time(RG)}---->time(RA)
1414      |        Nonce2, time(RR)-time(EG)                |
1415      ~                                                  ~
1416      |                                                  |
1417      |                                              time(OP)

1419 In this example solution, the Verifier can check whether the Evidence
1420 is fresh at time(RG) by verifying that "time(RG) - time(NS) <
1421 Threshold".

1423 The Verifier cannot, however, simply rely on a Nonce to determine
1424 whether the value of a claim is recent, since the claim value might
1425 have been generated long before the nonce was sent by the Verifier.
1426 However, if the Verifier decides that the Attester can be trusted to
1427 correctly provide the delta time(EG)-time(VG), then it can determine
1428 recency by checking "time(RG)-time(NS) + time(EG)-time(VG) <
1429 Threshold".

1431 Similarly if, based on an Attestation Result from a Verifier it
1432 trusts, the Relying Party decides that the Attester can be trusted to
1433 correctly provide time deltas, then it can determine whether the
1434 Attestation Result is fresh by checking "time(OP) - time(NS') +
1435 time(RR)-time(EG) < Threshold".  Although the Nonce2 and time(RR)-
1436 time(EG) values cannot be inside the Attestation Result, they might
1437 be signed by the Attester such that the Attestation Result vouches
1438 for the Attester's signing capability.

1440 The Relying Party must still be careful, however, to not allow
1441 continued use beyond the period for which it deems the Attestation
1442 Result to remain valid.
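As a non-normative sketch, the nonce-based checks of this example might look as follows.  All names are invented for illustration; the deltas are the Attester-reported values time(EG)-time(VG) and time(RR)-time(EG), which are only meaningful if the appraiser trusts the Attester to report them honestly, and the threshold is an arbitrary policy choice:

```python
# Hypothetical sketch of the freshness/recency checks in Example 2.
# All values are in seconds; no clock synchronization is assumed, since
# each party only compares its own clock readings and reported deltas.

THRESHOLD = 300.0  # assumed appraisal policy limit

def verifier_evidence_fresh(time_ns: float, time_rg: float) -> bool:
    """Verifier: time(RG) - time(NS) < Threshold."""
    return time_rg - time_ns < THRESHOLD

def verifier_claim_recent(time_ns: float, time_rg: float,
                          delta_eg_vg: float) -> bool:
    """Verifier: time(RG) - time(NS) + (time(EG) - time(VG)) < Threshold,
    trusting the Attester-reported delta time(EG)-time(VG)."""
    return (time_rg - time_ns) + delta_eg_vg < THRESHOLD

def relying_party_result_fresh(time_ns2: float, time_op: float,
                               delta_rr_eg: float) -> bool:
    """Relying Party: time(OP) - time(NS') + (time(RR) - time(EG)) <
    Threshold, trusting the Attester-reported delta time(RR)-time(EG)."""
    return (time_op - time_ns2) + delta_rr_eg < THRESHOLD
```

Note that each comparison uses a single party's own clock for both readings, which is why synchronized clocks are unnecessary in this model.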
Thus, if the Attestation Result contains a
1443 validity lifetime in terms of time(RX)-time(RG), then the Relying
1444 Party can check "time(OP) - time(NS') < time(RX)-time(RG)".

1446 16.3.  Example 3: Timestamp-based Background-Check Model Example

1448 The following example illustrates a hypothetical Background-Check
1449 Model solution that uses timestamps and requires roughly synchronized
1450 clocks between the Attester, Verifier, and Relying Party.

1452 .----------.         .---------------.             .----------.
1453 | Attester |         | Relying Party |             | Verifier |
1454 '----------'         '---------------'             '----------'
1455  time(VG)                    |                          |
1456      |                       |                          |
1457      ~                       ~                          ~
1458      |                       |                          |
1459  time(EG)                    |                          |
1460      |----Evidence------->|                             |
1461      |    {time(EG)}   time(ER)--Evidence{time(EG)}-->|
1462      |                       |                      time(RG)
1463      |               time(RA)<-Attestation Result-------|
1464      |                       |  {time(RX)}              |
1465      ~                       ~                          ~
1466      |                       |                          |
1467      |                   time(OP)                       |

1469 The time considerations in this example are equivalent to those
1470 discussed under Example 1 above.

1472 16.4.  Example 4: Nonce-based Background-Check Model Example

1474 The following example illustrates a hypothetical Background-Check
1475 Model solution that uses nonces and thus does not require that any
1476 clocks are synchronized.  In this example solution, a nonce is
1477 generated by a Verifier at the request of a Relying Party, when the
1478 Relying Party needs to send one to an Attester.

1480 .----------.         .---------------.             .----------.
1481 | Attester |         | Relying Party |             | Verifier |
1482 '----------'         '---------------'             '----------'
1483  time(VG)                    |                          |
1484      |                       |                          |
1485      ~                       ~                          ~
1486      |                       |                          |
1487      |                       |<-----Nonce-----------time(NS)
1488      |<---Nonce----------time(NR)                       |
1489  time(EG)                    |                          |
1490      |----Evidence{Nonce}--->|                          |
1491      |                   time(ER)--Evidence{Nonce}----->|
1492      |                       |                      time(RG)
1493      |               time(RA)<-Attestation Result-------|
1494      |                       |  {time(RX)-time(RG)}     |
1495      ~                       ~                          ~
1496      |                       |                          |
1497      |                   time(OP)                       |

1499 The Verifier can check whether the Evidence is fresh, and whether a
1500 claim value is recent, the same as in Example 2 above.

1502 However, unlike in Example 2, the Relying Party can use the Nonce to
1503 determine whether the Attestation Result is fresh, by verifying that
1504 "time(OP) - time(NR) < Threshold".

1506 The Relying Party must still be careful, however, to not allow
1507 continued use beyond the period for which it deems the Attestation
1508 Result to remain valid.  Thus, if the Attestation Result contains a
1509 validity lifetime in terms of time(RX)-time(RG), then the Relying
1510 Party can check "time(OP) - time(ER) < time(RX)-time(RG)".

1512 17.  References

1514 17.1.  Normative References

1516 [RFC7519]  Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token
1517            (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015,
1518            <https://www.rfc-editor.org/info/rfc7519>.

1520 [RFC8392]  Jones, M., Wahlstroem, E., Erdtman, S., and H. Tschofenig,
1521            "CBOR Web Token (CWT)", RFC 8392, DOI 10.17487/RFC8392,
1522            May 2018, <https://www.rfc-editor.org/info/rfc8392>.

1524 17.2.  Informative References

1526 [RFC4949]  Shirey, R., "Internet Security Glossary, Version 2",
1527            FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007,
1528            <https://www.rfc-editor.org/info/rfc4949>.

1530 [RFC8322]  Field, J., Banghart, S., and D. Waltermire, "Resource-
1531            Oriented Lightweight Information Exchange (ROLIE)",
1532            RFC 8322, DOI 10.17487/RFC8322, February 2018,
1533            <https://www.rfc-editor.org/info/rfc8322>.

1535 [OPCUA]    OPC Foundation, "OPC Unified Architecture Specification,
1536            Part 2: Security Model, Release 1.03", OPC 10000-2, 25
1537            November 2015.
1541 [I-D.birkholz-rats-tuda]
1542            Fuchs, A., Birkholz, H., McDonald, I., and C. Bormann,
1543            "Time-Based Uni-Directional Attestation", Work in
1544            Progress, Internet-Draft, draft-birkholz-rats-tuda-02, 9
1545            March 2020.

1548 [I-D.ietf-teep-architecture]
1549            Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler,
1550            "Trusted Execution Environment Provisioning (TEEP)
1551            Architecture", Work in Progress, Internet-Draft, draft-
1552            ietf-teep-architecture-11, 2 July 2020.

1556 [TCGarch]  Trusted Computing Group, "Trusted Platform Module Library
1557            - Part 1: Architecture", 7 July 2020.

1561 Authors' Addresses

1563 Henk Birkholz
1564 Fraunhofer SIT
1565 Rheinstrasse 75
1566 64295 Darmstadt
1567 Germany

1569 Email: henk.birkholz@sit.fraunhofer.de
1570 Dave Thaler
1571 Microsoft
1572 United States of America

1574 Email: dthaler@microsoft.com

1576 Michael Richardson
1577 Sandelman Software Works
1578 Canada

1580 Email: mcr+ietf@sandelman.ca

1582 Ned Smith
1583 Intel Corporation
1584 United States of America

1586 Email: ned.smith@intel.com

1588 Wei Pan
1589 Huawei Technologies

1591 Email: william.panwei@huawei.com