idnits 2.17.1 draft-ietf-rats-architecture-03.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Line 481 has weird spacing: '... claims v ...' -- The document date (21 May 2020) is 1429 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Outdated reference: A later version (-07) exists of draft-birkholz-rats-tuda-02 == Outdated reference: A later version (-19) exists of draft-ietf-teep-architecture-08 Summary: 0 errors (**), 0 flaws (~~), 4 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 RATS Working Group H. Birkholz 3 Internet-Draft Fraunhofer SIT 4 Intended status: Informational D. Thaler 5 Expires: 22 November 2020 Microsoft 6 M. Richardson 7 Sandelman Software Works 8 N. Smith 9 Intel 10 W. Pan 11 Huawei Technologies 12 21 May 2020 14 Remote Attestation Procedures Architecture 15 draft-ietf-rats-architecture-03 17 Abstract 19 In network protocol exchanges, it is often the case that one entity 20 (a Relying Party) requires evidence about a remote peer to assess the 21 peer's trustworthiness, and a way to appraise such evidence. The 22 evidence is typically a set of claims about its software and hardware 23 platform. This document describes an architecture for such remote 24 attestation procedures (RATS). 26 Note to Readers 28 Discussion of this document takes place on the RATS Working Group 29 mailing list (rats@ietf.org), which is archived at 30 https://mailarchive.ietf.org/arch/browse/rats/ 31 (https://mailarchive.ietf.org/arch/browse/rats/). 33 Source for this draft and an issue tracker can be found at 34 https://github.com/ietf-rats-wg/architecture (https://github.com/ 35 ietf-rats-wg/architecture). 37 Status of This Memo 39 This Internet-Draft is submitted in full conformance with the 40 provisions of BCP 78 and BCP 79. 42 Internet-Drafts are working documents of the Internet Engineering 43 Task Force (IETF). Note that other groups may also distribute 44 working documents as Internet-Drafts. The list of current Internet- 45 Drafts is at https://datatracker.ietf.org/drafts/current/. 47 Internet-Drafts are draft documents valid for a maximum of six months 48 and may be updated, replaced, or obsoleted by other documents at any 49 time. It is inappropriate to use Internet-Drafts as reference 50 material or to cite them other than as "work in progress." 52 This Internet-Draft will expire on 22 November 2020. 54 Copyright Notice 56 Copyright (c) 2020 IETF Trust and the persons identified as the 57 document authors. All rights reserved. 
59 This document is subject to BCP 78 and the IETF Trust's Legal 60 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 61 license-info) in effect on the date of publication of this document. 62 Please review these documents carefully, as they describe your rights 63 and restrictions with respect to this document. Code Components 64 extracted from this document must include Simplified BSD License text 65 as described in Section 4.e of the Trust Legal Provisions and are 66 provided without warranty as described in the Simplified BSD License. 68 Table of Contents 70 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 71 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 72 3. Reference Use Cases . . . . . . . . . . . . . . . . . . . . . 5 73 3.1. Network Endpoint Assessment . . . . . . . . . . . . . . . 5 74 3.2. Confidential Machine Learning (ML) Model Protection . . . 5 75 3.3. Confidential Data Retrieval . . . . . . . . . . . . . . . 6 76 3.4. Critical Infrastructure Control . . . . . . . . . . . . . 6 77 3.5. Trusted Execution Environment (TEE) Provisioning . . . . 7 78 3.6. Hardware Watchdog . . . . . . . . . . . . . . . . . . . . 7 79 4. Architectural Overview . . . . . . . . . . . . . . . . . . . 7 80 4.1. Appraisal Policies . . . . . . . . . . . . . . . . . . . 9 81 4.2. Two Types of Environments of an Attester . . . . . . . . 9 82 4.3. Layered Attestation Environments . . . . . . . . . . . . 10 83 4.4. Composite Device . . . . . . . . . . . . . . . . . . . . 12 84 5. Topological Models . . . . . . . . . . . . . . . . . . . . . 14 85 5.1. Passport Model . . . . . . . . . . . . . . . . . . . . . 15 86 5.2. Background-Check Model . . . . . . . . . . . . . . . . . 16 87 5.3. Combinations . . . . . . . . . . . . . . . . . . . . . . 17 88 6. Roles and Entities . . . . . . . . . . . . . . . . . . . . . 18 89 7. Role Hosting and Composition . . . . . . . . . . . . . . . . 19 90 8. Trust Model . . . . . . . . . . . . . . . . . . . . . . . . . 20 91 9. Conceptual Messages . . . . . . . . . . . . . . . . . . . . . 21 92 9.1. Evidence . . . . . . . . . . . . . . . . . . . . . . . . 21 93 9.2. Endorsements . . . . . . . . . . . . . . . . . . . . . . 21 94 9.3. Attestation Results . . . . . . . . . . . . . . . . . . . 22 96 10. Claims Encoding Formats . . . . . . . . . . . . . . . . . . . 23 97 11. Freshness . . . . . . . . . . . . . . . . . . . . . . . . . . 24 98 12. Privacy Considerations . . . . . . . . . . . . . . . . . . . 25 99 13. Security Considerations . . . . . . . . . . . . . . . . . . . 26 100 14. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 26 101 15. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 26 102 16. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 27 103 17. Appendix A: Time Considerations . . . . . . . . . . . . . . . 27 104 17.1. Example 1: Timestamp-based Passport Model Example . . . 29 105 17.2. Example 2: Nonce-based Passport Model Example . . . . . 30 106 17.3. Example 3: Timestamp-based Background-Check Model 107 Example . . . . . . . . . . . . . . . . . . . . . . . . 31 108 17.4. Example 4: Nonce-based Background-Check Model Example . 31 109 18. References . . . . . . . . . . . . . . . . . . . . . . . . . 32 110 18.1. Normative References . . . . . . . . . . . . . . . . . . 32 111 18.2. Informative References . . . . . . . . . . . . . . . . . 32 112 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 33 114 1. 
Introduction 116 In Remote Attestation Procedures (RATS), one peer (the "Attester") 117 produces believable information about itself - Evidence - to enable a 118 remote peer (the "Relying Party") to decide whether to consider that 119 Attester a trustworthy peer or not. RATS are facilitated by an 120 additional vital party, the Verifier. 122 This document defines a flexible architecture consisting of 123 attestation roles and their interactions via conceptual messages. 124 Additionally, this document defines a universal set of terms that can 125 be mapped to various existing and emerging Remote Attestation 126 Procedures. Common topological models and the data flows associated 127 with them, such as the "Passport Model" and the "Background-Check 128 Model", are illustrated. The purpose is to enable readers to map 129 their solution architecture to the canonical attestation architecture 130 provided here and to define useful terminology for attestation. 131 Having a common terminology that provides well-understood meanings 132 for common themes such as roles, device composition, topological 133 models, and appraisal is vital for semantic interoperability across 134 solutions and platforms involving multiple vendors and providers. 136 Amongst other things, this document is about trust and 137 trustworthiness. Trust is a decision that is made. Trustworthiness is 138 a quality that is assessed via the Evidence that is created. This is a subtle 139 difference, and being familiar with the difference is crucial for 140 using this document. Additionally, the concepts of freshness and 141 trust relationships with respect to RATS are elaborated on to enable 142 implementers to choose appropriate solutions to compose 143 their Remote Attestation Procedures. 145 2. Terminology 147 This document uses the following terms. 149 Appraisal Policy for Evidence: A set of rules that direct how a 150 Verifier evaluates the validity of information about an Attester. 151 Compare /security policy/ in [RFC4949] 153 Appraisal Policy for Attestation Result: A set of rules that direct 154 how a Relying Party uses the Attestation Results regarding an 155 Attester generated by the Verifiers. Compare /security policy/ in 156 [RFC4949] 158 Attestation Result: The output generated by a Verifier, typically 159 including information about an Attester, where the Verifier 160 vouches for the validity of the Evidence it has appraised 162 Attester: An entity (typically a device) whose Evidence must be 163 appraised in order to infer the extent to which the Attester is 164 considered trustworthy, such as when deciding whether it is 165 authorized to perform some operation 167 Claim: A piece of asserted information, often in the form of a name/ 168 value pair. (Compare /claim/ in [RFC7519]) 170 Endorsement: Statements that an Endorser (typically a 171 manufacturer) makes that vouch for the design and implementation of 172 the Attester. Often this includes statements about the integrity 173 of an Attester's signing capability 175 Endorser: An entity (typically a manufacturer) whose Endorsements 176 help Verifiers appraise the authenticity of Evidence 178 Evidence: A set of information that asserts the trustworthiness 179 status of an Attester and that is appraised by a Verifier 181 Relying Party: An entity that depends on the validity of 182 information about an Attester for purposes of reliably applying 183 application-specific actions.
Compare /relying party/ in 184 [RFC4949] 186 Relying Party Owner: An entity (typically an administrator) that is 187 authorized to configure Appraisal Policy for Attestation Results 188 in a Relying Party 190 Verifier: An entity (typically a service) that appraises the 191 validity of Evidence about an Attester and produces Attestation 192 Results to be used by a Relying Party 194 Verifier Owner: An entity (typically an administrator) that is 195 authorized to configure Appraisal Policy for Evidence in a 196 Verifier 198 3. Reference Use Cases 200 This section covers a number of representative use cases for remote 201 attestation, independent of specific solutions. The purpose is to 202 provide motivation for various aspects of the architecture presented 203 in this draft. Many other use cases exist, and this document does 204 not intend to have a complete list, only to have a set of use cases 205 that collectively cover all the functionality required in the 206 architecture. 208 Each use case includes a description and a summary of what an 209 Attester and a Relying Party refer to in the use case. 211 3.1. Network Endpoint Assessment 213 Network operators want a trustworthy report of identity and version 214 information of the hardware and software on the machines attached 215 to their network, for purposes such as inventory, auditing, and/or 216 logging. The network operator may also want a policy by which full 217 access is only granted to devices that meet some definition of 218 health, and so wants to get claims about such information and verify 219 their validity. Remote attestation is desired to prevent vulnerable 220 or compromised devices from getting access to the network and 221 potentially harming others. 223 Typically, solutions start with a specific component (called a "Root 224 of Trust") that provides device identity and protected storage for 225 measurements. These components perform a series of measurements and 226 express this as Evidence about the hardware and firmware/software 227 that is running. 229 Attester: A device desiring access to a network 231 Relying Party: A network infrastructure device such as a router, 232 switch, or access point 234 3.2. Confidential Machine Learning (ML) Model Protection 236 A device manufacturer wants to protect the intellectual property of 237 the ML model it developed, which runs in the devices that 238 its customers purchased, and it wants to prevent attackers, 239 potentially including the customers themselves, from seeing the 240 details of the model. 242 This typically works by having some protected environment in the 243 device attest to some manufacturer service. If remote attestation 244 succeeds, then the manufacturer service releases either the model, or 245 a key to decrypt a model the Attester already has in encrypted form, 246 to the requester. 248 Attester: A device desiring to run an ML model to do inferencing 250 Relying Party: A server or service holding ML models it desires to 251 protect 253 3.3. Confidential Data Retrieval 255 This is a generalization of the ML model use case above, where the 256 data can be any highly confidential data, such as health data about 257 customers, payroll data about employees, future business plans, etc. 258 Attestation is desired to prevent leaking data to compromised 259 devices. 261 Attester: An entity desiring to retrieve confidential data 263 Relying Party: An entity that holds confidential data for retrieval 264 by other entities 266 3.4.
Critical Infrastructure Control 268 In this use case, potentially dangerous physical equipment (e.g., 269 power grid, traffic control, hazardous chemical processing, etc.) is 270 connected to a network. The organization managing such 271 infrastructure needs to ensure that only authorized code and users 272 can control such processes and that they are protected from malware or 273 other adversaries. When a protocol operation can affect some 274 critical system, the device attached to the critical equipment thus 275 wants some assurance that the requester has not been compromised. As 276 such, remote attestation can be used to only accept commands from 277 requesters that are within policy. 279 Attester: A device or application wishing to control physical 280 equipment 282 Relying Party: A device or application connected to potentially 283 dangerous physical equipment (hazardous chemical processing, 284 traffic control, power grid, etc.) 286 3.5. Trusted Execution Environment (TEE) Provisioning 288 A "Trusted Application Manager (TAM)" server is responsible for 289 managing the applications running in the TEE of a client device. To 290 do this, the TAM wants to assess the state of a TEE, or of 291 applications in the TEE, of a client device. The TEE attests to the 292 TAM, which can then decide whether the TEE is already in compliance 293 with the TAM's latest policy, or if the TAM needs to uninstall, 294 update, or install approved applications in the TEE to bring it back 295 into compliance with the TAM's policy. 297 Attester: A device with a trusted execution environment capable of 298 running trusted applications that can be updated 300 Relying Party: A Trusted Application Manager 302 3.6. Hardware Watchdog 304 One significant problem is malware that holds a device hostage and 305 does not allow it to reboot, in order to prevent updates from being applied. This 306 is a serious problem because it allows a fleet of devices to be 307 held hostage for ransom. 309 A hardware watchdog can be implemented by forcing a reboot unless 310 remote attestation to a server succeeds within a periodic interval, 311 and having the reboot do remediation by bringing a device into 312 compliance, including installation of patches as needed. 314 Attester: The device that should be kept from being held hostage 315 for a long period of time 317 Relying Party: A remote server that will securely grant the Attester 318 permission to continue operating (i.e., not reboot) for a period 319 of time 321 4. Architectural Overview 323 Figure 1 depicts the data that flows between different roles, 324 independent of protocol or use case. 326 ************ ************ **************** 327 * Endorser * * Verifier * * Relying Party* 328 ************ * Owner * * Owner * 329 | ************ **************** 330 | | | 331 Endorsements| | | 332 | |Appraisal | 333 | |Policy | 334 | |for | Appraisal 335 | |Evidence | Policy for 336 | | | Attestation 337 | | | Result 338 v v | 339 .-----------------. | 340 .----->| Verifier |------. | 341 | '-----------------' | | 342 | | | 343 | Attestation| | 344 | Results | | 345 | Evidence | | 346 | | | 347 | v v 348 .----------. .-----------------. 349 | Attester | | Relying Party | 350 '----------' '-----------------' 352 Figure 1: Conceptual Data Flow 354 An Attester creates Evidence that is conveyed to a Verifier.
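As a non-normative aid to reading Figure 1, the conceptual messages and roles might be modeled as simple data structures and functions, as in the sketch below. All type, field, and function names are hypothetical and chosen only for readability; they are not defined by this architecture, and real implementations would use protocol-specific formats and real signatures.

   # Minimal, hypothetical sketch of the Figure 1 data flow (Python).
   from dataclasses import dataclass
   from typing import Callable, Dict, List

   @dataclass
   class Evidence:               # created by an Attester
       claims: Dict[str, str]
       signature: bytes

   @dataclass
   class Endorsement:            # provided by an Endorser
       statement: Dict[str, str]

   @dataclass
   class AttestationResult:      # generated by a Verifier
       verdict: bool
       claims: Dict[str, str]
       signature: bytes

   def verifier(evidence: Evidence,
                endorsements: List[Endorsement],
                appraisal_policy_for_evidence: Callable) -> AttestationResult:
       # Appraise the Evidence using the Endorsements and the
       # Appraisal Policy for Evidence, then vouch for the outcome.
       ok = appraisal_policy_for_evidence(evidence, endorsements)
       return AttestationResult(verdict=ok, claims=evidence.claims,
                                signature=b"placeholder-verifier-signature")

   def relying_party(result: AttestationResult,
                     appraisal_policy_for_results: Callable) -> bool:
       # Apply the Appraisal Policy for Attestation Results to make an
       # application-specific decision (e.g., an authorization decision).
       return result.verdict and appraisal_policy_for_results(result)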
356 The Verifier uses the Evidence, and any Endorsements from Endorsers, 357 by applying an Evidence Appraisal Policy to assess the 358 trustworthiness of the Attester, and generates Attestation Results 359 for use by Relying Parties. The Evidence Appraisal Policy might be 360 obtained from an Endorser along with the Endorsements, or might be 361 obtained via some other mechanism such as being configured in the 362 Verifier by an administrator. 364 The Relying Party uses Attestation Results by applying its own 365 Appraisal Policy to make application-specific decisions such as 366 authorization decisions. The Attestation Result Appraisal Policy 367 might, for example, be configured in the Relying Party by an 368 administrator. 370 4.1. Appraisal Policies 372 The Verifier, when appraising Evidence, or the Relying Party, when 373 appraising Attestation Results, checks the values of some claims 374 against constraints specified in its Appraisal Policy. Such 375 constraints might involve a comparison for equality against a 376 reference value, or a check for being in a range bounded by reference 377 values, or membership in a set of reference values, or a check 378 against values in other claims, or any other test. 380 Such reference values might be specified as part of the Appraisal 381 Policy itself, or might be obtained from a separate source, such as 382 an Endorsement, and then used by the Appraisal Policy. 384 The actual data format and semantics of any reference values are 385 specific to claims and implementations. This architecture document 386 does not define any general purpose format for them or general means 387 for comparison. 389 4.2. Two Types of Environments of an Attester 391 An Attester consists of at least one Attesting Environment and at 392 least one Target Environment. In some implementations, the Attesting 393 and Target Environments might be combined. Other implementations 394 might have multiple Attesting and Target Environments, such as in the 395 examples described in more detail in Section 4.3 and Section 4.4. 396 Other examples may exist, and the examples discussed could even be 397 combined into even more complex implementations. 399 Claims are collected from Target Environments, as shown in Figure 2. 400 That is, Attesting Environments collect the raw values and the 401 information to be represented in claims, such as by doing some 402 measurement of a Target Environment's code, memory, and/or registers. 403 Attesting Environments then format the claims appropriately, and 404 typically use key material and cryptographic functions, such as 405 signing or cipher algorithms, to create Evidence. Places that 406 Attesting Environments can exist include Trusted Execution 407 Environments (TEE), embedded Secure Elements (eSE), and BIOS 408 firmware. An execution environment may not, by default, be capable 409 of claims collection for a given Target Environment. Attesting 410 Environments are designed specifically with claims collection in 411 mind. 413 .--------------------------------. 414 | | 415 | Verifier | 416 | | 417 '--------------------------------' 418 ^ 419 | 420 .-------------------------|----------. 421 | | | 422 | .----------------. | | 423 | | Target | | | 424 | | Environment | | | 425 | | | | Evidence | 426 | '----------------' | | 427 | | | | 428 | | | | 429 | Collect | | | 430 | Claims | | | 431 | | | | 432 | v | | 433 | .-------------. 
| 434 | | Attesting | | 435 | | Environment | | 436 | | | | 437 | '-------------' | 438 | Attester | 439 '------------------------------------' 441 Figure 2: Two Types of Environments 443 4.3. Layered Attestation Environments 445 By definition, the Attester role takes on the duty to create 446 Evidence. The fact that an Attester role can be composed of environments 447 that are nested or staged adds complexity to the architectural 448 layout of how an Attester is composed and therefore to how it has to 449 conduct the Claims collection in order to create believable 450 attestation Evidence. 452 Figure 3 depicts an example of a device that includes (A) a BIOS 453 stored in read-only memory in this example, (B) an updatable 454 bootloader, and (C) an operating system kernel. 456 .----------. .----------. 457 | | | | 458 | Endorser |------------------->| Verifier | 459 | | Endorsements | | 460 '----------' for A, B, and C '----------' 461 ^ 462 .------------------------------------. | 463 | | | 464 | .---------------------------. | | 465 | | Target | | | Layered 466 | | Environment | | | Evidence 467 | | C | | | for 468 | '---------------------------' | | B and C 469 | Collect | | | 470 | claims | | | 471 | .---------------|-----------. | | 472 | | Target v | | | 473 | | Environment .-----------. | | | 474 | | B | Attesting | | | | 475 | | |Environment|-----------' 476 | | | B | | | 477 | | '-----------' | | 478 | | ^ | | 479 | '---------------------|-----' | 480 | Collect | | Evidence | 481 | claims v | for B | 482 | .-----------. | 483 | | Attesting | | 484 | |Environment| | 485 | | A | | 486 | '-----------' | 487 | | 488 '------------------------------------' 490 Figure 3: Layered Attester 492 Attesting Environment A, the read-only BIOS in this example, has to 493 ensure the integrity of the bootloader (Target Environment B). There 494 are potentially multiple kernels to boot, and the decision is up to 495 the bootloader. Only a bootloader with intact integrity will make an 496 appropriate decision. Therefore, these Claims have to be measured 497 securely. At this stage of the boot-cycle of the device, the Claims 498 collected typically cannot be composed into Evidence. 500 After the boot sequence is started, the BIOS performs the most 501 important and defining step of layered attestation: 502 the successfully measured Target Environment B now becomes (or 503 contains) an Attesting Environment for the next layer. This 504 procedure in Layered Attestation is sometimes called "staging". It 505 is important that the new Attesting Environment B not be able to 506 alter any Claims about its own Target Environment B. This can be 507 ensured by having those Claims be either signed by Attesting Environment 508 A or stored in a tamper-proof manner by Attesting Environment A. 510 Continuing with this example, the bootloader's Attesting Environment 511 B is now in charge of collecting Claims about Target Environment C, 512 which in this example is the kernel to be booted. The final Evidence 513 thus contains two sets of Claims: one set about the bootloader as 514 measured and signed by the BIOS, plus a set of Claims about the 515 kernel as measured and signed by the bootloader. 517 This example could be extended further by, say, making the kernel 518 become another Attesting Environment for an application as another 519 Target Environment, resulting in a third set of Claims in the 520 Evidence pertaining to that application.
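The layered collection just described can be sketched in code form. The sketch below is purely illustrative: sign_with() is a stand-in for whatever signing capability each Attesting Environment actually possesses (for example, a key protected by hardware), and the key, image, and claim names are hypothetical placeholders rather than anything defined by this architecture.

   # Hypothetical sketch of layered Evidence collection (Python).
   import hashlib, hmac, json

   key_a = b"key-available-only-to-environment-A"   # placeholder
   key_b = b"key-available-only-to-environment-B"   # placeholder
   bootloader_image = b"...bootloader bytes..."     # placeholder
   kernel_image = b"...kernel bytes..."             # placeholder

   def measure(image: bytes) -> str:
       # A "measurement" here is simply a digest of the next layer.
       return hashlib.sha256(image).hexdigest()

   def sign_with(key: bytes, claims: dict) -> dict:
       # Stand-in for an Attesting Environment's signing capability.
       payload = json.dumps(claims, sort_keys=True).encode()
       return {"claims": claims,
               "sig": hmac.new(key, payload, "sha256").hexdigest()}

   # Attesting Environment A (the read-only BIOS) measures Target
   # Environment B (the bootloader) before starting it.
   claims_about_b = sign_with(key_a,
                              {"bootloader": measure(bootloader_image)})

   # Once started, B acts as the Attesting Environment for Target
   # Environment C (the kernel) and signs its own set of Claims.
   claims_about_c = sign_with(key_b,
                              {"kernel": measure(kernel_image)})

   # The final Evidence conveyed to the Verifier contains both sets.
   layered_evidence = [claims_about_b, claims_about_c]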
522 The essence of this example is a cascade of staged environments. 523 Each environment has the responsibility of measuring the next 524 environment before the next environment is started. In general, the 525 number of layers may vary by device or implementation, and an 526 Attesting Environment might even have multiple Target Environments 527 that it measures, rather than only one as shown in Figure 3. 529 4.4. Composite Device 531 A Composite Device is an entity composed of multiple sub-entities 532 such that its trustworthiness has to be determined by the appraisal 533 of all these sub-entities. 535 Each sub-entity has at least one Attesting Environment collecting the 536 claims from at least one Target Environment, then this sub-entity 537 generates Evidence about its trustworthiness. Therefore each sub- 538 entity can be called an Attester. Among all the Attesters, there may 539 be only some which have the ability to communicate with the Verifier 540 while others do not. 542 For example, a carrier-grade router consists of a chassis and 543 multiple slots. The trustworthiness of the router depends on all its 544 slots' trustworthiness. Each slot has an Attesting Environment such 545 as a TEE collecting the claims of its boot process, after which it 546 generates Evidence from the claims. Among these slots, only a main 547 slot can communicate with the Verifier while other slots cannot. But 548 other slots can communicate with the main slot by the links between 549 them inside the router. So the main slot collects the Evidence of 550 other slots, produces the final Evidence of the whole router and 551 conveys the final Evidence to the Verifier. Therefore the router is 552 a Composite Device, each slot is an Attester, and the main slot is 553 the lead Attester. 555 Another example is a multi-chassis router composed of multiple single 556 carrier-grade routers. The multi-chassis router provides higher 557 throughput by interconnecting multiple routers and can be logically 558 treated as one router for simpler management. Among these routers, 559 there is only one main router that connects to the Verifier. Other 560 routers are only connected to the main router by the network cables, 561 and therefore they are managed and appraised via this main router's 562 help. So, in this case, the multi-chassis router is the Composite 563 Device, each router is an Attester and the main router is the lead 564 Attester. 566 Figure 4 depicts the conceptual data flow for a Composite Device. 568 .-----------------------------. 569 | Verifier | 570 '-----------------------------' 571 ^ 572 | 573 | Evidence of 574 | Composite Device 575 | 576 .----------------------------------|-------------------------------. 577 | .--------------------------------|-----. .------------. | 578 | | Collect .------------. | | | | 579 | | Claims .--------->| Attesting |<--------| Attester B |-. | 580 | | | |Environment | | '------------. | | 581 | | .----------------. | |<----------| Attester C |-. | 582 | | | Target | | | | '------------' | | 583 | | | Environment(s) | | |<------------| ... 
| | 584 | | | | '------------' | Evidence '------------' | 585 | | '----------------' | of | 586 | | | Attesters | 587 | | lead Attester A | (via Internal Links or | 588 | '--------------------------------------' Network Connections) | 589 | | 590 | Composite Device | 591 '------------------------------------------------------------------' 593 Figure 4: Conceptual Data Flow for a Composite Device 595 In the Composite Device, each Attester generates its own Evidence by 596 its Attesting Environment(s) collecting the claims from its Target 597 Environment(s). The lead Attester collects the Evidence of all other 598 Attesters and then generates the Evidence of the whole Composite 599 Attester. 601 5. Topological Models 603 Figure 1 shows a basic model for communication between an Attester, a 604 Verifier, and a Relying Party. The Attester conveys its Evidence to 605 the Verifier for appraisal, and the Relying Party gets the 606 Attestation Results from the Verifier. There are multiple other 607 possible models. This section includes some reference models, but 608 this is not intended to be a restrictive list, and other variations 609 may exist. 611 5.1. Passport Model 613 In this model, an Attester conveys Evidence to a Verifier, which 614 compares the Evidence against its Appraisal Policy. The Verifier 615 then gives back an Attestation Result. If the Attestation Result was 616 a successful one, the Attester can then present the Attestation 617 Result to a Relying Party, which then compares the Attestation Result 618 against its own Appraisal Policy. 620 There are three ways in which the process may fail. First, the 621 Verifier may refuse to issue the Attestation Result due to some error 622 in processing, or some missing input to the Verifier. The second way 623 in which the process may fail is when the resulting Result is 624 examined by the Relying Party, and based upon the Appraisal Policy, 625 the result does not pass the policy. The third way is when the 626 Verifier is unreachable. 628 Since the resource access protocol between the Attester and Relying 629 Party includes an Attestation Result, in this model the details of 630 that protocol constrain the serialization format of the Attestation 631 Result. The format of the Evidence on the other hand is only 632 constrained by the Attester-Verifier remote attestation protocol. 634 +-------------+ 635 | | Compare Evidence 636 | Verifier | against Appraisal Policy 637 | | 638 +-------------+ 639 ^ | 640 Evidence| |Attestation 641 | | Result 642 | v 643 +----------+ +---------+ 644 | |------------->| |Compare Attestation 645 | Attester | Attestation | Relying | Result against 646 | | Result | Party | Appraisal 647 +----------+ +---------+ Policy 649 Figure 5: Passport Model 651 The passport model is so named because of its resemblance to how 652 nations issue passports to their citizens. The nature of the 653 Evidence that an individual needs to provide to its local authority 654 is specific to the country involved. The citizen retains control of 655 the resulting passport document and presents it to other entities 656 when it needs to assert a citizenship or identity claim, such as an 657 airport immigration desk. The passport is considered sufficient 658 because it vouches for the citizenship and identity claims, and it is 659 issued by a trusted authority. Thus, in this immigration desk 660 analogy, the passport issuing agency is a Verifier, the passport is 661 an Attestation Result, and the immigration desk is a Relying Party. 
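In code form, the passport model sequence and its three failure modes might be sketched as follows. The object and method names are hypothetical and exist only to show the ordering of the two interactions (Attester to Verifier, then Attester to Relying Party).

   # Hypothetical sketch of the Passport Model sequence (Python).

   def passport_model(attester, verifier, relying_party) -> bool:
       evidence = attester.create_evidence()
       try:
           # The Verifier compares the Evidence against its Appraisal
           # Policy.  Third failure mode: the Verifier is unreachable.
           result = verifier.appraise(evidence)
       except ConnectionError:
           return False
       if result is None:
           # First failure mode: the Verifier refuses to issue an
           # Attestation Result (processing error or missing input).
           return False
       # The Attester presents the Attestation Result to the Relying
       # Party.  Second failure mode: the Result does not pass the
       # Relying Party's own Appraisal Policy.
       return relying_party.appraise(result)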
663 5.2. Background-Check Model 665 In this model, an Attester conveys Evidence to a Relying Party, which 666 simply passes it on to a Verifier. The Verifier then compares the 667 Evidence against its Appraisal Policy, and returns an Attestation 668 Result to the Relying Party. The Relying Party then compares the 669 Attestation Result against its own Appraisal Policy. 671 The resource access protocol between the Attester and Relying Party 672 includes Evidence rather than an Attestation Result, but that 673 Evidence is not processed by the Relying Party. Since the Evidence 674 is merely forwarded on to a trusted Verifier, any serialization 675 format can be used for Evidence because the Relying Party does not 676 need a parser for it. The only requirement is that the Evidence can 677 be _encapsulated in_ the format required by the resource access 678 protocol between the Attester and Relying Party. 680 However, as in the Passport Model, an Attestation Result is still 681 consumed by the Relying Party and so the serialization format of the 682 Attestation Result is still important. If the Relying Party is a 683 constrained node whose purpose is to serve a given type of resource 684 using a standard resource access protocol, it already needs the 685 parser(s) required by that existing protocol. Hence, the ability to 686 let the Relying Party obtain an Attestation Result in the same 687 serialization format allows minimizing the code footprint and attack 688 surface area of the Relying Party, especially if the Relying Party is 689 a constrained node. 691 +-------------+ 692 | | Compare Evidence 693 | Verifier | against Appraisal 694 | | Policy 695 +-------------+ 696 ^ | 697 Evidence| |Attestation 698 | | Result 699 | v 700 +------------+ +-------------+ 701 | |-------------->| | Compare Attestation 702 | Attester | Evidence | Relying | Result against 703 | | | Party | Appraisal Policy 704 +------------+ +-------------+ 706 Figure 6: Background-Check Model 708 The background-check model is so named because of its resemblance to 709 how employers and volunteer organizations perform background checks. 710 When a prospective employee provides claims about education or 711 previous experience, the employer will contact the respective 712 institutions or former employers to validate the claim. Volunteer 713 organizations often perform police background checks on volunteers in 714 order to determine the volunteer's trustworthiness. Thus, in this 715 analogy, a prospective volunteer is an Attester, the organization is 716 the Relying Party, and a former employer or government agency that 717 issues a report is a Verifier. 719 5.3. Combinations 721 One variation of the background-check model is where the Relying 722 Party and the Verifier are on the same machine, and so there is no need 723 for a protocol between the two. 725 It is also worth pointing out that the choice of model is generally 726 up to the Relying Party, and the same device may need to create 727 Evidence for different Relying Parties and different use cases (e.g., 728 a network infrastructure device to gain access to the network, and 729 then a server holding confidential data to get access to that data). 730 As such, both models may simultaneously be in use by the same device. 732 Figure 7 shows another example of a combination where Relying Party 1 733 uses the passport model, whereas Relying Party 2 uses an extension of 734 the background-check model.
Specifically, in addition to the basic 735 functionality shown in Figure 6, Relying Party 2 actually provides 736 the Attestation Result back to the Attester, allowing the Attester to 737 use it with other Relying Parties. This is the model that the 738 Trusted Application Manager plans to support in the TEEP architecture 739 [I-D.ietf-teep-architecture]. 741 +-------------+ 742 | | Compare Evidence 743 | Verifier | against Appraisal Policy 744 | | 745 +-------------+ 746 ^ | 747 Evidence| |Attestation 748 | | Result 749 | v 750 +-------------+ 751 | | Compare 752 | Relying | Attestation Result 753 | Party 2 | against Appraisal Policy 754 +-------------+ 755 ^ | 756 Evidence| |Attestation 757 | | Result 758 | v 759 +----------+ +----------+ 760 | |-------------->| | Compare Attestation 761 | Attester | Attestation | Relying | Result against 762 | | Result | Party 1 | Appraisal Policy 763 +----------+ +----------+ 765 Figure 7: Example Combination 767 6. Roles and Entities 769 HENK VERSION 771 An entity in the RATS architecture includes at least one of the roles 772 defined in this document. As a result, the entity can participate as 773 a constituent of the RATS architecture. Additionally, an entity can 774 aggregate more than one role into itself. These collapsed roles 775 combine the duties of multiple roles. In these cases, interactions 776 between these roles do not necessarily use the Internet Protocol. 777 They can use a loopback device or other IP-based communication 778 between separate environments, but they do not have to. Alternative 779 channels to convey conceptual messages include function calls, 780 sockets, GPIO interfaces, local busses, or hypervisor calls. This 781 type of conveyance is typically found in Composite Devices. Most 782 importantly, these conveyance methods are out-of-scope of RATS, but 783 they are presumed to exist in order to convey conceptual messages 784 appropriately between roles. 786 For example, an entity that both connects to a wide-area network and 787 to a system bus is taking on both the Attester and Verifier roles. 788 As a system bus entity, a Verifier consumes Evidence from other 789 devices connected to the system bus that implement Attester roles. 790 As a wide-area network connected entity, it may implement an Attester 791 role. The entity, as a system bus Verifier, may choose to fully 792 isolate its role as a wide-area network Attester. 794 In essence, an entity that combines more than one role also creates 795 and consumes the corresponding conceptual messages as defined in this 796 document. 798 7. Role Hosting and Composition 800 NED VERSION 802 The RATS architecture includes the definition of Roles (e.g., 803 Attester, Verifier, Relying Party, Endorser) and conceptual messages 804 (e.g., Evidence, Attestation Results, Endorsements, Appraisal 805 Policies) that capture canonical attestation behaviors that are 806 common to a broad range of attestation-enabled systems. An entity 807 that combines multiple Roles produces and consumes the associated 808 Role Messages. 810 The RATS architecture is not prescriptive about deployment 811 configuration options of attestation-enabled systems; therefore, the 812 various Roles can be hosted on any participating entity. This 813 implies, for a given entity, that multiple Roles could be co-resident 814 so that the duties of multiple roles could be performed 815 simultaneously. Nevertheless, the semantics of which Role Messages 816 are inputs and outputs to a Role entity remains constant.
As a 817 result, the entity can participate as a constituent of the RATS 818 architecture while flexibly accommodating the needs of various 819 deployment architectures. 821 Interactions between Roles do not necessarily require use of Internet 822 protocols. They could, for example, use inter-process communication, 823 local system buses, shared memory, hypervisors, IP-loopback devices 824 or any communication path between the various environments that may 825 exist on the entity that combines multiple Roles. 827 The movement of Role Messages between locally hosted Roles is 828 referred to as "local conveyance". Most importantly, the definition 829 of local conveyance methods is out-of-scope for the RATS 830 architecture. 832 The following paragraph elaborates on an exemplary usage scenario: 834 In a Composite Device scenario, in addition to local entities that 835 host the lead Attester and other subordinate Attesters, the Composite 836 Device can host the Verifier role locally to appraise Evidence from 837 one or more subordinate Attesters. The local Verifier might convey 838 local Attestation Results to a remote Relying party or the Relying 839 Party role also could become local where an application-specific 840 action is taken locally. For example, a secure boot scenario 841 prevents system software from loading if the firmware fails to 842 satisfy a local trustworthiness appraisal policy. 844 In a multi-network scenario, a network node might bridge a wide-area 845 network, local-area network, and span various system buses. In so 846 doing, the bridge node might need to host multiple Roles depending on 847 the type of behavior each connected domain expects. For example, the 848 node might be an Attester to a wide-area network, a Verifier to the 849 local-area network, and a Relying Party to components attached to a 850 local system bus. 852 8. Trust Model 854 The scope of this document is scenarios for which a Relying Party 855 trusts a Verifier that can appraise the trustworthiness of 856 information about an Attester. Such trust might come by the Relying 857 Party trusting the Verifier (or its public key) directly, or might 858 come by trusting an entity (e.g., a Certificate Authority) that is in 859 the Verifier's certificate chain. The Relying Party might implicitly 860 trust a Verifier (such as in the Verifying Relying Party 861 combination). Or, for a stronger level of security, the Relying 862 Party might require that the Verifier itself provide information 863 about itself that the Relying Party can use to assess the 864 trustworthiness of the Verifier before accepting its Attestation 865 Results. 867 The Endorser and Verifier Owner may need to trust the Verifier before 868 giving the Endorsement and Appraisal Policy to it. Such trust can 869 also be established directly or indirectly, implicitly or explicitly. 870 One explicit way to establish such trust may be the Verifier first 871 acts as an Attester and creates Evidence about itself to be consumed 872 by the Endorser and/or Verifier Owner as the Relying Parties. If it 873 is accepted as trustworthy, then they can provide Endorsements and 874 Appraisal Policies that enable it to act as a Verifier. 876 The Verifier trusts (or more specifically, the Verifier's security 877 policy is written in a way that configures the Verifier to trust) a 878 manufacturer, or the manufacturer's hardware, so as to be able to 879 appraise the trustworthiness of that manufacturer's devices. 
In 880 solutions with weaker security, a Verifier might be configured to 881 implicitly trust firmware or even software (e.g., a hypervisor). 882 That is, it might appraise the trustworthiness of an application 883 component, or operating system component or service, under the 884 assumption that information provided about it by the lower-layer 885 hypervisor or firmware is true. A stronger level of security comes 886 when information can be vouched for by hardware or by ROM code, 887 especially if such hardware is physically resistant to hardware 888 tampering. The component that is implicitly trusted is often 889 referred to as a Root of Trust. 891 In some scenarios, Evidence might contain sensitive information such 892 as Personally Identifiable Information. Thus, an Attester must trust 893 the entities to which it conveys Evidence not to reveal sensitive data 894 to unauthorized parties. The Verifier might share this information 895 with other authorized parties, according to rules that it controls. In 896 the background-check model, this Evidence may also be revealed to 897 Relying Party(s). 899 9. Conceptual Messages 901 9.1. Evidence 903 Evidence is a set of claims about the target environment that reveal 904 operational status, health, configuration, or construction that have 905 security relevance. Evidence is evaluated by a Verifier to establish 906 its relevance, compliance, and timeliness. Claims need to be 907 collected in a manner that is reliable. Evidence needs to be 908 securely associated with the target environment so that the Verifier 909 cannot be tricked into accepting claims originating from a different 910 environment (that may be more trustworthy). Evidence also must be 911 protected from man-in-the-middle attackers who may observe, change, or 912 misdirect Evidence as it travels from Attester to Verifier. The 913 timeliness of Evidence can be captured using claims that pinpoint the 914 time or interval when changes in operational status, health, and so 915 forth occur. 917 9.2. Endorsements 919 An Endorsement is a secure statement that some entity (e.g., a 920 manufacturer) vouches for the integrity of the device's signing 921 capability. For example, if the signing capability is in hardware, 922 then an Endorsement might be a manufacturer certificate that signs a 923 public key whose corresponding private key is only known inside the 924 device's hardware. Thus, when Evidence and such an Endorsement are 925 used together, an appraisal procedure can be conducted based on 926 Appraisal Policies that may not be specific to the device instance, 927 but merely specific to the manufacturer providing the Endorsement. 928 For example, an Appraisal Policy might simply check that devices from 929 a given manufacturer have information matching a set of known-good 930 reference values, or an Appraisal Policy might apply a set of more 931 complex logic on how to appraise the validity of information. 933 However, while an Appraisal Policy that treats all devices from a 934 given manufacturer the same may be appropriate for some use cases, it 935 would be inappropriate to use such an Appraisal Policy as the sole 936 means of authorization for use cases that wish to constrain _which_ 937 compliant devices are considered authorized for some purpose.
For 938 example, an enterprise using remote attestation for Network Endpoint 939 Assessment may not wish to let every healthy laptop from the same 940 manufacturer onto the network, but instead only want to let devices 941 that it legally owns onto the network. Thus, an Endorsement may be 942 helpful information in authenticating information about a device, but 943 is not necessarily sufficient to authorize access to resources which 944 may need device-specific information such as a public key for the 945 device or component or user on the device. 947 9.3. Attestation Results 949 Attestation Results may indicate compliance or non-compliance with a 950 Verifier's Appraisal Policy. A result that indicates non-compliance 951 can be used by an Attester (in the passport model) or a Relying Party 952 (in the background-check model) to indicate that the Attester should 953 not be treated as authorized and may be in need of remediation. In 954 some cases, it may even indicate that the Evidence itself cannot be 955 authenticated as being correct. 957 An Attestation Result that indicates compliance can be used by a 958 Relying Party to make authorization decisions based on the Relying 959 Party's Appraisal Policy. The simplest such policy might be to 960 simply authorize any party supplying a compliant Attestation Result 961 signed by a trusted Verifier. A more complex policy might also 962 entail comparing information provided in the result against known- 963 good reference values, or applying more complex logic on such 964 information. 966 Thus, Attestation Results often need to include detailed information 967 about the Attester, for use by Relying Parties, much like physical 968 passports and drivers licenses include personal information such as 969 name and date of birth. Unlike Evidence, which is often very device- 970 and vendor-specific, Attestation Results can be vendor-neutral if the 971 Verifier has a way to generate vendor-agnostic information based on 972 the appraisal of vendor-specific information in Evidence. This 973 allows a Relying Party's Appraisal Policy to be simpler, potentially 974 based on standard ways of expressing the information, while still 975 allowing interoperability with heterogeneous devices. 977 Finally, whereas Evidence is signed by the device (or indirectly by a 978 manufacturer, if Endorsements are used), Attestation Results are 979 signed by a Verifier, allowing a Relying Party to only need a trust 980 relationship with one entity, rather than a larger set of entities, 981 for purposes of its Appraisal Policy. 983 10. Claims Encoding Formats 985 The following diagram illustrates a relationship to which remote 986 attestation is desired to be added: 988 +-------------+ +------------+ Evaluate 989 | |-------------->| | request 990 | Attester | Access some | Relying | against 991 | | resource | Party | security 992 +-------------+ +------------+ policy 994 Figure 8: Typical Resource Access 996 In this diagram, the protocol between Attester and a Relying Party 997 can be any new or existing protocol (e.g., HTTP(S), COAP(S), 802.1x, 998 OPC UA, etc.), depending on the use case. Such protocols typically 999 already have mechanisms for passing security information for purposes 1000 of authentication and authorization. Common formats include JWTs 1001 [RFC7519], CWTs [RFC8392], and X.509 certificates. 
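As a rough, non-normative illustration of carrying the same information in more than one of these formats, the sketch below serializes one hypothetical set of claims both as a JSON claims set, as would appear in a JWT payload [RFC7519], and as a CBOR map using the corresponding integer claim keys, as would appear in a CWT payload [RFC8392]. It assumes the third-party cbor2 Python package is available and omits the JOSE/COSE signing layers entirely; the "swversion" claim is made up for this example.

   # Hypothetical sketch: one information model, two encodings.
   import json
   import cbor2    # third-party package, assumed to be installed

   claims = {
       "iss": "attester.example",   # issuer
       "iat": 1590000000,           # issued-at time
       "swversion": "1.2.3",        # illustrative custom claim
   }

   # JSON encoding, e.g., the claims set of a JWT [RFC7519].
   jwt_claims_set = json.dumps(claims).encode()

   # CBOR encoding, e.g., the claims set of a CWT [RFC8392], using
   # the registered integer keys for "iss" (1) and "iat" (6); the
   # custom claim keeps a text key here for simplicity.
   cwt_claims_set = cbor2.dumps({
       1: claims["iss"],
       6: claims["iat"],
       "swversion": claims["swversion"],
   })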
1003 To enable remote attestation to be added to existing protocols, 1004 enabling a higher level of assurance against malware, for example, it 1005 is important that information needed for appraising the Attester be 1006 usable with existing protocols that have constraints around what 1007 formats they can transport. For example, OPC UA [OPCUA] (probably 1008 the most common protocol in industrial IoT environments) is defined 1009 to carry X.509 certificates and so security information must be 1010 embedded into an X.509 certificate to be passed in the protocol. 1011 Thus, remote attestation related information could be natively 1012 encoded in X.509 certificate extensions, or could be natively encoded 1013 in some other format (e.g., a CWT) which in turn is then encoded in 1014 an X.509 certificate extension. 1016 Especially for constrained nodes, however, there is a desire to 1017 minimize the amount of parsing code needed in a Relying Party, in 1018 order to minimize both the footprint and the attack surface 1019 area. So while it would be possible to embed a CWT inside a JWT, or 1020 a JWT inside an X.509 extension, etc., there is a desire to encode 1021 the information natively in the format that is natural for the 1022 Relying Party. 1024 This motivates having a common "information model" that describes the 1025 set of remote attestation related information in an encoding-agnostic 1026 way, and allowing multiple encoding formats (CWT, JWT, X.509, etc.) 1027 that encode the same information into the claims format needed by the 1028 Relying Party. 1030 The following diagram illustrates that Evidence and Attestation 1031 Results might each have multiple possible encoding formats, so that 1032 they can be conveyed by various existing protocols. It also 1033 motivates why the Verifier might also be responsible for accepting 1034 Evidence that encodes claims in one format, while issuing Attestation 1035 Results that encode claims in a different format. 1037 Evidence Attestation Results 1038 .--------------. CWT CWT .-------------------. 1039 | Attester-A |------------. .----------->| Relying Party V | 1040 '--------------' v | `-------------------' 1041 .--------------. JWT .------------. JWT .-------------------. 1042 | Attester-B |-------->| Verifier |-------->| Relying Party W | 1043 '--------------' | | `-------------------' 1044 .--------------. X.509 | | X.509 .-------------------. 1045 | Attester-C |-------->| |-------->| Relying Party X | 1046 '--------------' | | `-------------------' 1047 .--------------. TPM | | TPM .-------------------. 1048 | Attester-D |-------->| |-------->| Relying Party Y | 1049 '--------------' '------------' `-------------------' 1050 .--------------. other ^ | other .-------------------. 1051 | Attester-E |------------' '----------->| Relying Party Z | 1052 '--------------' `-------------------' 1054 Figure 9: Multiple Attesters and Relying Parties with Different 1055 Formats 1057 11. Freshness 1059 It is important to prevent replay attacks where an attacker replays 1060 old Evidence or an old Attestation Result that is no longer correct. 1061 To do so, some mechanism of ensuring that the Evidence and 1062 Attestation Result are fresh is required, meaning that there is some degree of 1063 assurance that they still reflect the latest state of the Attester, 1064 and that any Attestation Result was generated using the latest 1065 Appraisal Policy for Evidence.
There is, however, always a race 1066 condition possible in that the state of the Attester, and the 1067 Appraisal Policy for Evidence, might change immediately after the 1068 Evidence or Attestation Result was generated. The goal is merely to 1069 narrow the time window to something the Verifier (for Evidence) or 1070 Relying Party (for an Attestation Result) is willing to accept. 1072 There are two common approaches to providing some assurance of 1073 freshness. The first approach is that a nonce is generated by a 1074 remote entity (e.g., the Verifier for Evidence, or the Relying Party 1075 for an Attestation Result), and the nonce is then signed and included 1076 along with the claims in the Evidence or Attestation Result, so that 1077 the remote entity knows that the claims were signed after the nonce 1078 was generated. 1080 A second approach is to rely on synchronized clocks, and include a 1081 signed timestamp (e.g., using [I-D.birkholz-rats-tuda]) along with 1082 the claims in the Evidence or Attestation Result, so that the remote 1083 entity knows that the claims were signed at that time, as long as it 1084 has some assurance that the timestamp is correct. This typically 1085 requires additional claims about the signer's time synchronization 1086 mechanism in order to provide such assurance. 1088 In either approach, it is important to note that the actual values in 1089 claims might have been generated long before the claims are signed. 1090 If so, it is the signer's responsibility to ensure that the values 1091 are still correct when they are signed. For example, values might 1092 have been generated at boot, and then used in claims as long as the 1093 signer can guarantee that they cannot have changed since boot. 1095 A more detailed discussion with examples appears in Section 17. 1097 12. Privacy Considerations 1099 The conveyance of Evidence and the resulting Attestation Results 1100 reveal a great deal of information about the internal state of a 1101 device. In many cases, the whole point of the Attestation process is 1102 to provide reliable information about the type of the device and the 1103 firmware/software that the device is running. This information might 1104 be particularly interesting to many attackers. For example, knowing 1105 that a device is running a weak version of firmware provides a way to 1106 aim attacks better. 1108 Evidence and Attestation Results data structures are expected to 1109 support integrity protection encoding (e.g., COSE, JOSE, X.509) and 1110 optionally might support confidentiality protection (e.g., COSE, 1111 JOSE). Therefore, if confidentiality protection is omitted or 1112 unavailable, the protocols that convey Evidence or Attestation 1113 Results are responsible for detailing what kinds of information are 1114 disclosed, and to whom they are exposed. 1116 Furthermore, because Evidence might contain sensitive information, 1117 Attesters are responsible for only sending such Evidence to trusted 1118 Verifiers. Some Attesters might want a stronger level of assurance 1119 of the trustworthiness of a Verifier before sending Evidence to it. 1121 In such cases, an Attester can first act as a Relying Party and ask 1122 for the Verifier's own Attestation Result, and appraise it just as 1123 a Relying Party would appraise an Attestation Result for any other 1124 purpose. 1126 13.
Security Considerations 1128 Any solution that conveys information used for security purposes, 1129 whether such information is in the form of Evidence, Attestation 1130 Results, Endorsements, or Appraisal Policy, needs to support end-to- 1131 end integrity protection and replay attack prevention, and often also 1132 needs to support additional security protections. For example, 1133 additional means of authentication, confidentiality, integrity, 1134 replay, denial of service and privacy protection are needed in many 1135 use cases. Section 11 discusses ways in which freshness can be used 1136 in this architecture to protect against replay attacks. 1138 To assess the security provided by a particular Appraisal Policy, it 1139 is important to understand the strength of the Root of Trust, e.g., 1140 whether it is mutable software, or firmware that is read-only after 1141 boot, or immutable hardware/ROM. 1143 It is also important that the Appraisal Policy was itself obtained 1144 securely. As such, if Appraisal Policies for a Relying Party or for 1145 a Verifier can be configured via a network protocol, the ability to 1146 create Evidence about the integrity of the entity providing the 1147 Appraisal Policy needs to be considered. 1149 The security of conveyed information may be applied at different 1150 layers, whether by a conveyance protocol, or an information encoding 1151 format. This architecture expects attestation messages (i.e., 1152 Evidence, Attestation Results, Endorsements and Policies) are end-to- 1153 end protected based on the role interaction context. For example, if 1154 an Attester produces Evidence that is relayed through some other 1155 entity that doesn't implement the Attester or the intended Verifier 1156 roles, then the relaying entity should not expect to have access to 1157 the Evidence. 1159 14. IANA Considerations 1161 This document does not require any actions by IANA. 1163 15. Acknowledgments 1165 Special thanks go to Joerg Borchert, Nancy Cam-Winget, Jessica 1166 Fitzgerald-McKay, Thomas Fossati, Diego Lopez, Laurence Lundblade, 1167 Wei Pan, Paul Rowe, Hannes Tschofenig, Frank Xia, and David Wooten. 1169 16. Contributors 1171 Thomas Hardjono created older versions of the terminology section in 1172 collaboration with Ned Smith. Eric Voit provided the conceptual 1173 separation between Attestation Provision Flows and Attestation 1174 Evidence Flows. Monty Wisemen created the content structure of the 1175 first three architecture drafts. Carsten Bormann provided many of 1176 the motivational building blocks with respect to the Internet Threat 1177 Model. 1179 17. Appendix A: Time Considerations 1181 The table below defines a number of relevant events, with an ID that 1182 is used in subsequent diagrams. The times of said events might be 1183 defined in terms of an absolute clock time such as Coordinated 1184 Universal Time, or might be defined relative to some other timestamp 1185 or timeticks counter. 
1187 +----+------------+-----------------------------------------------+
1188 | ID | Event      | Explanation of event                          |
1189 +====+============+===============================================+
1190 | VG | Value      | A value to appear in a claim was created      |
1191 |    | generation |                                               |
1192 +----+------------+-----------------------------------------------+
1193 | NS | Nonce sent | A random number not predictable to an         |
1194 |    |            | Attester is sent                              |
1195 +----+------------+-----------------------------------------------+
1196 | NR | Nonce      | The nonce is relayed to an Attester by        |
1197 |    | relayed    | another entity                                |
1198 +----+------------+-----------------------------------------------+
1199 | EG | Evidence   | An Attester collects claims and generates     |
1200 |    | generation | Evidence                                      |
1201 +----+------------+-----------------------------------------------+
1202 | ER | Evidence   | A Relying Party relays Evidence to a Verifier |
1203 |    | relayed    |                                               |
1204 +----+------------+-----------------------------------------------+
1205 | RG | Result     | A Verifier appraises Evidence and generates   |
1206 |    | generation | an Attestation Result                         |
1207 +----+------------+-----------------------------------------------+
1208 | RR | Result     | An Attestation Result is relayed to a Relying |
1209 |    | relayed    | Party                                         |
1210 +----+------------+-----------------------------------------------+
1211 | RA | Result     | The Relying Party appraises Attestation       |
1212 |    | appraised  | Results                                       |
1213 +----+------------+-----------------------------------------------+
1214 | OP | Operation  | The Relying Party performs some operation     |
1215 |    | performed  | requested by the Attester. For example,       |
1216 |    |            | acting upon some message just received across |
1217 |    |            | a session created earlier at time(RA).        |
1218 +----+------------+-----------------------------------------------+
1219 | RX | Result     | An Attestation Result should no longer be     |
1220 |    | expiry     | accepted, according to the Verifier that      |
1221 |    |            | generated it                                  |
1222 +----+------------+-----------------------------------------------+

1224 Table 1

1226 We now walk through a number of hypothetical examples of how a 1227 solution might be built. This list is not intended to be complete, 1228 but is just representative enough to highlight various timing 1229 considerations.

1231 17.1. Example 1: Timestamp-based Passport Model Example

1233 The following example illustrates a hypothetical Passport Model 1234 solution that uses timestamps and requires roughly synchronized 1235 clocks between the Attester, Verifier, and Relying Party, which 1236 in turn depends on using a secure clock synchronization mechanism.

1238    .----------.                      .----------.   .---------------.
1239    | Attester |                      | Verifier |   | Relying Party |
1240    '----------'                      '----------'   '---------------'
1241      time(VG)                             |                 |
1242         |                                |                 |
1243         ~                                ~                 ~
1244         |                                |                 |
1245      time(EG)                             |                 |
1246         |------Evidence{time(EG)}-------->|                 |
1247         |                             time(RG)              |
1248         |<-----Attestation Result---------|                 |
1249         |       {time(RG),time(RX)}      |                 |
1250         ~                                                   ~
1251         |                                                   |
1252         |------Attestation Result{time(RG),time(RX)}-->time(RA)
1253         |                                                   |
1254         ~                                                   ~
1255         |                                                   |
1256         |                                               time(OP)
1257         |                                                   |

1259 The Verifier can check whether the Evidence is fresh when appraising 1260 it at time(RG) by checking "time(RG) - time(EG) < Threshold", where 1261 the Verifier's threshold is large enough to account for the maximum 1262 permitted clock skew between the Verifier and the Attester.
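The following non-normative sketch (in Python) shows one way a Verifier might implement this check. The names and the example threshold value are illustrative assumptions only; a real deployment would choose a threshold that reflects its permitted clock skew and acceptable Evidence age.

   # Illustrative sketch of the timestamp-based freshness check in
   # Example 1.  time_rg and time_eg are timestamps in seconds (e.g.,
   # POSIX time) from the Verifier's clock and the Evidence.
   MAX_SKEW_AND_AGE_SECONDS = 60.0  # deployment-specific choice

   def evidence_is_fresh(time_rg, time_eg,
                         threshold=MAX_SKEW_AND_AGE_SECONDS):
       # Accept the Evidence only if it was generated recently enough,
       # allowing for clock skew between Attester and Verifier.
       return (time_rg - time_eg) < threshold
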
1264 If time(VG) is also included in the Evidence along with the claim 1265 value generated at that time, and the Verifier decides that it can 1266 trust the time(VG) value, the Verifier can also determine whether the 1267 claim value is recent by checking "time(RG) - time(VG) < Threshold", 1268 again where the threshold is large enough to account for the maximum 1269 permitted clock skew between the Verifier and the Attester.

1271 The Relying Party can check whether the Attestation Result is fresh 1272 when appraising it at time(RA) by checking "time(RA) - time(RG) < 1273 Threshold", where the Relying Party's threshold is large enough to 1274 account for the maximum permitted clock skew between the Relying 1275 Party and the Verifier. The result might then be used for some time 1276 (e.g., throughout the lifetime of a connection established at 1277 time(RA)). The Relying Party must be careful, however, to not allow 1278 continued use beyond the period for which it deems the Attestation 1279 Result to remain fresh enough. Thus, it might allow use (at 1280 time(OP)) as long as "time(OP) - time(RG) < Threshold". However, if 1281 the Attestation Result contains an expiry time time(RX), then it could 1282 explicitly check "time(OP) < time(RX)".

1284 17.2. Example 2: Nonce-based Passport Model Example

1286 The following example illustrates a hypothetical Passport Model 1287 solution that uses nonces and thus does not require that any clocks 1288 are synchronized.

1290    .----------.                      .----------.   .---------------.
1291    | Attester |                      | Verifier |   | Relying Party |
1292    '----------'                      '----------'   '---------------'
1293      time(VG)                             |                 |
1294         |                                |                 |
1295         ~                                ~                 ~
1296         |                                |                 |
1297         |<---Nonce1-------------------time(NS)              |
1298      time(EG)                             |                 |
1299         |----Evidence-------------------->|                 |
1300         |   {Nonce1, time(EG)-time(VG)}   |                 |
1301         |                             time(RG)              |
1302         |<---Attestation Result-----------|                 |
1303         |     {time(RX)-time(RG)}         |                 |
1304         ~                                                   ~
1305         |                                                   |
1306         |<---Nonce2-------------------------------------time(NS')
1307      time(RR)
1308         |----Attestation Result{time(RX)-time(RG)}---->time(RA)
1309         |      Nonce2, time(RR)-time(EG)                    |
1310         ~                                                   ~
1311         |                                                   |
1312         |                                               time(OP)

1314 In this example solution, the Verifier can check whether the Evidence 1315 is fresh at time(RG) by verifying that "time(RG) - time(NS) < 1316 Threshold".

1318 The Verifier cannot, however, simply rely on a Nonce to determine 1319 whether the value of a claim is recent, since the claim value might 1320 have been generated long before the nonce was sent by the Verifier. 1321 However, if the Verifier decides that the Attester can be trusted to 1322 correctly provide the delta time(EG)-time(VG), then it can determine 1323 recency by checking "time(RG)-time(NS) + time(EG)-time(VG) < 1324 Threshold".

1326 Similarly, if, based on an Attestation Result from a Verifier it 1327 trusts, the Relying Party decides that the Attester can be trusted to 1328 correctly provide time deltas, then it can determine whether the 1329 Attestation Result is fresh by checking "time(OP) - time(NS') + 1330 time(RR)-time(EG) < Threshold". Although the Nonce2 and 1331 time(RR)-time(EG) values cannot be inside the Attestation Result, they might 1332 be signed by the Attester such that the Attestation Result vouches 1333 for the Attester's signing capability.

1335 The Relying Party must still be careful, however, to not allow 1336 continued use beyond the period for which it deems the Attestation 1337 Result to remain valid. Thus, if the Attestation Result conveys a 1338 validity lifetime in terms of time(RX)-time(RG), then the Relying 1339 Party can check "time(OP) - time(NS') < time(RX)-time(RG)".
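As a non-normative companion to the formulas in this example, the following sketch (in Python) shows how the Verifier and Relying Party checks might be coded. All names are illustrative assumptions; the delta values are reported by the Attester and are only meaningful if the Attester is trusted to report them correctly.

   # Illustrative sketch of the nonce-based checks in Example 2.
   # All times and deltas are in seconds.

   def verifier_accepts(time_rg, time_ns, delta_eg_vg, threshold):
       # Freshness of the Evidence relative to the nonce.
       evidence_fresh = (time_rg - time_ns) < threshold
       # Recency of the claim values, using the Attester-supplied delta.
       claims_recent = (time_rg - time_ns) + delta_eg_vg < threshold
       return evidence_fresh and claims_recent

   def relying_party_accepts(time_op, time_ns2, delta_rr_eg,
                             threshold, delta_rx_rg):
       # Freshness of the Attestation Result relative to Nonce2.
       result_fresh = (time_op - time_ns2) + delta_rr_eg < threshold
       # Use must not outlive the validity window time(RX)-time(RG).
       within_validity = (time_op - time_ns2) < delta_rx_rg
       return result_fresh and within_validity
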
1341 17.3. Example 3: Timestamp-based Background-Check Model Example

1343 The following example illustrates a hypothetical Background-Check 1344 Model solution that uses timestamps and requires roughly synchronized 1345 clocks between the Attester, Verifier, and Relying Party.

1347    .----------.      .---------------.             .----------.
1348    | Attester |      | Relying Party |             | Verifier |
1349    '----------'      '---------------'             '----------'
1350      time(VG)                |                           |
1351         |                    |                           |
1352         ~                    ~                           ~
1353         |                    |                           |
1354      time(EG)                |                           |
1355         |----Evidence------->|                           |
1356         |    {time(EG)}   time(ER)--Evidence{time(EG)}-->|
1357         |                    |                       time(RG)
1358         |                 time(RA)<-Attestation Result---|
1359         |                    |       {time(RX)}          |
1360         ~                    ~                           ~
1361         |                    |                           |
1362         |                 time(OP)                        |

1364 The time considerations in this example are equivalent to those 1365 discussed under Example 1 above.

1367 17.4. Example 4: Nonce-based Background-Check Model Example

1369 The following example illustrates a hypothetical Background-Check 1370 Model solution that uses nonces and thus does not require that any 1371 clocks are synchronized. In this example solution, a nonce is 1372 generated by a Verifier at the request of a Relying Party, when the 1373 Relying Party needs to send one to an Attester.

1375    .----------.         .---------------.          .----------.
1376    | Attester |         | Relying Party |          | Verifier |
1377    '----------'         '---------------'          '----------'
1378      time(VG)                   |                        |
1379         |                       |                        |
1380         ~                       ~                        ~
1381         |                       |                        |
1382         |                       |<-----Nonce---------time(NS)
1383         |<---Nonce----------time(NR)                     |
1384      time(EG)                   |                        |
1385         |----Evidence{Nonce}--->|                        |
1386         |                   time(ER)--Evidence{Nonce}--->|
1387         |                       |                    time(RG)
1388         |                   time(RA)<-Attestation Result-|
1389         |                       |    {time(RX)-time(RG)} |
1390         ~                       ~                        ~
1391         |                       |                        |
1392         |                   time(OP)                     |

1394 The Verifier can check whether the Evidence is fresh, and whether a 1395 claim value is recent, the same as in Example 2 above.

1397 However, unlike in Example 2, the Relying Party can use the Nonce to 1398 determine whether the Attestation Result is fresh by verifying that 1399 "time(OP) - time(NR) < Threshold".

1401 The Relying Party must still be careful, however, to not allow 1402 continued use beyond the period for which it deems the Attestation 1403 Result to remain valid. Thus, if the Attestation Result conveys a 1404 validity lifetime in terms of time(RX)-time(RG), then the Relying 1405 Party can check "time(OP) - time(ER) < time(RX)-time(RG)".

1407 18. References

1409 18.1. Normative References

1411 [RFC7519] Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token 1412 (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015, 1413 <https://www.rfc-editor.org/info/rfc7519>.

1415 [RFC8392] Jones, M., Wahlstroem, E., Erdtman, S., and H. Tschofenig, 1416 "CBOR Web Token (CWT)", RFC 8392, DOI 10.17487/RFC8392, 1417 May 2018, <https://www.rfc-editor.org/info/rfc8392>.

1419 18.2. Informative References

1421 [RFC4949] Shirey, R., "Internet Security Glossary, Version 2", 1422 FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007, 1423 <https://www.rfc-editor.org/info/rfc4949>.

1425 [OPCUA] OPC Foundation, "OPC Unified Architecture Specification, 1426 Part 2: Security Model, Release 1.03", OPC 10000-2, 1427 25 November 2015.

1431 [I-D.birkholz-rats-tuda] 1432 Fuchs, A., Birkholz, H., McDonald, I., and C. Bormann, 1433 "Time-Based Uni-Directional Attestation", Work in 1434 Progress, Internet-Draft, draft-birkholz-rats-tuda-02, 1435 9 March 2020, <https://datatracker.ietf.org/doc/html/draft-birkholz-rats-tuda-02>.

1438 [I-D.ietf-teep-architecture] 1439 Pei, M., Tschofenig, H., Thaler, D., and D.
Wheeler, 1440 "Trusted Execution Environment Provisioning (TEEP) 1441 Architecture", Work in Progress, Internet-Draft, 1442 draft-ietf-teep-architecture-08, 4 April 2020, 1443 <https://datatracker.ietf.org/doc/html/draft-ietf-teep-architecture-08>.

1446 Authors' Addresses

1448 Henk Birkholz
1449 Fraunhofer SIT
1450 Rheinstrasse 75
1451 64295 Darmstadt
1452 Germany

1454 Email: henk.birkholz@sit.fraunhofer.de

1456 Dave Thaler
1457 Microsoft
1458 United States of America

1460 Email: dthaler@microsoft.com

1462 Michael Richardson
1463 Sandelman Software Works
1464 Canada

1466 Email: mcr+ietf@sandelman.ca

1467 Ned Smith
1468 Intel Corporation
1469 United States of America

1471 Email: ned.smith@intel.com

1473 Wei Pan
1474 Huawei Technologies

1476 Email: william.panwei@huawei.com