2 RATS Working Group H. Birkholz 3 Internet-Draft Fraunhofer SIT 4 Intended status: Informational D. Thaler 5 Expires: 5 March 2021 Microsoft 6 M. Richardson 7 Sandelman Software Works 8 N. Smith 9 Intel 10 W.
Pan 11 Huawei Technologies 12 1 September 2020 14 Remote Attestation Procedures Architecture 15 draft-ietf-rats-architecture-06 17 Abstract 19 In network protocol exchanges, it is often the case that one entity 20 (a Relying Party) requires evidence about a remote peer to assess the 21 peer's trustworthiness, and a way to appraise such evidence. The 22 evidence is typically a set of claims about its software and hardware 23 platform. This document describes an architecture for such remote 24 attestation procedures (RATS). 26 Note to Readers 28 Discussion of this document takes place on the RATS Working Group 29 mailing list (rats@ietf.org), which is archived at 30 https://mailarchive.ietf.org/arch/browse/rats/ 31 (https://mailarchive.ietf.org/arch/browse/rats/). 33 Source for this draft and an issue tracker can be found at 34 https://github.com/ietf-rats-wg/architecture (https://github.com/ 35 ietf-rats-wg/architecture). 37 Status of This Memo 39 This Internet-Draft is submitted in full conformance with the 40 provisions of BCP 78 and BCP 79. 42 Internet-Drafts are working documents of the Internet Engineering 43 Task Force (IETF). Note that other groups may also distribute 44 working documents as Internet-Drafts. The list of current Internet- 45 Drafts is at https://datatracker.ietf.org/drafts/current/. 47 Internet-Drafts are draft documents valid for a maximum of six months 48 and may be updated, replaced, or obsoleted by other documents at any 49 time. It is inappropriate to use Internet-Drafts as reference 50 material or to cite them other than as "work in progress." 52 This Internet-Draft will expire on 5 March 2021. 54 Copyright Notice 56 Copyright (c) 2020 IETF Trust and the persons identified as the 57 document authors. All rights reserved. 59 This document is subject to BCP 78 and the IETF Trust's Legal 60 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 61 license-info) in effect on the date of publication of this document. 
62 Please review these documents carefully, as they describe your rights 63 and restrictions with respect to this document. Code Components 64 extracted from this document must include Simplified BSD License text 65 as described in Section 4.e of the Trust Legal Provisions and are 66 provided without warranty as described in the Simplified BSD License. 68 Table of Contents 70 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 71 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 72 3. Reference Use Cases . . . . . . . . . . . . . . . . . . . . . 5 73 3.1. Network Endpoint Assessment . . . . . . . . . . . . . . . 5 74 3.2. Confidential Machine Learning (ML) Model Protection . . . 6 75 3.3. Confidential Data Retrieval . . . . . . . . . . . . . . . 6 76 3.4. Critical Infrastructure Control . . . . . . . . . . . . . 7 77 3.5. Trusted Execution Environment (TEE) Provisioning . . . . 7 78 3.6. Hardware Watchdog . . . . . . . . . . . . . . . . . . . . 7 79 3.7. FIDO Biometric Authentication . . . . . . . . . . . . . . 8 80 4. Architectural Overview . . . . . . . . . . . . . . . . . . . 8 81 4.1. Appraisal Policies . . . . . . . . . . . . . . . . . . . 10 82 4.2. Two Types of Environments of an Attester . . . . . . . . 10 83 4.3. Layered Attestation Environments . . . . . . . . . . . . 11 84 4.4. Composite Device . . . . . . . . . . . . . . . . . . . . 13 85 5. Topological Models . . . . . . . . . . . . . . . . . . . . . 16 86 5.1. Passport Model . . . . . . . . . . . . . . . . . . . . . 16 87 5.2. Background-Check Model . . . . . . . . . . . . . . . . . 17 88 5.3. Combinations . . . . . . . . . . . . . . . . . . . . . . 18 89 6. Roles and Entities . . . . . . . . . . . . . . . . . . . . . 19 90 7. Trust Model . . . . . . . . . . . . . . . . . . . . . . . . . 20 91 7.1. Relying Party . . . . . . . . . . . . . . . . . . . . . . 20 92 7.2. Attester . . . . . . . . . . . . . . . . . . . . . . . . 21 93 7.3. Relying Party Owner . . . . . . . . . 
. . . . . . . . . . 21 94 7.4. Verifier . . . . . . . . . . . . . . . . . . . . . . . . 21 95 7.5. Endorser and Verifier Owner . . . . . . . . . . . . . . . 22 96 8. Conceptual Messages . . . . . . . . . . . . . . . . . . . . . 22 97 8.1. Evidence . . . . . . . . . . . . . . . . . . . . . . . . 22 98 8.2. Endorsements . . . . . . . . . . . . . . . . . . . . . . 22 99 8.3. Attestation Results . . . . . . . . . . . . . . . . . . . 23 100 9. Claims Encoding Formats . . . . . . . . . . . . . . . . . . . 24 101 10. Freshness . . . . . . . . . . . . . . . . . . . . . . . . . . 26 102 11. Privacy Considerations . . . . . . . . . . . . . . . . . . . 28 103 12. Security Considerations . . . . . . . . . . . . . . . . . . . 28 104 12.1. Attester and Attestation Key Protection . . . . . . . . 29 105 12.1.1. On-Device Attester and Key Protection . . . . . . . 29 106 12.1.2. Attestation Key Provisioning Processes . . . . . . . 30 107 12.2. Integrity Protection . . . . . . . . . . . . . . . . . . 30 108 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 31 109 14. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 31 110 15. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 32 111 16. Appendix A: Time Considerations . . . . . . . . . . . . . . . 32 112 16.1. Example 1: Timestamp-based Passport Model Example . . . 33 113 16.2. Example 2: Nonce-based Passport Model Example . . . . . 35 114 16.3. Example 3: Handle-based Passport Model Example . . . . . 36 115 16.4. Example 4: Timestamp-based Background-Check Model 116 Example . . . . . . . . . . . . . . . . . . . . . . . . 38 117 16.5. Example 5: Nonce-based Background-Check Model Example . 38 118 17. References . . . . . . . . . . . . . . . . . . . . . . . . . 39 119 17.1. Normative References . . . . . . . . . . . . . . . . . . 39 120 17.2. Informative References . . . . . . . . . . . . . . . . . 39 121 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 40 123 1. 
Introduction 125 In Remote Attestation Procedures (RATS), one peer (the "Attester") 126 produces believable information about itself - Evidence - to enable a 127 remote peer (the "Relying Party") to decide whether to consider that 128 Attester a trustworthy peer or not. RATS are facilitated by an 129 additional vital party, the Verifier. 131 The Verifier appraises Evidence via Appraisal Policies and creates 132 the Attestation Results to support Relying Parties in their decision 133 process. This document defines a flexible architecture consisting 134 of attestation roles and their interactions via conceptual messages. 135 Additionally, this document defines a universal set of terms that can 136 be mapped to various existing and emerging Remote Attestation 137 Procedures. Common topological models and the data flows associated 138 with them, such as the "Passport Model" and the "Background-Check 139 Model", are illustrated. The purpose is to define useful terminology 140 for attestation and enable readers to map their solution architecture 141 to the canonical attestation architecture provided here. Having a 142 common terminology that provides well-understood meanings for common 143 themes such as roles, device composition, topological models, and 144 appraisal is vital for semantic interoperability across solutions and 145 platforms involving multiple vendors and providers. 147 Amongst other things, this document is about trust and 148 trustworthiness. Trust is a choice one makes about another system. 149 Trustworthiness is a quality of the other system that can be used 150 in making one's decision to trust it or not. This is a subtle 151 difference, and being familiar with it is crucial for 152 using this document. Additionally, the concepts of freshness and 153 trust relationships with respect to RATS are elaborated on to enable 154 implementers to choose appropriate solutions for composing 155 their Remote Attestation Procedures. 157 2.
Terminology 159 This document uses the following terms. 161 Appraisal Policy for Evidence: A set of rules that informs how a 162 Verifier evaluates the validity of information about an Attester. 163 Compare /security policy/ in [RFC4949] 165 Appraisal Policy for Attestation Results: A set of rules that direct 166 how a Relying Party uses the Attestation Results regarding an 167 Attester generated by the Verifiers. Compare /security policy/ in 168 [RFC4949] 170 Attestation Result: The output generated by a Verifier, typically 171 including information about an Attester, where the Verifier 172 vouches for the validity of the results 174 Attester: A role performed by an entity (typically a device) whose 175 Evidence must be appraised in order to infer the extent to which 176 the Attester is considered trustworthy, such as when deciding 177 whether it is authorized to perform some operation 179 Claim: A piece of asserted information, often in the form of a name/ 180 value pair. (Compare /claim/ in [RFC7519]) 182 Endorsement: A secure statement that some entity (typically a 183 manufacturer) vouches for the integrity of an Attester's signing 184 capability 186 Endorser: An entity (typically a manufacturer) whose Endorsements 187 help Verifiers appraise the authenticity of Evidence 189 Evidence: A set of information about an Attester that is to be 190 appraised by a Verifier. Evidence may include configuration data, 191 measurements, telemetry, or inferences. 193 Relying Party: A role performed by an entity that depends on the 194 validity of information about an Attester, for purposes of 195 reliably applying application specific actions. 
Compare /relying 196 party/ in [RFC4949] 198 Relying Party Owner: An entity (typically an administrator) that is 199 authorized to configure Appraisal Policy for Attestation Results 200 in a Relying Party 202 Verifier: A role performed by an entity that appraises the validity 203 of Evidence about an Attester and produces Attestation Results to 204 be used by a Relying Party 206 Verifier Owner: An entity (typically an administrator) that is 207 authorized to configure Appraisal Policy for Evidence in a 208 Verifier 210 3. Reference Use Cases 212 This section covers a number of representative use cases for remote 213 attestation, independent of specific solutions. The purpose is to 214 provide motivation for various aspects of the architecture presented 215 in this draft. Many other use cases exist, and this document does 216 not intend to have a complete list, only to have a set of use cases 217 that collectively cover all the functionality required in the 218 architecture. 220 Each use case includes a description followed by a summary of the 221 Attester and Relying Party roles. 223 3.1. Network Endpoint Assessment 225 Network operators want a trustworthy report that includes identity 226 and version information of the hardware and software on the 227 machines attached to their network, for purposes such as inventory, 228 audit, anomaly detection, record maintenance, and/or trending reports 229 (logging). The network operator may also want a policy by which full 230 access is only granted to devices that meet some definition of 231 hygiene, and so wants to get claims about such information and verify 232 their validity. Remote attestation is desired to prevent vulnerable 233 or compromised devices from getting access to the network and 234 potentially harming others. 236 Typically, solutions start with a specific component (called a "Root 237 of Trust") that provides device identity and protected storage for 238 measurements.
The system components perform a series of measurements 239 that may be signed by the Root of Trust, considered as Evidence about 240 the hardware, firmware, BIOS, software, etc. that is running. 242 Attester: A device desiring access to a network 244 Relying Party: A network infrastructure device such as a router, 245 switch, or access point 247 3.2. Confidential Machine Learning (ML) Model Protection 249 A device manufacturer wants to protect its intellectual property. 250 This is primarily the ML model it developed and runs in the devices 251 purchased by its customers. The goals for the protection include 252 preventing attackers, potentially the customers themselves, from 253 seeing the details of the model. 255 This typically works by having some protected environment in the 256 device go through a remote attestation with some manufacturer service 257 that can assess its trustworthiness. If remote attestation succeeds, 258 then the manufacturer service releases either the model, or a key to 259 decrypt a model the Attester already has in encrypted form, to the 260 requester. 262 Attester: A device desiring to run an ML model to do inferencing 264 Relying Party: A server or service holding ML models it desires to 265 protect 267 3.3. Confidential Data Retrieval 269 This is a generalization of the ML model use case above, where the 270 data can be any highly confidential data, such as health data about 271 customers, payroll data about employees, future business plans, etc. 272 The system requesting the data provides attestations, and its state 273 is assessed against a set of policies before the data is 274 released. Attestation is desired to prevent leaking data to 275 compromised devices. 277 Attester: An entity desiring to retrieve confidential data 279 Relying Party: An entity that holds confidential data for retrieval 280 by other entities 282 3.4.
Critical Infrastructure Control 284 In this use case, potentially dangerous physical equipment (e.g., 285 power grid, traffic control, hazardous chemical processing, etc.) is 286 connected to a network. The organization managing such 287 infrastructure needs to ensure that only authorized code and users 288 can control such processes, and they are protected from malware or 289 other adversaries. When a protocol operation can affect some 290 critical system, the device attached to the critical equipment thus 291 wants some assurance that the requester has not been compromised. As 292 such, remote attestation can be used to only accept commands from 293 requesters that are within policy. 295 Attester: A device or application wishing to control physical 296 equipment 298 Relying Party: A device or application connected to potentially 299 dangerous physical equipment (hazardous chemical processing, 300 traffic control, power grid, etc.) 302 3.5. Trusted Execution Environment (TEE) Provisioning 304 A "Trusted Application Manager (TAM)" server is responsible for 305 managing the applications running in the TEE of a client device. To 306 do this, the TAM wants to assess the state of a TEE, or of 307 applications in the TEE, of a client device. The TEE conducts a 308 remote attestation procedure with the TAM, which can then decide 309 whether the TEE is already in compliance with the TAM's latest 310 policy, or if the TAM needs to uninstall, update, or install approved 311 applications in the TEE to bring it back into compliance with the 312 TAM's policy. 314 Attester: A device with a trusted execution environment capable of 315 running trusted applications that can be updated 317 Relying Party: A Trusted Application Manager 319 3.6. Hardware Watchdog 321 One significant problem is malware that holds a device hostage and 322 does not allow it to reboot to prevent updates from being applied. 
323 This allows a fleet of devices 324 to be held hostage for ransom. 326 In this case, the Relying Party is the watchdog timer in the TPM/ 327 secure enclave itself, as described in [TCGarch] section 43.3. The 328 Attestation Results are returned to the device, and provided to the 329 enclave. 331 If the watchdog does not receive regular and fresh Attestation 332 Results as to the system's health, then it forces a reboot. 334 Attester: The device that should be kept from being held hostage 335 for a long period of time 337 Relying Party: The watchdog in the device's TPM/secure enclave, which 338 permits continued operation (i.e., no reboot) for a period of time 339 only upon receiving fresh Attestation Results from a remote Verifier 341 3.7. FIDO Biometric Authentication 343 In the Fast IDentity Online (FIDO) protocol [WebAuthN], [CTAP], the 344 device in the user's hand authenticates the human user, whether by 345 biometrics (such as fingerprints), or by PIN and password. FIDO 346 authentication puts a large amount of trust in the device compared to 347 typical password authentication because it is the device that 348 verifies the biometric, PIN, and password inputs from the user, not 349 the server. For the Relying Party to know that the authentication is 350 trustworthy, the Relying Party needs to know that the Authenticator 351 part of the device is trustworthy. The FIDO protocol employs remote 352 attestation for this. 354 The FIDO protocol supports several remote attestation protocols and a 355 mechanism by which new ones can be registered and added. Remote 356 attestation defined by RATS is thus a candidate for use in the FIDO 357 protocol. 359 Other biometric authentication protocols such as the Chinese IFAA 360 standard and WeChat Pay as well as Google Pay make use of attestation 361 in one form or another. 363 Attester: Every FIDO Authenticator contains an Attester. 365 Relying Party: Any web site, mobile application back end, or service 366 that does biometric authentication.
368 4. Architectural Overview 370 Figure 1 depicts the data that flows between different roles, 371 independent of protocol or use case. 373 ************ ************ **************** 374 * Endorser * * Verifier * * Relying Party* 375 ************ * Owner * * Owner * 376 | ************ **************** 377 | | | 378 Endorsements| | | 379 | |Appraisal | 380 | |Policy | 381 | |for | Appraisal 382 | |Evidence | Policy for 383 | | | Attestation 384 | | | Results 385 v v | 386 .-----------------. | 387 .----->| Verifier |------. | 388 | '-----------------' | | 389 | | | 390 | Attestation| | 391 | Results | | 392 | Evidence | | 393 | | | 394 | v v 395 .----------. .-----------------. 396 | Attester | | Relying Party | 397 '----------' '-----------------' 399 Figure 1: Conceptual Data Flow 401 An Attester creates Evidence that is conveyed to a Verifier. 403 The Verifier uses the Evidence, and any Endorsements from Endorsers, 404 by applying an Appraisal Policy for Evidence to assess the 405 trustworthiness of the Attester, and generates Attestation Results 406 for use by Relying Parties. The Appraisal Policy for Evidence might 407 be obtained from an Endorser along with the Endorsements, and/or 408 might be obtained via some other mechanism such as being configured 409 in the Verifier by the Verifier Owner. 411 The Relying Party uses Attestation Results by applying its own 412 Appraisal Policy to make application-specific decisions such as 413 authorization decisions. The Appraisal Policy for Attestation 414 Results is configured in the Relying Party by the Relying Party 415 Owner, and/or is programmed into the Relying Party. 417 4.1. Appraisal Policies 419 The Verifier, when appraising Evidence, or the Relying Party, when 420 appraising Attestation Results, checks the values of some claims 421 against constraints specified in its Appraisal Policy. 
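As a non-normative illustration, such a check might be sketched as follows. The claim names, the policy encoding, and the constraint kinds shown are hypothetical assumptions made for this sketch, since this architecture deliberately defines no policy or reference-value format.

```python
# Non-normative sketch of claim appraisal.  Claim names, reference
# values, and the policy structure are illustrative only; this
# architecture does not define any policy format.

def appraise(claims, policy):
    """Return True only if every constraint in the policy is satisfied."""
    for name, constraint in policy.items():
        value = claims.get(name)
        if value is None:
            return False                    # a required claim is absent
        kind = constraint["kind"]
        if kind == "equals":                # equality against a reference value
            ok = value == constraint["reference"]
        elif kind == "range":               # bounded by reference values
            ok = constraint["min"] <= value <= constraint["max"]
        elif kind == "one-of":              # membership in a set of reference values
            ok = value in constraint["allowed"]
        else:
            ok = False                      # unknown constraint kinds fail closed
        if not ok:
            return False
    return True

policy = {
    "fw-version": {"kind": "range", "min": 5, "max": 9},
    "debug-enabled": {"kind": "equals", "reference": False},
}
assert appraise({"fw-version": 7, "debug-enabled": False}, policy)
```

In this sketch the reference values are embedded in the policy itself for brevity; they could equally be obtained from a separate source, such as an Endorsement.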
Such 422 constraints might involve a comparison for equality against a 423 reference value, or a check for being in a range bounded by reference 424 values, or membership in a set of reference values, or a check 425 against values in other claims, or any other test. 427 Such reference values might be specified as part of the Appraisal 428 Policy itself, or might be obtained from a separate source, such as 429 an Endorsement, and then used by the Appraisal Policy. 431 The actual data format and semantics of any reference values are 432 specific to claims and implementations. This architecture document 433 does not define any general purpose format for them or general means 434 for comparison. 436 4.2. Two Types of Environments of an Attester 438 An Attester consists of at least one Attesting Environment and at 439 least one Target Environment. In some implementations, the Attesting 440 and Target Environments might be combined. Other implementations 441 might have multiple Attesting and Target Environments, such as in the 442 examples described in more detail in Section 4.3 and Section 4.4. 443 Other examples may exist, and the examples discussed could even be 444 combined into more complex implementations. 446 Claims are collected from Target Environments, as shown in Figure 2. 447 That is, Attesting Environments collect the values and the 448 information to be represented in Claims, by reading system registers 449 and variables, calling into subsystems, or taking measurements on code 450 or memory of the Target Environment, and so on. Attesting 451 Environments then format the claims appropriately, and typically use 452 key material and cryptographic functions, such as signing or cipher 453 algorithms, to create Evidence. There is no limit to or requirement 454 on the places that an Attesting Environment can exist, but they 455 typically are in Trusted Execution Environments (TEE), embedded 456 Secure Elements (eSE), and BIOS firmware.
An execution environment 457 may not, by default, be capable of claims collection for a given 458 Target Environment. Execution environments that are designed to be 459 capable of claims collection are referred to in this document as 460 Attesting Environments. 462 .--------------------------------. 463 | | 464 | Verifier | 465 | | 466 '--------------------------------' 467 ^ 468 | 469 .-------------------------|----------. 470 | | | 471 | .----------------. | | 472 | | Target | | | 473 | | Environment | | | 474 | | | | Evidence | 475 | '----------------' | | 476 | | | | 477 | | | | 478 | Collect | | | 479 | Claims | | | 480 | | | | 481 | v | | 482 | .-------------. | 483 | | Attesting | | 484 | | Environment | | 485 | | | | 486 | '-------------' | 487 | Attester | 488 '------------------------------------' 490 Figure 2: Two Types of Environments 492 4.3. Layered Attestation Environments 494 By definition, the Attester role creates Evidence. An Attester may 495 consist of one or more nested or staged environments, adding 496 complexity to the architectural structure. The unifying component is 497 the Root of Trust and the nested, staged, or chained attestation 498 Evidence produced. The nested or chained structure includes Claims, 499 collected by the Attester to aid in the assurance or believability of 500 the attestation Evidence. 502 Figure 3 depicts an example of a device that includes (A) a BIOS 503 stored in read-only memory in this example, (B) an updatable 504 bootloader, and (C) an operating system kernel. 506 .----------. .----------. 507 | | | | 508 | Endorser |------------------->| Verifier | 509 | | Endorsements | | 510 '----------' for A, B, and C '----------' 511 ^ 512 .------------------------------------. | 513 | | | 514 | .---------------------------. 
| | 515 | | Target | | | Layered 516 | | Environment | | | Evidence 517 | | C | | | for 518 | '---------------------------' | | B and C 519 | Collect | | | 520 | claims | | | 521 | .---------------|-----------. | | 522 | | Target v | | | 523 | | Environment .-----------. | | | 524 | | B | Attesting | | | | 525 | | |Environment|-----------' 526 | | | B | | | 527 | | '-----------' | | 528 | | ^ | | 529 | '---------------------|-----' | 530 | Collect | | Evidence | 531 | claims v | for B | 532 | .-----------. | 533 | | Attesting | | 534 | |Environment| | 535 | | A | | 536 | '-----------' | 537 | | 538 '------------------------------------' 540 Figure 3: Layered Attester 542 Attesting Environment A, the read-only BIOS in this example, has to 543 ensure the integrity of the bootloader (Target Environment B). There 544 are potentially multiple kernels to boot, and the decision is up to 545 the bootloader. Only a bootloader with intact integrity will make an 546 appropriate decision. Therefore, these Claims have to be measured 547 securely. At this stage of the boot-cycle of the device, the Claims 548 collected typically cannot be composed into Evidence. 550 After the boot sequence is started, the most important and defining 551 feature of layered attestation comes into play: the successfully 552 measured Target Environment B now becomes (or 553 contains) an Attesting Environment for the next layer. This 554 procedure in Layered Attestation is sometimes called "staging". It 555 is important that the new Attesting Environment B not be able to 556 alter any Claims about its own Target Environment B. This can be 557 ensured by having those Claims either signed by Attesting Environment 558 A or stored in a tamper-proof manner by Attesting Environment A. 560 Continuing with this example, the bootloader's Attesting Environment 561 B is now in charge of collecting Claims about Target Environment C, 562 which in this example is the kernel to be booted.
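The measure-then-sign staging in this example can be sketched as follows. This is a minimal, non-normative sketch: the key material, the use of a symmetric MAC, the hash choice, and the JSON serialization are all assumptions made for brevity; real Attesting Environments would typically use hardware-protected, endorsed asymmetric keys and a defined Evidence format.

```python
# Non-normative sketch of layered Evidence creation.  Keys, hash, and
# serialization are placeholder assumptions, not defined by this
# architecture.
import hashlib
import hmac
import json

def measure(image: bytes) -> str:
    # Each environment measures the next environment before starting it.
    return hashlib.sha256(image).hexdigest()

def sign(key: bytes, claims: dict) -> dict:
    # A symmetric MAC stands in for the signing done by an Attesting
    # Environment; production systems would use an endorsed asymmetric key.
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims, "sig": hmac.new(key, payload, "sha256").hexdigest()}

# Attesting Environment A (the BIOS) measures Target Environment B
# (the bootloader) and signs the resulting Claims.
key_a = b"bios-protected-key"      # hypothetical key material
claims_b = {"target": "bootloader", "digest": measure(b"bootloader-image")}
evidence_b = sign(key_a, claims_b)

# Once measured, B becomes the Attesting Environment for Target
# Environment C (the kernel).
key_b = b"bootloader-key"          # hypothetical key material
claims_c = {"target": "kernel", "digest": measure(b"kernel-image")}
evidence_c = sign(key_b, claims_c)

# The layered Evidence carries both sets of Claims: B as signed by A,
# and C as signed by B.
layered_evidence = [evidence_b, evidence_c]
```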
The final Evidence 563 thus contains two sets of Claims: one set about the bootloader as 564 measured and signed by the BIOS, plus a set of Claims about the 565 kernel as measured and signed by the bootloader. 567 This example could be extended further by making the kernel become 568 another Attesting Environment for an application as another Target 569 Environment. This would result in a third set of Claims in the 570 Evidence pertaining to that application. 572 The essence of this example is a cascade of staged environments. 573 Each environment has the responsibility of measuring the next 574 environment before the next environment is started. In general, the 575 number of layers may vary by device or implementation, and an 576 Attesting Environment might even have multiple Target Environments 577 that it measures, rather than only one as shown in Figure 3. 579 4.4. Composite Device 581 A Composite Device is an entity composed of multiple sub-entities 582 such that its trustworthiness has to be determined by the appraisal 583 of all these sub-entities. 585 Each sub-entity has at least one Attesting Environment collecting 586 claims from at least one Target Environment; from these claims, the 587 sub-entity generates Evidence about its trustworthiness. Therefore, each sub- 588 entity can be called an Attester. Among all the Attesters, only 589 some may have the ability to communicate with the Verifier 590 while others do not. 592 For example, a carrier-grade router consists of a chassis and 593 multiple slots. The trustworthiness of the router depends on all its 594 slots' trustworthiness. Each slot has an Attesting Environment such 595 as a TEE collecting the claims of its boot process, after which it 596 generates Evidence from the claims. Among these slots, only a main 597 slot can communicate with the Verifier while other slots cannot. However, the 598 other slots can communicate with the main slot over the links between 599 them inside the router.
So the main slot collects the Evidence of the 600 other slots, produces the final Evidence of the whole router, and 601 conveys the final Evidence to the Verifier. Therefore, the router is 602 a Composite Device, each slot is an Attester, and the main slot is 603 the lead Attester. 605 Another example is a multi-chassis router composed of multiple 606 individual carrier-grade routers. The multi-chassis router provides higher 607 throughput by interconnecting multiple routers and can be logically 608 treated as one router for simpler management. A multi-chassis router 609 provides a management point that connects to the Verifier. The other 610 routers are connected to the main router only by network cables, 611 and they are therefore managed and appraised with the main router's 612 help. So, in this case, the multi-chassis router is the Composite 613 Device, each router is an Attester, and the main router is the lead 614 Attester. 616 Figure 4 depicts the conceptual data flow for a Composite Device. 618 .-----------------------------. 619 | Verifier | 620 '-----------------------------' 621 ^ 622 | 623 | Evidence of 624 | Composite Device 625 | 626 .----------------------------------|-------------------------------. 627 | .--------------------------------|-----. .------------. | 628 | | Collect .------------. | | | | 629 | | Claims .--------->| Attesting |<--------| Attester B |-. | 630 | | | |Environment | | '------------. | | 631 | | .----------------. | |<----------| Attester C |-. | 632 | | | Target | | | | '------------' | | 633 | | | Environment(s) | | |<------------| ...
| | 634 | | '----------------' | of | 635 | | | Attesters | 637 | | lead Attester A | (via Internal Links or | 638 | '--------------------------------------' Network Connections) | 639 | | 640 | Composite Device | 641 '------------------------------------------------------------------' 643 Figure 4: Conceptual Data Flow for a Composite Device 645 In the Composite Device, each Attester generates its own Evidence by 646 its Attesting Environment(s) collecting the claims from its Target 647 Environment(s). The lead Attester collects the Evidence of all other 648 Attesters and then generates the Evidence of the whole Composite 649 Device. 651 An entity can take on multiple RATS roles (e.g., Attester, Verifier, 652 Relying Party, etc.) at the same time. The combination of roles can 653 be arbitrary. For example, in this Composite Device scenario, the 654 entity inside the lead Attester can also take on the role of a 655 Verifier, and the outside Verifier can take on the role of 656 a Relying Party. After collecting the Evidence of other Attesters, 657 this inside Verifier uses Endorsements and Appraisal Policies 658 (obtained the same way as any other Verifier) in the verification 659 process to generate Attestation Results. The inside Verifier then 660 conveys the Attestation Results of other Attesters, whether in the 661 same conveyance protocol as the Evidence or not, to the outside 662 Verifier. 664 In this situation, the trust model described in Section 7 also 665 applies to this inside Verifier. 667 5. Topological Models 669 Figure 1 shows a basic model for communication between an Attester, a 670 Verifier, and a Relying Party. The Attester conveys its Evidence to 671 the Verifier for appraisal, and the Relying Party gets the 672 Attestation Result from the Verifier. There are multiple other 673 possible models. This section includes some reference models.
This 674 is not intended to be a restrictive list, and other variations may 675 exist. 677 5.1. Passport Model 679 The passport model is so named because of its resemblance to how 680 nations issue passports to their citizens. The nature of the 681 Evidence that an individual needs to provide to its local authority 682 is specific to the country involved. The citizen retains control of 683 the resulting passport document and presents it to other entities 684 when it needs to assert a citizenship or identity claim, such as an 685 airport immigration desk. The passport is considered sufficient 686 because it vouches for the citizenship and identity claims, and it is 687 issued by a trusted authority. Thus, in this immigration desk 688 analogy, the passport issuing agency is a Verifier, the passport is 689 an Attestation Result, and the immigration desk is a Relying Party. 691 In this model, an Attester conveys Evidence to a Verifier, which 692 compares the Evidence against its Appraisal Policy. The Verifier 693 then gives back an Attestation Result. If the Attestation Result was 694 a successful one, the Attester can then present the Attestation 695 Result to a Relying Party, which then compares the Attestation Result 696 against its own Appraisal Policy. 698 There are three ways in which the process may fail. First, the 699 Verifier may refuse to issue the Attestation Result due to some error 700 in processing, or some missing input to the Verifier. The second way 701 in which the process may fail is when the Attestation Result is 702 examined by the Relying Party, and based upon the Appraisal Policy, 703 the result does not pass the policy. The third way is when the 704 Verifier is unreachable. 706 Since the resource access protocol between the Attester and Relying 707 Party includes an Attestation Result, in this model the details of 708 that protocol constrain the serialization format of the Attestation 709 Result. 
The format of the Evidence on the other hand is only 710 constrained by the Attester-Verifier remote attestation protocol. 712 +-------------+ 713 | | Compare Evidence 714 | Verifier | against Appraisal Policy 715 | | 716 +-------------+ 717 ^ | 718 Evidence| |Attestation 719 | | Result 720 | v 721 +----------+ +---------+ 722 | |------------->| |Compare Attestation 723 | Attester | Attestation | Relying | Result against 724 | | Result | Party | Appraisal 725 +----------+ +---------+ Policy 727 Figure 5: Passport Model 729 5.2. Background-Check Model 731 The background-check model is so named because of the resemblance of 732 how employers and volunteer organizations perform background checks. 733 When a prospective employee provides claims about education or 734 previous experience, the employer will contact the respective 735 institutions or former employers to validate the claim. Volunteer 736 organizations often perform police background checks on volunteers in 737 order to determine the volunteer's trustworthiness. Thus, in this 738 analogy, a prospective volunteer is an Attester, the organization is 739 the Relying Party, and a former employer or government agency that 740 issues a report is a Verifier. 742 In this model, an Attester conveys Evidence to a Relying Party, which 743 simply passes it on to a Verifier. The Verifier then compares the 744 Evidence against its Appraisal Policy, and returns an Attestation 745 Result to the Relying Party. The Relying Party then compares the 746 Attestation Result against its own appraisal policy. 748 The resource access protocol between the Attester and Relying Party 749 includes Evidence rather than an Attestation Result, but that 750 Evidence is not processed by the Relying Party. Since the Evidence 751 is merely forwarded on to a trusted Verifier, any serialization 752 format can be used for Evidence because the Relying Party does not 753 need a parser for it. 
The only requirement is that the Evidence can
754 be _encapsulated in_ the format required by the resource access
755 protocol between the Attester and Relying Party.

757 However, as in the passport model, an Attestation Result is still
758 consumed by the Relying Party and so the serialization format of the
759 Attestation Result is still important.  If the Relying Party is a
760 constrained node whose purpose is to serve a given type of resource
761 using a standard resource access protocol, it already needs the
762 parser(s) required by that existing protocol.  Hence, the ability to
763 let the Relying Party obtain an Attestation Result in the same
764 serialization format allows minimizing the code footprint and attack
765 surface area of the Relying Party, especially if the Relying Party is
766 a constrained node.

768                                 +-------------+
769                                 |             | Compare Evidence
770                                 |   Verifier  | against Appraisal
771                                 |             | Policy
772                                 +-------------+
773                                      ^    |
774                              Evidence|    |Attestation
775                                      |    | Result
776                                      |    v
777    +------------+               +-------------+
778    |            |-------------->|             | Compare Attestation
779    |  Attester  |   Evidence    |   Relying   | Result against
780    |            |               |    Party    | Appraisal Policy
781    +------------+               +-------------+

783                  Figure 6: Background-Check Model

785 5.3.  Combinations

787 One variation of the background-check model is where the Relying
788 Party and the Verifier are on the same machine, performing both
789 functions together.  In this case, there is no need for a protocol
790 between the two.

792 It is also worth pointing out that the choice of model is generally
793 up to the Relying Party.  The same device may need to create Evidence
794 for different Relying Parties and/or different use cases.  For
795 instance, it would provide Evidence to a network infrastructure
796 device to gain access to the network, and to a server holding
797 confidential data to gain access to that data.  As such, both models
798 may simultaneously be in use by the same device.
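For implementers, the difference between the two models can be sketched in a few lines of Python.  The message shapes and the trivial dict-based appraisal logic below are illustrative assumptions only; nothing here is defined by this architecture:

```python
# Illustrative contrast of the passport and background-check models.
# All names, message shapes, and the dict-based appraisal logic are
# assumptions for illustration; none are defined by RATS.

def verifier_appraise(evidence, appraisal_policy):
    """Verifier: compare Evidence against its Appraisal Policy for
    Evidence, returning an Attestation Result."""
    compliant = all(evidence.get(k) == v for k, v in appraisal_policy.items())
    return {"compliant": compliant, "signer": "verifier"}

def relying_party_accepts(attestation_result, rp_policy):
    """Relying Party: compare an Attestation Result against its own
    Appraisal Policy for Attestation Results."""
    return (attestation_result["signer"] in rp_policy["trusted_verifiers"]
            and attestation_result["compliant"])

APPRAISAL_POLICY = {"firmware": "v2"}
RP_POLICY = {"trusted_verifiers": {"verifier"}}
evidence = {"firmware": "v2"}

# Passport model: the Attester sends Evidence to the Verifier itself,
# then presents the returned Attestation Result to the Relying Party.
result = verifier_appraise(evidence, APPRAISAL_POLICY)
granted_passport = relying_party_accepts(result, RP_POLICY)

# Background-check model: the Relying Party forwards the Evidence,
# unparsed, to the Verifier and appraises the returned result.
forwarded_result = verifier_appraise(evidence, APPRAISAL_POLICY)
granted_background = relying_party_accepts(forwarded_result, RP_POLICY)

print(granted_passport, granted_background)  # True True
```

In both flows the Relying Party's final check is identical; the models differ only in which role carries the Evidence to the Verifier.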
800 Figure 7 shows another example of a combination where Relying Party 1
801 uses the passport model, whereas Relying Party 2 uses an extension of
802 the background-check model.  Specifically, in addition to the basic
803 functionality shown in Figure 6, Relying Party 2 actually provides
804 the Attestation Result back to the Attester, allowing the Attester to
805 use it with other Relying Parties.  This is the model that the
806 Trusted Application Manager plans to support in the TEEP architecture
807 [I-D.ietf-teep-architecture].

809       +-------------+
810       |             | Compare Evidence
811       |  Verifier   | against Appraisal Policy
812       |             |
813       +-------------+
814              ^    |
815      Evidence|    |Attestation
816              |    | Result
817              |    v
818       +-------------+
819       |             | Compare
820       |  Relying    | Attestation Result
821       |  Party 2    | against Appraisal Policy
822       +-------------+
823              ^    |
824      Evidence|    |Attestation
825              |    | Result
826              |    v
827         +----------+               +----------+
828         |          |-------------->|          | Compare Attestation
829         | Attester |  Attestation  | Relying  | Result against
830         |          |    Result     | Party 1  | Appraisal Policy
831         +----------+               +----------+

833                Figure 7: Example Combination

835 6.  Roles and Entities

837 An entity in the RATS architecture includes at least one of the roles
838 defined in this document.  An entity can aggregate more than one role
839 into itself.  These collapsed roles combine the duties of multiple
840 roles.

842 In these cases, interaction between these roles does not necessarily
843 use the Internet Protocol.  These interactions can use a loopback
844 device or other IP-based communication between separate environments,
845 but they do not have to.  Alternative channels to convey conceptual
846 messages include function calls, sockets, GPIO interfaces, local
847 busses, or hypervisor calls.  This type of conveyance is typically
848 found in Composite Devices.
Most importantly, these conveyance methods are 849 out-of-scope of RATS, but they are presumed to exist in order to 850 convey conceptual messages appropriately between roles. 852 For example, an entity that both connects to a wide-area network and 853 to a system bus is taking on both the Attester and Verifier roles. 854 As a system bus entity, a Verifier consumes Evidence from other 855 devices connected to the system bus that implement Attester roles. 856 As a wide-area network connected entity, it may implement an Attester 857 role. The entity, as a system bus Verifier, may choose to fully 858 isolate its role as a wide-area network Attester. 860 In essence, an entity that combines more than one role creates and 861 consumes the corresponding conceptual messages as defined in this 862 document. 864 7. Trust Model 866 7.1. Relying Party 868 The scope of this document is scenarios for which a Relying Party 869 trusts a Verifier that can appraise the trustworthiness of 870 information about an Attester. Such trust might come by the Relying 871 Party trusting the Verifier (or its public key) directly, or might 872 come by trusting an entity (e.g., a Certificate Authority) that is in 873 the Verifier's certificate chain. 875 The Relying Party might implicitly trust a Verifier, such as in a 876 Verifier/Relying Party combination where the Verifier and Relying 877 Party roles are combined. Or, for a stronger level of security, the 878 Relying Party might require that the Verifier first provide 879 information about itself that the Relying Party can use to assess the 880 trustworthiness of the Verifier before accepting its Attestation 881 Results. 883 For example, one explicit way for a Relying Party "A" to establish 884 such trust in a Verifier "B", would be for B to first act as an 885 Attester where A acts as a combined Verifier/Relying Party. If A 886 then accepts B as trustworthy, it can choose to accept B as a 887 Verifier for other Attesters. 
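The bootstrapping sequence above can be illustrated with a small, hypothetical sketch.  The class name and the one-claim appraisal are stand-ins for illustration, not constructs defined by this architecture:

```python
# Sketch of the trust-bootstrapping step described above: B first acts
# as an Attester toward A, which acts as a combined Verifier/Relying
# Party; only if B's Evidence appraises as trustworthy does A start
# accepting B as a Verifier.  Names and logic are hypothetical.

def appraise(evidence, policy):
    """Trivial stand-in for Evidence appraisal."""
    return all(evidence.get(k) == v for k, v in policy.items())

class CombinedVerifierRelyingParty:
    def __init__(self, policy):
        self.policy = policy
        self.trusted_verifiers = set()

    def consider_verifier(self, name, evidence):
        # A appraises the candidate's own Evidence before trusting its
        # Attestation Results about other Attesters.
        if appraise(evidence, self.policy):
            self.trusted_verifiers.add(name)

a = CombinedVerifierRelyingParty({"build": "known-good"})
a.consider_verifier("B", {"build": "known-good"})  # accepted
a.consider_verifier("C", {"build": "tampered"})    # rejected
print(sorted(a.trusted_verifiers))  # ['B']
```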
889 Similarly, the Relying Party also needs to trust the Relying Party 890 Owner for providing its Appraisal Policy for Attestation Results, and 891 in some scenarios the Relying Party might even require that the 892 Relying Party Owner go through a remote attestation procedure with it 893 before the Relying Party will accept an updated policy. This can be 894 done similarly to how a Relying Party could establish trust in a 895 Verifier as discussed above. 897 7.2. Attester 899 In some scenarios, Evidence might contain sensitive information such 900 as Personally Identifiable Information. Thus, an Attester must trust 901 entities to which it conveys Evidence, to not reveal sensitive data 902 to unauthorized parties. The Verifier might share this information 903 with other authorized parties, according to rules that it controls. 904 In the background-check model, this Evidence may also be revealed to 905 Relying Party(s). 907 In some cases where Evidence contains sensitive information, an 908 Attester might even require that a Verifier first go through a remote 909 attestation procedure with it before the Attester will send the 910 sensitive Evidence. This can be done by having the Attester first 911 act as a Verifier/Relying Party, and the Verifier act as its own 912 Attester, as discussed above. 914 7.3. Relying Party Owner 916 The Relying Party Owner might also require that the Relying Party 917 first act as an Attester, providing Evidence that the Owner can 918 appraise, before the Owner would give the Relying Party an updated 919 policy that might contain sensitive information. In such a case, 920 mutual attestation might be needed, in which case typically one 921 side's Evidence must be considered safe to share with an untrusted 922 entity, in order to bootstrap the sequence. 924 7.4. 
Verifier

926 The Verifier trusts (or more specifically, the Verifier's security
927 policy is written in a way that configures the Verifier to trust) a
928 manufacturer, or the manufacturer's hardware, so as to be able to
929 appraise the trustworthiness of that manufacturer's devices.  In
930 solutions with weaker security, a Verifier might be configured to
931 implicitly trust firmware or even software (e.g., a hypervisor).
932 That is, it might appraise the trustworthiness of an application
933 component, operating system component, or service under the
934 assumption that information provided about it by the lower-layer
935 hypervisor or firmware is true.  A stronger level of assurance of
936 security comes when information can be vouched for by hardware or by
937 ROM code, especially if such hardware is physically resistant to
938 hardware tampering.  The component that is implicitly trusted is
939 often referred to as a Root of Trust.

941 A conveyance protocol that provides authentication and integrity
942 protection can be used to convey unprotected Evidence, assuming the
943 following properties exist:

945 1.  The key material used to authenticate and integrity protect the
946     conveyance channel is trusted by the Verifier to speak for the
947     Attesting Environment(s) that collected claims about the Target
948     Environment(s).

950 2.  All unprotected Evidence that is conveyed is supplied exclusively
951     by the Attesting Environment that has the key material that
952     protects the conveyance channel.

954 3.  The Root of Trust protects both the conveyance channel key
955     material and the Attesting Environment with equivalent strength
956     protections.

958 7.5.  Endorser and Verifier Owner

960 In some scenarios, the Endorser and Verifier Owner may need to trust
961 the Verifier before giving the Endorsement and Appraisal Policy to
962 it.
This can be done similarly to how a Relying Party might 963 establish trust in a Verifier as discussed above, and in such a case, 964 mutual attestation might even be needed as discussed in Section 7.3. 966 8. Conceptual Messages 968 8.1. Evidence 970 Evidence is a set of claims about the target environment that reveal 971 operational status, health, configuration or construction that have 972 security relevance. Evidence is evaluated by a Verifier to establish 973 its relevance, compliance, and timeliness. Claims need to be 974 collected in a manner that is reliable. Evidence needs to be 975 securely associated with the target environment so that the Verifier 976 cannot be tricked into accepting claims originating from a different 977 environment (that may be more trustworthy). Evidence also must be 978 protected from man-in-the-middle attackers who may observe, change or 979 misdirect Evidence as it travels from Attester to Verifier. The 980 timeliness of Evidence can be captured using claims that pinpoint the 981 time or interval when changes in operational status, health, and so 982 forth occur. 984 8.2. Endorsements 986 An Endorsement is a secure statement that some entity (e.g., a 987 manufacturer) vouches for the integrity of the device's signing 988 capability. For example, if the signing capability is in hardware, 989 then an Endorsement might be a manufacturer certificate that signs a 990 public key whose corresponding private key is only known inside the 991 device's hardware. Thus, when Evidence and such an Endorsement are 992 used together, an appraisal procedure can be conducted based on 993 Appraisal Policies that may not be specific to the device instance, 994 but merely specific to the manufacturer providing the Endorsement. 
995 For example, an Appraisal Policy might simply check that devices from 996 a given manufacturer have information matching a set of known-good 997 reference values, or an Appraisal Policy might have a set of more 998 complex logic on how to appraise the validity of information. 1000 However, while an Appraisal Policy that treats all devices from a 1001 given manufacturer the same may be appropriate for some use cases, it 1002 would be inappropriate to use such an Appraisal Policy as the sole 1003 means of authorization for use cases that wish to constrain _which_ 1004 compliant devices are considered authorized for some purpose. For 1005 example, an enterprise using remote attestation for Network Endpoint 1006 Assessment may not wish to let every healthy laptop from the same 1007 manufacturer onto the network, but instead only want to let devices 1008 that it legally owns onto the network. Thus, an Endorsement may be 1009 helpful information in authenticating information about a device, but 1010 is not necessarily sufficient to authorize access to resources which 1011 may need device-specific information such as a public key for the 1012 device or component or user on the device. 1014 8.3. Attestation Results 1016 Attestation Results are the input used by the Relying Party to decide 1017 the extent to which it will trust a particular Attester, and allow it 1018 to access some data or perform some operation. Attestation Results 1019 may be a Boolean simply indicating compliance or non-compliance with 1020 a Verifier's Appraisal Policy, or a rich set of Claims about the 1021 Attester, against which the Relying Party applies its Appraisal 1022 Policy for Attestation Results. 1024 A result that indicates non-compliance can be used by an Attester (in 1025 the passport model) or a Relying Party (in the background-check 1026 model) to indicate that the Attester should not be treated as 1027 authorized and may be in need of remediation. 
In some cases, it may
1028 even indicate that the Evidence itself cannot be authenticated as
1029 being correct.

1031 An Attestation Result that indicates compliance can be used by a
1032 Relying Party to make authorization decisions based on the Relying
1033 Party's Appraisal Policy.  The simplest such policy might be to
1034 simply authorize any party supplying a compliant Attestation Result
1035 signed by a trusted Verifier.  A more complex policy might also
1036 entail comparing information provided in the result against known-
1037 good reference values, or applying more complex logic on such
1038 information.

1040 Thus, Attestation Results often need to include detailed information
1041 about the Attester, for use by Relying Parties, much like physical
1042 passports and driver's licenses include personal information such as
1043 name and date of birth.  Unlike Evidence, which is often very device-
1044 and vendor-specific, Attestation Results can be vendor-neutral if the
1045 Verifier has a way to generate vendor-agnostic information based on
1046 the appraisal of vendor-specific information in Evidence.  This
1047 allows a Relying Party's Appraisal Policy to be simpler, potentially
1048 based on standard ways of expressing the information, while still
1049 allowing interoperability with heterogeneous devices.

1051 Finally, whereas Evidence is signed by the device (or indirectly by a
1052 manufacturer, if Endorsements are used), Attestation Results are
1053 signed by a Verifier, allowing a Relying Party to only need a trust
1054 relationship with one entity, rather than a larger set of entities,
1055 for purposes of its Appraisal Policy.

1057 9.
Claims Encoding Formats

1059 The following diagram illustrates a relationship to which remote
1060 attestation is desired to be added:

1062    +-------------+               +------------+ Evaluate
1063    |             |-------------->|            | request
1064    |  Attester   |  Access some  |  Relying   | against
1065    |             |   resource    |   Party    | security
1066    +-------------+               +------------+ policy

1068                Figure 8: Typical Resource Access

1070 In this diagram, the protocol between an Attester and a Relying Party
1071 can be any new or existing protocol (e.g., HTTP(S), CoAP(S), ROLIE
1072 [RFC8322], 802.1x, OPC UA, etc.), depending on the use case.  Such
1073 protocols typically already have mechanisms for passing security
1074 information for purposes of authentication and authorization.  Common
1075 formats include JWTs [RFC7519], CWTs [RFC8392], and X.509
1076 certificates.

1078 To enable remote attestation to be added to existing protocols (for
1079 example, to enable a higher level of assurance against malware), it
1080 is important that information needed for appraising the Attester be
1081 usable with existing protocols that have constraints around what
1082 formats they can transport.  For example, OPC UA [OPCUA] (probably
1083 the most common protocol in industrial IoT environments) is defined
1084 to carry X.509 certificates, and so security information must be
1085 embedded into an X.509 certificate to be passed in the protocol.
1086 Thus, remote attestation related information could be natively
1087 encoded in X.509 certificate extensions, or could be natively encoded
1088 in some other format (e.g., a CWT) which in turn is then encoded in
1089 an X.509 certificate extension.

1091 Especially for constrained nodes, however, there is a desire to
1092 minimize the amount of parsing code needed in a Relying Party, in
1093 order to minimize both the footprint and the attack surface
1094 area.
So while it would be possible to embed a CWT inside a JWT, or 1095 a JWT inside an X.509 extension, etc., there is a desire to encode 1096 the information natively in the format that is natural for the 1097 Relying Party. 1099 This motivates having a common "information model" that describes the 1100 set of remote attestation related information in an encoding-agnostic 1101 way, and allowing multiple encoding formats (CWT, JWT, X.509, etc.) 1102 that encode the same information into the claims format needed by the 1103 Relying Party. 1105 The following diagram illustrates that Evidence and Attestation 1106 Results might each have multiple possible encoding formats, so that 1107 they can be conveyed by various existing protocols. It also 1108 motivates why the Verifier might also be responsible for accepting 1109 Evidence that encodes claims in one format, while issuing Attestation 1110 Results that encode claims in a different format. 1112 Evidence Attestation Results 1113 .--------------. CWT CWT .-------------------. 1114 | Attester-A |------------. .----------->| Relying Party V | 1115 '--------------' v | `-------------------' 1116 .--------------. JWT .------------. JWT .-------------------. 1117 | Attester-B |-------->| Verifier |-------->| Relying Party W | 1118 '--------------' | | `-------------------' 1119 .--------------. X.509 | | X.509 .-------------------. 1120 | Attester-C |-------->| |-------->| Relying Party X | 1121 '--------------' | | `-------------------' 1122 .--------------. TPM | | TPM .-------------------. 1123 | Attester-D |-------->| |-------->| Relying Party Y | 1124 '--------------' '------------' `-------------------' 1125 .--------------. other ^ | other .-------------------. 1126 | Attester-E |------------' '----------->| Relying Party Z | 1127 '--------------' `-------------------' 1129 Figure 9: Multiple Attesters and Relying Parties with Different 1130 Formats 1132 10. 
Freshness 1134 A remote entity (Verifier or Relying Party) may need to learn the 1135 point in time (i.e., the "epoch") an Evidence or Attestation Result 1136 has been produced. This is essential in deciding whether the 1137 included Claims and their values can be considered fresh, meaning 1138 they still reflect the latest state of the Attester, and that any 1139 Attestation Result was generated using the latest Appraisal Policy 1140 for Evidence. 1142 Freshness is assessed based on a policy defined by the consuming 1143 entity, Verifier or Relying Party, that compares the estimated epoch 1144 against an "expiry" threshold defined locally to that policy. There 1145 is, however, always a race condition possible in that the state of 1146 the Attester, and the Appraisal Policy for Evidence, might change 1147 immediately after the Evidence or Attestation Result was generated. 1148 The goal is merely to narrow their recentness to something the 1149 Verifier (for Evidence) or Relying Party (for Attestation Result) is 1150 willing to accept. Freshness is a key component for enabling caching 1151 and reuse of both Evidence and Attestation Results, which is 1152 especially valuable in cases where their computation uses a 1153 substantial part of the resource budget (e.g., energy in constrained 1154 devices). 1156 There are two common approaches for determining the epoch of an 1157 Evidence or Attestation Result. 1159 The first approach is to rely on synchronized and trustworthy clocks, 1160 and include a signed timestamp (see [I-D.birkholz-rats-tuda]) along 1161 with the Claims in the Evidence or Attestation Result. Timestamps 1162 can be added on a per-Claim basis, to distinguish the time of 1163 creation of Evidence or Attestation Result from the time that a 1164 specific Claim was generated. The clock's trustworthiness typically 1165 requires additional Claims about the signer's time synchronization 1166 mechanism. 
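On the appraising side, this first approach reduces to comparing the signed epoch against the local expiry threshold.  A minimal sketch follows; the 300-second threshold and function names are illustrative assumptions, not values from this architecture:

```python
# Minimal sketch of the timestamp-based freshness check: the appraising
# entity compares the signed epoch in the Evidence or Attestation
# Result against a locally defined "expiry" threshold.
import time

EXPIRY_SECONDS = 300  # assumed local expiry policy of the appraising entity

def is_fresh(epoch, now=None, expiry=EXPIRY_SECONDS):
    """Accept only epochs that are neither in the future nor older
    than the expiry window."""
    now = time.time() if now is None else now
    return 0 <= now - epoch <= expiry

now = 1_000_000.0
print(is_fresh(now - 60, now=now))    # recent enough: accepted
print(is_fresh(now - 3600, now=now))  # too old: rejected
print(is_fresh(now + 10, now=now))    # claimed from the future: rejected
```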
1168 A second approach places the onus of timekeeping solely on the 1169 appraising entity, i.e., the Verifier (for Evidence), or the Relying 1170 Party (for Attestation Results), and might be suitable, for example, 1171 in case the Attester does not have a reliable clock or time 1172 synchronisation is otherwise impaired. In this approach, a non- 1173 predictable nonce is sent by the appraising entity, and the nonce is 1174 then signed and included along with the Claims in the Evidence or 1175 Attestation Result. After checking that the sent and received nonces 1176 are the same, the appraising entity knows that the Claims were signed 1177 after the nonce was generated. This allows associating a "rough" 1178 epoch to the Evidence or Attestation Result. In this case the epoch 1179 is said to be rough because: 1181 * The epoch applies to the entire claim set instead of a more 1182 granular association, and 1184 * The time between the creation of Claims and the collection of 1185 Claims is indistinguishable. 1187 Implicit and explicit timekeeping can be combined into hybrid 1188 mechanisms. For example, if clocks exist and are considered 1189 trustworthy but are not synchronized, a nonce-based exchange may be 1190 used to determine the (relative) time offset between the involved 1191 peers, followed by any number of timestamp based exchanges. In 1192 another setup where all Roles (Attesters, Verifiers and Relying 1193 Parties) share the same broadcast channel, the nonce-based approach 1194 may be used to anchor all parties to the same (relative) timeline, 1195 without requiring synchronized clocks, by having a central entity 1196 emit nonces at regular intervals and have the "current" nonce 1197 included in the produced Evidence or Attestation Result. 1199 It is important to note that the actual values in Claims might have 1200 been generated long before the Claims are signed. 
If so, it is the 1201 signer's responsibility to ensure that the values are still correct 1202 when they are signed. For example, values generated at boot time 1203 might have been saved to secure storage until network connectivity is 1204 established to the remote Verifier and a nonce is obtained. 1206 A more detailed discussion with examples appears in Section 16. 1208 11. Privacy Considerations 1210 The conveyance of Evidence and the resulting Attestation Results 1211 reveal a great deal of information about the internal state of a 1212 device as well as any users the device is associated with. In many 1213 cases, the whole point of the Attestation process is to provide 1214 reliable information about the type of the device and the firmware/ 1215 software that the device is running. This information might be 1216 particularly interesting to many attackers. For example, knowing 1217 that a device is running a weak version of firmware provides a way to 1218 aim attacks better. 1220 Many claims in Attestation Evidence and Attestation Results are 1221 potentially PII (Personally Identifying Information) depending on the 1222 end-to-end use case of the attestation. Attestation that goes up to 1223 include containers and applications may further reveal details about 1224 a specific system or user. 1226 In some cases, an attacker may be able to make inferences about 1227 attestations from the results or timing of the processing. For 1228 example, an attacker might be able to infer the value of specific 1229 claims if it knew that only certain values were accepted by the 1230 Relying Party. 1232 Evidence and Attestation Results data structures are expected to 1233 support integrity protection encoding (e.g., COSE, JOSE, X.509) and 1234 optionally might support confidentiality protection (e.g., COSE, 1235 JOSE). 
Therefore, if confidentiality protection is omitted or
1236 unavailable, the protocols that convey Evidence or Attestation
1237 Results are responsible for detailing what kinds of information are
1238 disclosed, and to whom they are exposed.

1240 Furthermore, because Evidence might contain sensitive information,
1241 Attesters are responsible for only sending such Evidence to trusted
1242 Verifiers.  Some Attesters might want a stronger level of assurance
1243 of the trustworthiness of a Verifier before sending Evidence to it.
1244 In such cases, an Attester can first act as a Relying Party and ask
1245 for the Verifier's own Attestation Result, appraising it just as
1246 a Relying Party would appraise an Attestation Result for any other
1247 purpose.

1249 12.  Security Considerations

1250 12.1.  Attester and Attestation Key Protection

1252 Implementers need to pay close attention to the isolation and
1253 protection of the Attester and the factory processes for provisioning
1254 the Attestation Key Material.  When either of these is compromised,
1255 the remote attestation becomes worthless because the attacker can
1256 forge Evidence.

1258 Remote attestation applies to use cases with a range of security
1259 requirements, so the protections discussed here range from low to
1260 high security, where low security may be only application or process
1261 isolation by the device's operating system and high security involves
1262 specialized hardware to defend against physical attacks on a chip.

1264 12.1.1.  On-Device Attester and Key Protection

1266 It is assumed that the Attester is located in an isolated environment
1267 of a device, such as a process, a dedicated chip, or a TEE, that
1268 collects the Claims, formats them, and signs them with an Attestation
1269 Key.  The Attester must be protected from unauthorized modification to
1270 ensure it behaves correctly.
There must also be confidentiality so
1271 that the signing key is not captured and used elsewhere to forge
1272 Evidence.

1274 In many cases, the user or owner of the device must not be able to
1275 modify or exfiltrate keys from the Attesting Environment of the
1276 Attester.  For example, the owner or user of a mobile phone or FIDO
1277 authenticator is not trusted.  The point of remote attestation is for
1278 the Relying Party to be able to trust the Attester even though they
1279 don't trust the user or owner.

1281 Some of the measures for low-level security include process or
1282 application isolation by a high-level operating system, and perhaps
1283 restricting access to root or system privilege.  For extremely simple
1284 single-use devices that don't use a protected-mode operating system,
1285 like a Bluetooth speaker, the isolation might only be the plastic
1286 housing for the device.

1288 At medium-level security, a special restricted operating environment
1289 such as a Trusted Execution Environment (TEE) might be used.  In this
1290 case, only security-oriented software has access to the Attester and
1291 key material.

1293 For high-level security, specialized hardware will likely be used,
1294 providing protection against chip decapping attacks, power supply and
1295 clock glitching, fault injection, and RF and power side-channel
1296 attacks.

1298 12.1.2.  Attestation Key Provisioning Processes

1300 Attestation key provisioning is the process that occurs in the
1301 factory or elsewhere that establishes the signing key material on the
1302 device and the verification key material off the device.  Sometimes
1303 this is referred to as "personalization".

1305 One way to provision a key is to first generate it external to the
1306 device and then copy the key onto the device.  In this case,
1307 confidentiality of the generator, as well as the path over which the
1308 key is provisioned, is necessary.  This can be achieved in a number
1309 of ways.
1311 Confidentiality can be achieved entirely with physical provisioning 1312 facility security involving no encryption at all. For low-security 1313 use cases, this might be simply locking doors and limiting the personnel 1314 that can enter the facility. For high-security use cases, this might 1315 involve a special area of the facility accessible only to select 1316 security-trained personnel. 1318 Cryptography can also be used to support confidentiality, but the keys 1319 used to provision attestation keys must themselves have 1320 been provisioned securely beforehand (a recursive problem). 1322 In many cases, a combination of physical security and cryptography will 1323 be necessary and useful to establish confidentiality. 1325 Another way to provision the key material is to generate it on the 1326 device and export the verification key. If public key cryptography 1327 is being used, then only integrity is necessary; confidentiality is 1328 not. 1330 In all cases, the Attestation Key provisioning process must ensure 1331 that only attestation key material that is generated by a valid 1332 Endorser is established in Attesters and then configured correctly. 1333 For many use cases, this will involve physical security at the 1334 facility to prevent the manufacture of unauthorized devices 1335 that may be counterfeit or incorrectly configured. 1337 12.2. Integrity Protection 1339 Any solution that conveys information used for security purposes, 1340 whether such information is in the form of Evidence, Attestation 1341 Results, Endorsements, or Appraisal Policy, must support end-to-end 1342 integrity protection and replay attack prevention, and often also 1343 needs to support additional security properties, including: 1345 * end-to-end encryption, 1346 * denial-of-service protection, 1348 * authentication, 1350 * auditing, 1352 * fine-grained access controls, and 1354 * logging.
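As a non-normative illustration of the integrity-protection and replay-prevention requirements above, the following sketch protects and appraises a serialized set of Claims. It uses Python's stdlib HMAC purely as a stand-in for an attestation signature scheme; the field names and encoding are hypothetical and not defined by this architecture.

```python
import hashlib
import hmac
import json

def protect_evidence(claims: dict, nonce: bytes, key: bytes) -> dict:
    # Serialize deterministically; include the nonce so the Verifier
    # can detect replayed Evidence.
    payload = json.dumps({"claims": claims, "nonce": nonce.hex()},
                         sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def appraise_evidence(evidence: dict, expected_nonce: bytes, key: bytes) -> bool:
    # End-to-end integrity: recompute the tag over the exact payload
    # as received; any modification in transit changes the tag.
    tag = hmac.new(key, evidence["payload"].encode(),
                   hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, evidence["tag"]):
        return False
    body = json.loads(evidence["payload"])
    # Replay prevention: the nonce must match the one this appraisal expects.
    return body["nonce"] == expected_nonce.hex()
```

A real Attester would use an asymmetric Attestation Key instead, so that only verification key material, not signing capability, leaves the device.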
1356 Section 10 discusses ways in which freshness can be used in this 1357 architecture to protect against replay attacks. 1359 To assess the security provided by a particular Appraisal Policy, it 1360 is important to understand the strength of the Root of Trust, e.g., 1361 whether it is mutable software, or firmware that is read-only after 1362 boot, or immutable hardware/ROM. 1364 It is also important that the Appraisal Policy was itself obtained 1365 securely. As such, if Appraisal Policies for a Relying Party or for 1366 a Verifier can be configured via a network protocol, the ability to 1367 create Evidence about the integrity of the entity providing the 1368 Appraisal Policy needs to be considered. 1370 The protection of conveyed information may be applied at different 1371 layers, whether by a conveyance protocol or an information encoding 1372 format. This architecture expects attestation messages (i.e., 1373 Evidence, Attestation Results, Endorsements, and Policies) to be end-to- 1374 end protected based on the role interaction context. For example, if 1375 an Attester produces Evidence that is relayed through some other 1376 entity that doesn't implement the Attester or the intended Verifier 1377 roles, then the relaying entity should not expect to have access to 1378 the Evidence. 1380 13. IANA Considerations 1382 This document does not require any actions by IANA. 1384 14. Acknowledgments 1386 Special thanks go to Joerg Borchert, Nancy Cam-Winget, Jessica 1387 Fitzgerald-McKay, Thomas Fossati, Diego Lopez, Laurence Lundblade, 1388 Paul Rowe, Hannes Tschofenig, Frank Xia, and David Wooten. 1390 15. Contributors 1392 Thomas Hardjono created older versions of the terminology section in 1393 collaboration with Ned Smith. Eric Voit provided the conceptual 1394 separation between Attestation Provision Flows and Attestation 1395 Evidence Flows. Monty Wiseman created the content structure of the 1396 first three architecture drafts.
Carsten Bormann provided many of 1397 the motivational building blocks with respect to the Internet Threat 1398 Model. 1400 16. Appendix A: Time Considerations 1402 The table below defines a number of relevant events, with an ID that 1403 is used in subsequent diagrams. The times of said events might be 1404 defined in terms of an absolute clock time such as Coordinated 1405 Universal Time, or might be defined relative to some other timestamp 1406 or timeticks counter. 1408 +====+==============+=============================================+ 1409 | ID | Event | Explanation of event | 1410 +====+==============+=============================================+ 1411 | VG | Value | A value to appear in a Claim was created. | 1412 | | generated | In some cases, a value may have technically | 1413 | | | existed before an Attester became aware of | 1414 | | | it but the Attester might have no idea how | 1415 | | | long it has had that value. In such a | 1416 | | | case, the Value created time is the time at | 1417 | | | which the Claim containing the copy of the | 1418 | | | value was created. | 1419 +----+--------------+---------------------------------------------+ 1420 | HD | Handle | A centrally generated identifier for time- | 1421 | | distribution | bound recentness across a domain of devices | 1422 | | | is successfully distributed to Attesters. | 1423 +----+--------------+---------------------------------------------+ 1424 | NS | Nonce sent | A nonce not predictable to an Attester | 1425 | | | (recentness & uniqueness) is sent to an | 1426 | | | Attester. | 1427 +----+--------------+---------------------------------------------+ 1428 | NR | Nonce | A nonce is relayed to an Attester by | 1429 | | relayed | another entity. | 1430 +----+--------------+---------------------------------------------+ 1431 | HR | Handle | A handle distributed by a Handle | 1432 | | received | Distributor was received. 
| 1433 +----+--------------+---------------------------------------------+ 1434 | EG | Evidence | An Attester creates Evidence from collected | 1435 | | generation | Claims. | 1436 +----+--------------+---------------------------------------------+ 1437 | ER | Evidence | A Relying Party relays Evidence to a | 1438 | | relayed | Verifier. | 1439 +----+--------------+---------------------------------------------+ 1440 | RG | Result | A Verifier appraises Evidence and generates | 1441 | | generation | an Attestation Result. | 1442 +----+--------------+---------------------------------------------+ 1443 | RR | Result | An Attester relays an Attestation | 1444 | | relayed | Result to a Relying Party. | 1445 +----+--------------+---------------------------------------------+ 1446 | RA | Result | The Relying Party appraises Attestation | 1447 | | appraised | Results. | 1448 +----+--------------+---------------------------------------------+ 1449 | OP | Operation | The Relying Party performs some operation | 1450 | | performed | requested by the Attester. For example, | 1451 | | | acting upon some message just received | 1452 | | | across a session created earlier at | 1453 | | | time(RA). | 1454 +----+--------------+---------------------------------------------+ 1455 | RX | Result | An Attestation Result should no longer be | 1456 | | expiry | accepted, according to the Verifier that | 1457 | | | generated it. | 1458 +----+--------------+---------------------------------------------+ 1460 Table 1 1462 Using the table above, a number of hypothetical examples of how a 1463 solution might be built are illustrated below. This list 1464 is not intended to be complete, but is just 1465 representative enough to highlight various timing considerations. 1467 All times are relative to the local clocks, indicated by an "a" 1468 (Attester), "v" (Verifier), or "r" (Relying Party) suffix. 1470 Whether and how clocks are synchronized depends upon the model.
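The nonce-based freshness checks used in several of the examples below reduce to bookkeeping on the receiver's own clock. The following sketch shows a hypothetical Verifier-side helper (class and method names are illustrative, not defined by this architecture):

```python
import secrets
import time

class NonceFreshness:
    """Verifier-side bookkeeping for nonce-based freshness."""

    def __init__(self, threshold_seconds: float):
        self.threshold = threshold_seconds
        self.outstanding = {}  # nonce -> time(NS_v) on the Verifier's clock

    def new_nonce(self) -> bytes:
        # A nonce not predictable to the Attester (recentness & uniqueness).
        nonce = secrets.token_bytes(16)
        self.outstanding[nonce] = time.monotonic()  # record time(NS_v)
        return nonce

    def evidence_is_fresh(self, nonce: bytes) -> bool:
        # Check "time(RG_v) - time(NS_v) < Threshold" entirely on the
        # Verifier's own clock; no synchronization with the Attester needed.
        ns_v = self.outstanding.pop(nonce, None)
        if ns_v is None:
            return False  # unknown or already-consumed nonce
        return time.monotonic() - ns_v < self.threshold
```

Consuming the nonce on first use means a replayed piece of Evidence fails appraisal even inside the threshold window.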
1472 16.1. Example 1: Timestamp-based Passport Model Example 1474 The following example illustrates a hypothetical Passport Model 1475 solution that uses timestamps and requires roughly synchronized 1476 clocks between the Attester, Verifier, and Relying Party, which 1477 depends on using a secure clock synchronization mechanism. As a 1478 result, the receiver of a conceptual message containing a timestamp 1479 can directly compare it to its own clock and timestamps. 1481 .----------. .----------. .---------------. 1482 | Attester | | Verifier | | Relying Party | 1483 '----------' '----------' '---------------' 1484 time(VG_a) | | 1485 | | | 1486 ~ ~ ~ 1487 | | | 1488 time(EG_a) | | 1489 |------Evidence{time(EG_a)}------>| | 1490 | time(RG_v) | 1491 |<-----Attestation Result---------| | 1492 | {time(RG_v),time(RX_v)} | | 1493 ~ ~ 1494 | | 1495 |----Attestation Result{time(RG_v),time(RX_v)}-->time(RA_r) 1496 | | 1497 ~ ~ 1498 | | 1499 | time(OP_r) 1500 | | 1502 The Verifier can check whether the Evidence is fresh when appraising 1503 it at time(RG_v) by checking "time(RG_v) - time(EG_a) < Threshold", 1504 where the Verifier's threshold is large enough to account for the 1505 maximum permitted clock skew between the Verifier and the Attester. 1507 If time(VG_a) is also included in the Evidence along with the claim 1508 value generated at that time, and the Verifier decides that it can 1509 trust the time(VG_a) value, the Verifier can also determine whether 1510 the claim value is recent by checking "time(RG_v) - time(VG_a) < 1511 Threshold", again where the threshold is large enough to account for 1512 the maximum permitted clock skew between the Verifier and the 1513 Attester. 
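Under the stated assumption of securely synchronized clocks, these two Verifier checks can be sketched as simple comparisons (function and parameter names are illustrative only):

```python
def evidence_is_fresh(rg_v: float, eg_a: float, threshold: float) -> bool:
    # "time(RG_v) - time(EG_a) < Threshold", where the threshold is large
    # enough to absorb the maximum permitted clock skew between the
    # Verifier and the Attester.
    return rg_v - eg_a < threshold

def claim_is_recent(rg_v: float, vg_a: float, threshold: float) -> bool:
    # "time(RG_v) - time(VG_a) < Threshold"; only meaningful if the
    # Verifier decides it can trust the Attester-reported time(VG_a).
    return rg_v - vg_a < threshold
```

For example, Evidence generated two seconds before appraisal passes a five-second threshold, while Evidence generated ten seconds before does not.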
1515 The Relying Party can check whether the Attestation Result is fresh 1516 when appraising it at time(RA_r) by checking "time(RA_r) - time(RG_v) 1517 < Threshold", where the Relying Party's threshold is large enough to 1518 account for the maximum permitted clock skew between the Relying 1519 Party and the Verifier. The result might then be used for some time 1520 (e.g., throughout the lifetime of a connection established at 1521 time(RA_r)). The Relying Party must be careful, however, to not 1522 allow continued use beyond the period for which it deems the 1523 Attestation Result to remain fresh enough. Thus, it might allow use 1524 (at time(OP_r)) as long as "time(OP_r) - time(RG_v) < Threshold". 1525 However, if the Attestation Result contains an expiry time time(RX_v) 1526 then it could explicitly check "time(OP_r) < time(RX_v)". 1528 16.2. Example 2: Nonce-based Passport Model Example 1530 The following example illustrates a hypothetical Passport Model 1531 solution that uses nonces and thus does not require that any clocks 1532 are synchronized. 1534 As a result, the receiver of a conceptual message containing a 1535 timestamp cannot directly compare it to its own clock or timestamps. 1536 Thus we use a suffix ("a" for Attester, "v" for Verifier, and "r" for 1537 Relying Party) on the IDs below indicating which clock generated 1538 them, since times from different clocks cannot be compared. Only the 1539 delta between two events from the sender can be used by the receiver. 1541 .----------. .----------. .---------------. 
1542 | Attester | | Verifier | | Relying Party | 1543 '----------' '----------' '---------------' 1544 time(VG_a) | | 1545 | | | 1546 ~ ~ ~ 1547 | | | 1548 |<--Nonce1---------------------time(NS_v) | 1549 time(EG_a) | | 1550 |---Evidence--------------------->| | 1551 | {Nonce1, time(EG_a)-time(VG_a)} | | 1552 | time(RG_v) | 1553 |<--Attestation Result------------| | 1554 | {time(RX_v)-time(RG_v)} | | 1555 ~ ~ 1556 | | 1557 |<--Nonce2-------------------------------------time(NS_r) 1558 time(RR_a) 1559 |---Attestation Result{time(RX_v)-time(RG_v)}->time(RA_r) 1560 | Nonce2, time(RR_a)-time(EG_a) | 1561 ~ ~ 1562 | | 1563 | time(OP_r) 1565 In this example solution, the Verifier can check whether the Evidence 1566 is fresh at "time(RG_v)" by verifying that "time(RG_v)-time(NS_v) < 1567 Threshold". 1569 The Verifier cannot, however, simply rely on a Nonce to determine 1570 whether the value of a claim is recent, since the claim value might 1571 have been generated long before the nonce was sent by the Verifier. 1572 However, if the Verifier decides that the Attester can be trusted to 1573 correctly provide the delta "time(EG_a)-time(VG_a)", then it can 1574 determine recency by checking "time(RG_v)-time(NS_v) + time(EG_a)- 1575 time(VG_a) < Threshold". 1577 Similarly, if, based on an Attestation Result from a Verifier it 1578 trusts, the Relying Party decides that the Attester can be trusted to 1579 correctly provide time deltas, then it can determine whether the 1580 Attestation Result is fresh by checking "time(OP_r)-time(NS_r) + 1581 time(RR_a)-time(EG_a) < Threshold". Although the Nonce2 and 1582 "time(RR_a)-time(EG_a)" values cannot be inside the Attestation 1583 Result, they might be signed by the Attester such that the 1584 Attestation Result vouches for the Attester's signing capability. 1586 The Relying Party must still be careful, however, to not allow 1587 continued use beyond the period for which it deems the Attestation 1588 Result to remain valid.
Thus, if the Attestation Result includes a 1589 validity lifetime in terms of "time(RX_v)-time(RG_v)", then the 1590 Relying Party can check "time(OP_r)-time(NS_r) < time(RX_v)- 1591 time(RG_v)". 1593 16.3. Example 3: Handle-based Passport Model Example 1595 Handles are a third option for establishing time-keeping, alongside 1596 nonces and timestamps. Handles are opaque data intended to be available to 1597 all RATS roles that interact with each other, such as the Attester or 1598 Verifier, at specified intervals. To enable this availability, 1599 handles are distributed centrally by the Handle Distributor role over 1600 the network. As with any other role, the Handle Distributor role can be 1601 taken on by a dedicated entity or collapsed with other roles, such as 1602 a Verifier. The use of handles can compensate for a lack of clocks 1603 or other sources of time on entities taking on RATS roles. The only 1604 entity that requires access to a source of time is the entity taking 1605 on the role of Handle Distributor. 1607 Handles differ from nonces in that they can be used more than once 1608 and can be used by more than one entity at the same time. Handles 1609 differ from timestamps in that they do not have to convey 1610 information about a point in time; rather, their reception creates that 1611 information. The reception of a handle is similar to the event that 1612 increments a relative tickcounter. Receipt of a new handle 1613 invalidates a previously received handle. 1615 In this example, Evidence generation based on received handles always 1616 uses the current (most recent) handle. As handles are distributed 1617 over the network, all involved entities receive a fresh handle at 1618 roughly the same time. Due to distribution over the network, there 1619 is some jitter with respect to the time the Handle is received, 1620 time(HR), for each involved entity.
To compensate for this jitter, 1621 there is a small period of overlap (a specified offset) in which both 1622 a current handle and the corresponding former handle are valid in 1623 Evidence appraisal: "validity-duration = time(HR'_v) + offset - 1624 time(HR_v)". The offset is typically based on a network's round-trip 1625 time. Analogously, the generation of valid Evidence is only 1626 possible if the age of the handle used is lower than the validity- 1627 duration: "time(HR_v) - time(EG_a) < validity-duration". 1629 From the point of view of a Verifier, the generation of valid 1630 Evidence is only possible if the age of the handle used in the 1631 Evidence generation is less than the duration of the distribution 1632 interval: "(time(HR'_v)-time(HR_v)) - (time(HR_a)-time(EG_a)) < 1633 validity-duration". 1635 Due to the validity-duration of handles, multiple different pieces of 1636 Evidence can be generated based on the same handle. The resulting 1637 granularity (time resolution) of Evidence freshness is typically 1638 lower than the resolution of clock-based tickcounters. 1640 The following example illustrates a hypothetical Passport 1641 Model solution that uses handles and requires a trustworthy time 1642 source available to the Handle Distributor role. 1644 .-------------. 1645 .----------. | Handle | .----------. .---------------.
1646 | Attester | | Distributor | | Verifier | | Relying Party | 1647 '----------' '-------------' '----------' '---------------' 1648 time(VG_a) | | | 1649 | | | | 1650 ~ ~ ~ ~ 1651 | | | | 1652 time(HR_a)<---------+-------------time(HR_v)------>time(HR_r) 1653 | | | | 1654 time(EG_a) | | | 1655 |----Evidence{time(EG_a)}-------->| | 1656 | {Handle1,time(EG_a)-time(VG_a)}| | 1657 | | time(RG_v) | 1658 |<-----Attestation Result---------| | 1659 | {time(RG_v),time(RX_v)} | | 1660 | | | 1661 ~ ~ ~ 1662 | | | 1663 time(HR_a')<--------'---------------------------->time(HR_r') 1664 | | 1665 time(RR_a) / 1666 |--Attestation Result{time(RX_v)-time(RG_v)}-->time(RA_r) 1667 | {Handle2, time(RR_a)-time(EG_a)} | 1668 ~ ~ 1669 | | 1670 | time(OP_r) 1671 | | 1673 16.4. Example 4: Timestamp-based Background-Check Model Example 1675 The following example illustrates a hypothetical Background-Check 1676 Model solution that uses timestamps and requires roughly synchronized 1677 clocks between the Attester, Verifier, and Relying Party. 1679 .----------. .---------------. .----------. 1680 | Attester | | Relying Party | | Verifier | 1681 '----------' '---------------' '----------' 1682 time(VG_a) | | 1683 | | | 1684 ~ ~ ~ 1685 | | | 1686 time(EG_a) | | 1687 |----Evidence------->| | 1688 | {time(EG_a)} time(ER_r)--Evidence{time(EG_a)}->| 1689 | | time(RG_v) 1690 | time(RA_r)<-Attestation Result---| 1691 | | {time(RX_v)} | 1692 ~ ~ ~ 1693 | | | 1694 | time(OP_r) | 1696 The time considerations in this example are equivalent to those 1697 discussed under Example 1 above. 1699 16.5. Example 5: Nonce-based Background-Check Model Example 1701 The following example illustrates a hypothetical Background-Check 1702 Model solution that uses nonces and thus does not require that any 1703 clocks are synchronized. In this example solution, a nonce is 1704 generated by a Verifier at the request of a Relying Party, when the 1705 Relying Party needs to send one to an Attester. 1707 .----------. 
.---------------. .----------. 1708 | Attester | | Relying Party | | Verifier | 1709 '----------' '---------------' '----------' 1710 time(VG_a) | | 1711 | | | 1712 ~ ~ ~ 1713 | | | 1714 | |<-------Nonce-----------time(NS_v) 1715 |<---Nonce-----------time(NR_r) | 1716 time(EG_a) | | 1717 |----Evidence{Nonce}--->| | 1718 | time(ER_r)--Evidence{Nonce}--->| 1719 | | time(RG_v) 1720 | time(RA_r)<-Attestation Result-| 1721 | | {time(RX_v)-time(RG_v)} | 1722 ~ ~ ~ 1723 | | | 1724 | time(OP_r) | 1726 The Verifier can check whether the Evidence is fresh, and whether a 1727 claim value is recent, in the same way as in Example 2 above. 1729 However, unlike in Example 2, the Relying Party can use the Nonce to 1730 determine whether the Attestation Result is fresh, by verifying that 1731 "time(OP_r)-time(NR_r) < Threshold". 1733 The Relying Party must still be careful, however, to not allow 1734 continued use beyond the period for which it deems the Attestation 1735 Result to remain valid. Thus, if the Attestation Result includes a 1736 validity lifetime in terms of "time(RX_v)-time(RG_v)", then the 1737 Relying Party can check "time(OP_r)-time(ER_r) < time(RX_v)- 1738 time(RG_v)". 1740 17. References 1742 17.1. Normative References 1744 [RFC7519] Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token 1745 (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015, 1746 . 1748 [RFC8392] Jones, M., Wahlstroem, E., Erdtman, S., and H. Tschofenig, 1749 "CBOR Web Token (CWT)", RFC 8392, DOI 10.17487/RFC8392, 1750 May 2018, . 1752 17.2. Informative References 1754 [CTAP] FIDO Alliance, "Client to Authenticator Protocol", n.d., 1755 . 1759 [I-D.birkholz-rats-tuda] 1760 Fuchs, A., Birkholz, H., McDonald, I., and C. Bormann, 1761 "Time-Based Uni-Directional Attestation", Work in 1762 Progress, Internet-Draft, draft-birkholz-rats-tuda-03, 13 1763 July 2020, . 1766 [I-D.ietf-teep-architecture] 1767 Pei, M., Tschofenig, H., Thaler, D., and D.
Wheeler, 1768 "Trusted Execution Environment Provisioning (TEEP) 1769 Architecture", Work in Progress, Internet-Draft, draft- 1770 ietf-teep-architecture-12, 13 July 2020, 1771 . 1774 [OPCUA] OPC Foundation, "OPC Unified Architecture Specification, 1775 Part 2: Security Model, Release 1.03", OPC 10000-2 , 25 1776 November 2015, . 1780 [RFC4949] Shirey, R., "Internet Security Glossary, Version 2", 1781 FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007, 1782 . 1784 [RFC8322] Field, J., Banghart, S., and D. Waltermire, "Resource- 1785 Oriented Lightweight Information Exchange (ROLIE)", 1786 RFC 8322, DOI 10.17487/RFC8322, February 2018, 1787 . 1789 [TCGarch] Trusted Computing Group, "Trusted Platform Module Library 1790 - Part 1: Architecture", n.d., 1791 . 1794 [WebAuthN] W3C, "Web Authentication: An API for accessing Public Key 1795 Credentials", n.d., . 1797 Authors' Addresses 1799 Henk Birkholz 1800 Fraunhofer SIT 1801 Rheinstrasse 75 1802 64295 Darmstadt 1803 Germany 1805 Email: henk.birkholz@sit.fraunhofer.de 1807 Dave Thaler 1808 Microsoft 1809 United States of America 1811 Email: dthaler@microsoft.com 1813 Michael Richardson 1814 Sandelman Software Works 1815 Canada 1817 Email: mcr+ietf@sandelman.ca 1819 Ned Smith 1820 Intel Corporation 1821 United States of America 1823 Email: ned.smith@intel.com 1825 Wei Pan 1826 Huawei Technologies 1828 Email: william.panwei@huawei.com