Confidential computing is the protection of data in use by performing computation in a hardware-based Trusted Execution Environment (TEE). Confidential computing can provide integrity and confidentiality for users who want to run applications and process data in that environment. When confidential computing is used in scenarios where a network provisions user data and applications into the TEE, the TEEP architecture [I-D.ietf-teep-architecture] and protocol [I-D.ietf-teep-protocol] can be used. This document focuses on using TEEP to provision network user data and applications in confidential computing. It describes a use case and extension of TEEP and can provide guidance for cloud computing, [MEC], and other scenarios that use confidential computing in a network.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 19 October 2023.¶
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The Confidential Computing Consortium defines confidential computing as the protection of data in use by performing computation in a hardware-based Trusted Execution Environment [CCC-White-Paper]. In detail, a computing unit with confidential computing support can create an isolated, hardware-protected area in which data and applications are protected from unauthorized access or tampering. When the confidential computing environment is provisioned over a network, users need to attest the TEE inside the confidential computing device and then deploy their data and applications into it over the network. This network could be a cloud, MEC, or any other network that provides confidential computing resources to users. The TEEP WG has defined the standardization of an architecture and protocol for managing the lifecycle of trusted applications running inside a TEE. In confidential computing, the TEE can also be provisioned and managed by the TEEP architecture and protocol. This document illustrates how a network user uses the TEEP protocol to provision its private data and applications in a confidential computing device. The intended audience for this use case is network users and operators who are interested in using confidential computing in networks.¶
## Terms¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].¶
Figure 1 shows the architecture of confidential computing in a network. Two new components, the Network User and the Network M/OC, are introduced in this document. The connection between the Network User and the M/OC depends on the implementation of the specific network. The connection between the Network User and the UA (Untrusted Application) or TA (Trusted Application) depends on the implementation of the application. The connections among the TAM, TEEP Broker, and TEEP Agent follow the TEEP protocol. The interactions of all components in this scenario are described in the Usecase section.¶
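As a reading aid only, the following Python sketch lists the components of Figure 1 and the nature of each connection described above; the labels are informal annotations chosen here, not protocol elements.

```python
# Illustrative model of the Figure 1 components and their connections.
# This is a sketch for readers, not an implementation of the TEEP protocol.

COMPONENTS = [
    "Network User", "Network M/OC", "TAM",
    "TEEP Broker", "TEEP Agent", "UA", "TA",
]

# (endpoint A, endpoint B, how the connection is defined)
CONNECTIONS = [
    ("Network User", "Network M/OC", "implementation-specific (network)"),
    ("Network User", "UA",           "implementation-specific (application)"),
    ("Network User", "TA",           "implementation-specific (application)"),
    ("TAM",          "TEEP Broker",  "TEEP protocol"),
    ("TEEP Broker",  "TEEP Agent",   "TEEP protocol"),
]

if __name__ == "__main__":
    for a, b, how in CONNECTIONS:
        print(f"{a} <-> {b}: {how}")
```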
The basic process by which a Network User utilizes confidential computing is shown below. In confidential computing, the bundling of the UA, TA, and PD (Personalization Data) corresponds to cases 1, 2, 3, and 4 in Section 4.4 of the TEEP architecture. Cases 5 and 6 are new cases that are possible in implementations. At present, the main instance types of confidential computing in industry are the confidential process, the confidential container, and the confidential VM.¶
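For orientation, the Python sketch below enumerates the bundling options used in the rest of this document. The labels are informal names chosen here; cases 1 to 4 follow Section 4.4 of the TEEP architecture as described above, and cases 5 and 6 are the new no-UA cases.

```python
from enum import Enum

class PackageBundle(Enum):
    """Informal labels for how the UA, TA, and PD may be bundled."""
    UA_TA_PD_SINGLE      = 1  # UA, TA, and PD in one package
    UA_TA_SINGLE_PD_SEP  = 2  # UA and TA in one package; PD separate
    ALL_SEPARATE         = 3  # UA, TA, and PD as three separate packages
    UA_SEP_TA_PD_SINGLE  = 4  # UA separate; TA and PD in one package
    TA_PD_SINGLE_NO_UA   = 5  # TA and PD in one package; no UA (new case)
    TA_PD_SEPARATE_NO_UA = 6  # TA and PD as separate packages; no UA (new case)
```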
This use case refers to case 1 of the TEEP architecture. If the Network User provides this single package, the TEEP process is as follows:

1. The Network User requests confidential computing resources from the network M/OC.
2. The M/OC orchestrates a confidential computing device to undertake the request.
3. The TAM requests remote attestation from the TEEP Agent, and the TEEP Agent sends its evidence to the TAM. The TAM works as the Verifier in [RFC9334].
4. After verification, the Network User works as the Relying Party and receives the attestation result. If it is positive, the Network User establishes a secure channel [NIST-Special-Publication-800-133-V2] with the TEEP Agent and transfers the package to the TEEP Agent.
5. The TEEP Agent deploys the TA and Personalization Data in the TEE and then deploys the UA in the REE.

To guide Network Users in developing their applications and data, the mapping of UA, TA, and implementations is shown in Figure 2. This document gathers the main hardware architectures that support confidential computing, which include [TrustZone], [SGX], [SEV-SNP], [CCA], and [TDX]. In the figure, a brace denotes the operation steps to deploy packages, an arrow denotes deploying a package to a destination, and "att" denotes an attestation challenge to the target.¶
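Purely as an illustration of the ordering above, the following Python sketch models the case 1 flow. All object, function, and attribute names (allocate_cc_device, open_secure_channel, and so on) are hypothetical placeholders invented for readability; they are not TEEP protocol messages or APIs.

```python
# Sketch of the case 1 flow: UA, TA, and PD delivered as one package.
# All names are hypothetical placeholders, not TEEP protocol messages.

def provision_single_package(network_user, moc, tam, teep_agent):
    # Steps 1-2: the Network User asks the M/OC for confidential computing
    # resources, and the M/OC orchestrates a device to serve the request.
    device = moc.allocate_cc_device(network_user.resource_request())

    # Step 3: the TAM challenges the TEEP Agent; the Agent returns evidence
    # and the TAM acts as the Verifier (RFC 9334).
    evidence = teep_agent.collect_evidence(tam.challenge())
    result = tam.verify(evidence)

    # Step 4: the Network User, acting as Relying Party, checks the result.
    # On success it opens a secure channel to the TEEP Agent and sends the
    # single package containing UA, TA, and PD.
    if not network_user.accept(result):
        raise RuntimeError("attestation failed; provisioning aborted")
    channel = network_user.open_secure_channel(teep_agent)
    channel.send(network_user.package)  # UA + TA + PD bundle

    # Step 5: the TEEP Agent installs the TA and PD inside the TEE,
    # then installs the UA in the REE.
    teep_agent.deploy(["TA", "PD"], target="TEE")
    teep_agent.deploy(["UA"], target="REE")
    return device
```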
This use case refers to cases 2 and 3 of the TEEP architecture. The PD is a separate package, while the UA and TA may be separate packages or integrated into one package. If the Network User provides packages in this way, the TEEP process is as follows:

1. The Network User requests confidential computing resources from the network M/OC.
2. The M/OC orchestrates a confidential computing device to undertake the request.
3. The Network User transfers the UA and TA to the confidential computing device via the TAM. The TAM then deploys these two applications in the REE and the TEE, respectively. (In SGX, the UA must be deployed first; the UA then loads the TA into the SGX enclave.)
4. The TAM requests remote attestation from the TEEP Agent, and the TEEP Agent sends its evidence to the TAM. The TAM works as the Verifier in the RATS architecture.
5. After verification, the Network User works as the Relying Party and receives the attestation result. If it is positive, the Network User establishes a secure channel with the TA and deploys the Personalization Data to the TA.

The mapping of UA, TA, and implementations is shown in Figure 3.¶
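A similar hypothetical sketch for cases 2 and 3 is given below; the names are again placeholders. The point it illustrates is the different ordering: the UA and TA are delivered via the TAM before attestation, and the PD is sent afterwards over a secure channel directly to the TA.

```python
# Sketch of the cases 2/3 flow: PD is separate; UA and TA are delivered first.
# All names are hypothetical placeholders, not TEEP protocol messages.

def provision_pd_separate(network_user, moc, tam, teep_agent, ta):
    moc.allocate_cc_device(network_user.resource_request())  # steps 1-2

    # Step 3: the UA and TA reach the device via the TAM; the TAM deploys them
    # in the REE and TEE respectively (for SGX, the UA is deployed first and
    # then loads the TA into the enclave).
    tam.deploy(network_user.ua_package, target="REE")
    tam.deploy(network_user.ta_package, target="TEE")

    # Step 4: remote attestation, with the TAM acting as Verifier.
    result = tam.verify(teep_agent.collect_evidence(tam.challenge()))

    # Step 5: on a positive result, the PD goes over a secure channel
    # directly to the TA rather than to the TEEP Agent.
    if network_user.accept(result):
        network_user.open_secure_channel(ta).send(network_user.pd_package)
```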
In this case, the TEEP process is as follows:

1. The Network User requests confidential computing resources from the network M/OC.
2. The TAM in the M/OC orchestrates a confidential computing device to undertake the request.
3. The Network User deploys the UA in the REE.
4. The TAM requests remote attestation from the TEEP Agent, and the TEEP Agent sends its evidence to the TAM. The TAM works as the Verifier in the RATS architecture.
5. After verification, the Network User works as the Relying Party and receives the attestation result. If it is positive, the Network User establishes a secure channel with the TEEP Agent and transfers the TA and PD package to the TEEP Agent.
6. The TEEP Agent deploys the TA and PD.¶
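The hypothetical sketch below illustrates this ordering: the Network User deploys the UA itself, and the bundled TA and PD reach the TEEP Agent only after a positive attestation result. Names remain placeholders.

```python
# Sketch of the flow in which the UA is separate and TA + PD form one package.
# All names are hypothetical placeholders, not TEEP protocol messages.

def provision_ta_pd_bundle(network_user, moc, tam, teep_agent):
    moc.allocate_cc_device(network_user.resource_request())  # steps 1-2
    network_user.deploy_ua(target="REE")                     # step 3

    # Step 4: remote attestation, with the TAM acting as Verifier.
    result = tam.verify(teep_agent.collect_evidence(tam.challenge()))

    # Steps 5-6: the TA + PD package goes to the TEEP Agent over a secure
    # channel, and the Agent deploys both inside the TEE.
    if network_user.accept(result):
        network_user.open_secure_channel(teep_agent).send(network_user.ta_pd_package)
        teep_agent.deploy(["TA", "PD"], target="TEE")
```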
In this case, the Network User provides the TA and PD as one package, with no UA attached. The TEEP process in this case is as follows:

1. The Network User requests confidential computing resources from the network M/OC.
2. The TAM in the M/OC orchestrates a confidential computing device to undertake the request.
3. The TAM requests remote attestation from the TEEP Agent, and the TEEP Agent sends its evidence to the TAM. The TAM works as the Verifier in the RATS architecture.
4. After verification, the Network User works as the Relying Party and receives the attestation result. If it is positive, the Network User establishes a secure channel with the TEEP Agent and transfers the TA and PD to the TEEP Agent.
5. The TEEP Agent deploys the TA and PD.¶
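This flow differs from the previous one only in that no UA exists at all; the short hypothetical sketch below reflects that, with the same placeholder names as before.

```python
# Sketch of the "TA and PD in one package, no UA" flow; identical to the
# previous sketch except that no UA is deployed. Names remain hypothetical.

def provision_ta_pd_bundle_no_ua(network_user, moc, tam, teep_agent):
    moc.allocate_cc_device(network_user.resource_request())            # steps 1-2
    result = tam.verify(teep_agent.collect_evidence(tam.challenge()))  # step 3
    if network_user.accept(result):                                    # step 4
        network_user.open_secure_channel(teep_agent).send(network_user.ta_pd_package)
        teep_agent.deploy(["TA", "PD"], target="TEE")                  # step 5
```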
## TA and PD are separate packages, no UA

In this case, the Network User provides the TA and PD as separate packages, with no UA attached. The TEEP process in this case is as follows:

1. The Network User requests confidential computing resources from the network M/OC.
2. The TAM in the M/OC orchestrates a confidential computing device to undertake the request.
3. The Network User transfers the TA to the TAM, and the TAM then transfers the TA to the TEEP Agent.
4. The TAM requests remote attestation from the TEEP Agent, and the TEEP Agent sends its evidence to the TAM. The TAM works as the Verifier in the RATS architecture.
5. After verification, the Network User works as the Relying Party and receives the attestation result. If it is positive, the Network User establishes a secure channel with the TA and transfers the PD to it.¶
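The hypothetical sketch below highlights the two deliveries in this flow: the TA travels from the Network User through the TAM to the TEEP Agent before attestation, while the PD goes directly to the TA over a secure channel afterwards.

```python
# Sketch of the "TA and PD as separate packages, no UA" flow.
# Names remain hypothetical placeholders, not TEEP protocol messages.

def provision_separate_ta_then_pd(network_user, moc, tam, teep_agent, ta):
    moc.allocate_cc_device(network_user.resource_request())  # steps 1-2

    # Step 3: the TA package travels Network User -> TAM -> TEEP Agent.
    tam.forward_to_agent(network_user.ta_package, teep_agent)

    # Step 4: remote attestation, with the TAM acting as Verifier.
    result = tam.verify(teep_agent.collect_evidence(tam.challenge()))

    # Step 5: on a positive result, the PD goes over a secure channel
    # directly to the already-installed TA.
    if network_user.accept(result):
        network_user.open_secure_channel(ta).send(network_user.pd_package)
```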
This document does not require actions by IANA.¶
Besides the security considerations in the TEEP architecture, this document raises no additional security or privacy issues.¶
The original design of TEEP includes only the TEEP Agent and TAs inside the TEE. In confidential computing implementations, other submodules may also be present in the TEE. In TEEP, these submodules can be considered part of the TEEP Agent. In SGX-based confidential computing, a submodule can provide a convenient environment or API so that a TA does not have to modify its source code to fit the SGX instruction model. Submodules such as Gramine and Occlum are examples that could be included in the TEEP Agent. If there is no such submodule in the TEEP Agent, the TA and UA need to be customized applications that fit the SGX architecture. In SEV and other architectures that support a whole guest VM as a TEE, the TEEP Agent does not need an extra submodule to work as middleware or an API layer. However, with submodules such as Enarx, which works as a runtime JIT compiler, a TA can be deployed in a hardware-independent way; in this scenario, the TA can be deployed on different hardware architectures without recompiling.¶
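The following Python sketch is a loose illustration of this discussion: whether the TEEP Agent needs a runtime submodule depends on the TEE type and on how the TA was built. The TEE and runtime labels are the examples named in this section; the selection logic itself is an assumption made for illustration, not normative guidance.

```python
from typing import Optional

# Informal illustration of the submodule discussion above.
# The labels come from this section; the selection logic is illustrative only.

def pick_runtime_submodule(tee_type: str, ta_is_sgx_native: bool,
                           want_hardware_independence: bool) -> Optional[str]:
    """Return an example runtime submodule for the TEEP Agent, or None."""
    if want_hardware_independence:
        # A runtime JIT approach such as Enarx lets the same TA run on
        # different hardware architectures without recompiling.
        return "Enarx-style runtime"
    if tee_type == "SGX":
        # Process-based TEE: a library-OS submodule (e.g. Gramine or Occlum)
        # lets an unmodified TA run; otherwise the TA/UA must be written
        # specifically for SGX.
        return None if ta_is_sgx_native else "Gramine/Occlum-style library OS"
    if tee_type in ("SEV-SNP", "TDX", "CCA"):
        # VM-based TEE: the whole guest VM is the TEE, so no extra middleware
        # submodule is strictly required.
        return None
    return None

if __name__ == "__main__":
    print(pick_runtime_submodule("SGX", ta_is_sgx_native=False,
                                 want_hardware_independence=False))
```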