Network Working Group                                          J. Arkko
Internet-Draft                                               J. Novotny
Intended status: Informational                                 Ericsson
Expires: 3 January 2022                                     2 July 2021

   Privacy Improvements for DNS Resolution with Confidential Computing
                    draft-arkko-dns-confidential-02

Abstract

   Data leaks are a serious privacy problem for Internet users.  Data in
   flight and at rest can be protected with traditional communications
   security and data encryption.  Protecting data in use is more
   difficult.  In addition, failure to protect data in use can lead to
   disclosing session or encryption keys needed for protecting data in
   flight or at rest.

   This document discusses the use of Confidential Computing to reduce
   the risk of leaks from data in use.  Our example use case is in the
   context of DNS resolution services.  The document looks at the
   operational implications of running services in a way that even the
   owner of the service or compute platform cannot access user-specific
   information produced by the resolution process.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 3 January 2022.
Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Background
   3.  Terminology
   4.  Prerequisites
   5.  Confidential Computing
   6.  Using Confidential Computing for DNS Resolution
   7.  Operational Considerations
     7.1.  Operations
     7.2.  Debugging
     7.3.  Dependencies
     7.4.  Additional services
     7.5.  Performance
   8.  Security Considerations
     8.1.  Observations from outside the TEE
     8.2.  Trust Relationships
     8.3.  Denial-of-Service Attacks
     8.4.  Other vulnerabilities
   9.  Recommendations
   10. Acknowledgments
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Introduction

   DNS privacy has been a popular topic in the last few years, and
   continues to be.  The first privacy issue is that domain name
   meta-data is visible on the wire, even when the actual communications
   are encrypted.  This is being addressed with better technology.

   But even if the meta-data is hidden inside communications, the DNS
   resolvers still have the potential to see users' entire browsing
   history.  This is particularly problematic, given that commonly used
   large public or operator resolver services are an obviously
   attractive target for both attacks and commercial or other use of the
   information visible to them.

   A lot of work is ongoing in the industry and the IETF to address some
   of these issues:

   *  Work on encrypted DNS query protocols to hide the meta-data
      related to domain names.

   *  Discovery mechanisms.  These may enable a bigger fraction of DNS
      query traffic to move to encrypted protocols, and may also help
      distribute queries to different parties to avoid concentrating all
      information in one place.

   *  Practices, expectations, and contracts (e.g., [RFC8932],
      Mozilla's trusted recursive resolver requirements [MozTRR]).

   *  Improvements outside DNS (e.g., encrypted Server Name Indication
      (eSNI) [I-D.ietf-tls-esni]).

   *  General technology developments (e.g., confidential computing,
      attestations, remote attestation work in the IETF RATS WG, and so
      on).

   The goal of this document is to build on all that work - and assume
   all communications are or become encrypted, including the DNS
   traffic.  Our question is what problems remain?
Is there a next step?

   Our worry is that resolvers can be a major remaining source of leaks,
   e.g., through accidents, attacks, commercial use, or requests from
   the authorities.  We need to protect users' data in flight, at rest,
   and in use - we wanted to experiment with technology that could
   reduce leaks in the last two cases.  Confidential Computing is one
   such potential technology, but it is important to talk about it and
   get broader feedback.  The use of this technology does have some
   operational impacts.

   Our primary conclusion is that data held by servers should receive at
   least as much security attention as communications do.  The authors
   feel that this is particularly crucial for DNS, due to the potential
   leak of users' browsing histories, but the principles apply also to
   other services.

   As a result, all applicable tools should be considered, including the
   confidential computing discussed in this document.  However, the
   operational and business implications of such tools should also be
   considered.  Feedback to us is very welcome.  Are these approaches
   feasible or infeasible?  What aspects need to be taken into account
   to successfully apply them?

2.  Background

   Communications security has been at the center of many security
   improvements in the Internet.  The goal has been to ensure that
   communications are protected against outside observers and attackers
   [RFC3552] [RFC7258].  Communications security is, however, not
   sufficient by itself, and continuing success in better protection of
   communications is highlighting the need to address other issues.

   In particular, more attention needs to be paid to protecting data not
   just in flight but also at rest or in use.  User data leaks can occur
   from servers and other systems, through accidents, attacks,
   commercial use of data, and requests for information by authorities.
Both data at rest and data in use need to be protected.  Being able to
   protect data in use also helps protect the keys used for protecting
   data in flight and at rest.

   Data leaks are very common, and include highly publicized ones or
   ones with significant consequences, such as [Cambridge].  Data leaks
   are also not limited to traditional computer applications, but can
   also impact anything from private health data [Vastaamo] to
   children's toys [Toys] or smart TVs [SmartTV].

   The general issue and possible solutions have been discussed
   extensively elsewhere, e.g., [Digging], [Mem], [Comparison],
   [Innovative], [AMD], [Efficient], [CCC-Deepdive], [CC], and so on.
   The Internet-relevant angle has also been discussed in a few
   documents, e.g., [I-D.lazanski-smart-users-internet],
   [I-D.iab-dedr-report], [I-D.arkko-farrell-arch-model-t-redux], and so
   on.  The topic is also related to best practices for protocol and
   network architecture design, and what information can be provided to
   what participants in a system; see, e.g., [RFC8558],
   [I-D.thomson-tmi], [I-D.arkko-arch-infrastructure-centralisation].

   Data leaks can occur in user-visible services that the user has
   chosen to use and agreed to provide information to (at least in
   theory [Unread]).  But leaks can also occur in other types of
   services that are part of the infrastructure, such as DNS resolution
   services or parts of the communication infrastructure.

   This document looks at the possibility of using a specific technical
   solution, Confidential Computing [CCC-Deepdive], to reduce the risk
   of leaks from data in use.  We consider the operational implications
   of running services in a way that even the owner of the service or
   compute platform cannot access user-specific information that is
   produced as a side-effect of the service.
We explore the use of Confidential Computing in the context of DNS
   resolution services [RFC1035].  This is a nice and relatively simple
   example, but there are of course other potential applications as
   well.

   DNS resolution services are of course also an important case where
   privacy matters a lot for the users.  Threats against the resolution
   process could prevent the user from accessing services.  Data leaks
   from the process have the potential to expose the user's entire
   browsing history.

   The use of Confidential Computing in the DNS context has also been
   discussed in other documents, e.g., [PDoT] and
   [I-D.reddy-add-server-policy-selection].

   The DNS privacy issues have also been discussed in multiple
   documents, such as [RFC7626], [RFC8324], and so on.

3.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in BCP
   14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

4.  Prerequisites

   The primary sources of leaks are as follows:

   *  Communications interception.  This threat can be addressed by
      encrypted communications, such as the use of DNS-over-TLS (DoT)
      [RFC7858], DNS-over-HTTPS (DoH) [RFC8484], or DNS-over-QUIC (DoQ)
      [I-D.ietf-dprive-dnsoquic] instead of traditional DNS protocols.

   *  Data leakage from the server or service, either from data at rest
      or in use.  This can be addressed by encrypting the data while at
      rest and employing the techniques discussed in this document for
      data in use.

   The specific information that is privacy sensitive depends on the
   application.
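   The encrypted transports in the first bullet above all carry ordinary
   DNS messages; DoT [RFC7858], for instance, simply sends each DNS
   message over a TLS session, preceded by a two-octet length field in
   network byte order.  The following is a minimal sketch of that
   framing; the helper names are ours and illustrative only, and the
   TLS session itself (to port 853) is omitted.

```python
import struct

def encode_qname(name):
    # Encode "example.com" as length-prefixed labels ending in a zero octet.
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def build_query(name, qtype=1, txid=0x1234):
    # 12-octet DNS header: id, flags (RD bit set), 1 question,
    # 0 answer/authority/additional records.
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    question = encode_qname(name) + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def frame_for_dot(msg):
    # RFC 7858: each DNS message sent over the TLS connection is
    # preceded by a two-octet length field in network byte order.
    return struct.pack("!H", len(msg)) + msg
```

   The framed bytes would then be written to the TLS connection; the
   same framing applies to responses read back from it.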
In a DNS resolution application, it is clear that users' browsing
   histories, i.e., which users asked for what names, are privacy
   sensitive, and protecting that information is the primary focus of
   this document.  In contrast, the domains themselves or the associated
   address information is in the general case public and not privacy
   sensitive.  However, in some cases even this information may be
   sensitive, such as in the case of internal domains of a corporate
   network.  Information not related to individuals may also be
   sensitive in some cases, e.g., the collective browsing destinations
   of an entire organization.

   The above was also observed in [RFC7626], which stated the following:

   "DNS data and the results of a DNS query are public [...], and may
   not have any confidentiality requirements.  However, the same is not
   true of a single transaction or a sequence of transactions; that
   transaction is not / should not be public."

   Nevertheless, it should be noted that technology can help only
   insofar as there is commercial willingness to provide the best
   possible service and to protect the users' information.

   Similarly, the techniques discussed in this document are not the sole
   or full answer to all problems.  There are a lot of technical,
   operational, and governance issues that also matter, and practices
   that help.  A good compilation of some best practices can be found in
   [RFC8932], particularly Section 5.2, which discusses data at rest.

5.  Confidential Computing

   Confidential Computing is about protecting data in use by performing
   computation in a hardware-enforced Trusted Execution Environment
   (TEE) [CCC-Deepdive].  It addresses the need to protect data in use,
   which traditionally has been hard to achieve.  It may also help
   improve the encryption of data in flight and at rest, by helping
   protect session keys and other security information used in that
   process.
For our purposes, we focus on Trusted Execution Environments that use
   computer hardware to provide the following characteristics:

   *  Attestability: The environment can provide verifiable evidence to
      others (such as clients using services running on it) about the
      environment, its characteristics, and the software it runs.

   *  Code integrity: Unauthorized entities cannot modify software being
      run within the environment.

   *  Data confidentiality and integrity: Unauthorized entities cannot
      view or modify data while it is in use within the TEE.

   These characteristics have been paraphrased from [CCC-Deepdive].  See
   also [I-D.ietf-rats-architecture] for details of attestation.  There
   are additional characteristics that matter in some situations, but
   for our purposes the above ones are central.

   Specific technologies to perform Confidential Computing or run TEEs
   are becoming common in CPUs, operating systems, and other supporting
   software.  For instance, Intel's Software Guard Extensions (SGX)
   [SGX] are one CPU manufacturer's approach to this technology.  SGX
   allows application developers to run software in a secure enclave
   protected by the CPU, including for instance encrypting all memory
   accesses outside the CPU and being able to provide remote attestation
   to outsiders about which software image is being run.  These secure
   enclaves are the SGX approach to providing a TEE.

   Confidential Computing is also becoming available on commonly
   available cloud computing services.  When a user employs these
   services, they have the ability to run software and process data that
   even the owner of the cloud system does not have access to.
Interestingly, that is quite a contrast to the worries expressed some
   years ago about Trusted Computing technology, when it was feared that
   it enabled running software in users' computers that could act
   against the interests of the user in some cases, such as when
   protecting media files [Stallman].  While those concerns may apply
   even today in some cases, it is clear that when the user can get
   secure information about services running somewhere in the network,
   this is an advantage for the users.

   Note that availability might be another desirable characteristic for
   Confidential Computing systems, but it is one that is not in any
   special way supported by current technology.  Ultimately, the owner
   of the computer still has the ability to choose when to switch the
   computer off, for instance.  There is also no particular hardware
   technology at this time to deal with Denial-of-Service attacks.  Some
   of the software techniques related to dealing with Denial-of-Service
   attacks are discussed in the Security Considerations section.

6.  Using Confidential Computing for DNS Resolution

   Confidential Computing can be used to provide a privacy-friendly
   resolution service in a server.

   The basic arrangement is two-fold:

   *  The user's computer and the DNS resolution server communicate
      using an encrypted and integrity-protected transport protocol,
      such as DoT or DoH [RFC7858] [RFC8484].

   *  The secure connection terminates inside a TEE running in the DNS
      resolution server.  This TEE performs all the necessary processing
      to respond to the user's query.  The TEE will not provide any
      user-specific information outside of the TEE, such as logs of what
      names specific clients queried for.

      The TEE may need to contact other servers, locally or in the
      Internet, to resolve a query that has no recently cached answer.
We will discuss later how this can be done securely: it is necessary
      to prevent the linking of external actions, such as receiving a
      client request and observing a query going out to other DNS
      servers in the Internet.

   The arrangement is shown in Figure 1.

   +------------------+             +----------------+
   |      User's      |             |     Server     |
   |     Computer     |             |    Computer    |
   |                  |             |                |
   |                  |             |  +----------+  |
   |                  |             |  |  A TEE,  |  |
   |  +------------+  |             |  | running  |  |   other DNS
   |  | DNS Client |--|-------------|--|  a DNS   |--|------ servers
   |  +------------+  |             |  | resolver |  |   (if needed)
   |                  |             |  +----------+  |
   |                  |             |                |
   +------------------+             +----------------+

         Figure 1: Confidential Computing for DNS Resolution

   In this application, we strive to have no data at rest at all, at
   least nothing that relates directly to users.  Data in flight and
   data in use are both protected by encryption.  As a result of running
   the resolution service in this manner, any user-specific information
   should remain within the TEE, and not be exposed to outsiders or even
   the owner of the service or the compute platform where the service is
   running.

   The authors believe that this is a desirable property.  However, it
   remains necessary to assure users and clients that the service is
   actually run in this manner.  This can be done in two ways:

   *  Through off-line reliance on a particular service, i.e., a human
      decision to use a particular system.  Once there is a decision to
      use a particular system, cryptographic means such as public keys
      may be used to ensure that the client is indeed connected to the
      expected server.  However, there is no guarantee that the human-
      space statements about the practices used in running the server
      are valid.

   *  A cryptographic check that the service is actually running inside
      a valid TEE and that it runs the expected software.  Such checks
      need to rely on third parties.
The attestation verification is performed by a verifier, which can be
      either the user's computer or a designated verifier, as discussed
      in [I-D.ietf-rats-architecture] and
      [I-D.voit-rats-attestation-results].  The verifier checks (a) that
      the cryptographic attestation refers to a server machine that is
      acceptable to the user (e.g., manufactured by a manufacturer it
      trusts, CPU features considered secure are used, features
      considered insecure are turned off, etc.) and (b) that the
      software image designated as being run in the attestation is a
      software image that the relying party (end user) is willing to use
      (e.g., has a hash that matches known software that does not log
      user actions, or is vouched as trustworthy by another party that
      the relying party trusts).

7.  Operational Considerations

   This section discusses some aspects of the Confidential Computing
   arrangement for DNS, based on the authors' experience with these
   systems.

7.1.  Operations

   Given that the service executes confidentially, and is not observable
   even by the owner of the hardware, the operations model becomes
   different.  Different models may be applied:

   *  The service executes on a hardware platform (such as a commercial
      cloud service) that has no access to information, but there is
      some other management entity that does have access.  The control
      functions of this entity can communicate with the service
      instances running in TEEs, and have access to the internal state
      and statistics of the service instances.

   *  Truly confidential operations, where the service and hardware
      owners have decided to deploy a service that really does not
      expose private user information to anyone, including themselves.

   It is not clear how the first model differs from currently deployed
   service models.
It merely makes it possible to run a service without exposing
   information to, say, the cloud provider, but any data collection
   about user behaviours would still be possible for the service owner.

   As a result, this document focuses mostly on the second model.  For
   some functions, such as DNS resolution, it is possible to hide all
   user-related information, and our document argues that we should do
   so.

   Of course, the owners of a service do need some information to run
   the service, from an efficiency, scaling, problem tracking, and
   security monitoring point of view.  The service operator may even
   benefit from seeing some overall trend information about various
   queries and traffic.  This does not have to mean exposing individual
   user behaviours, however.

   The authors have worked with aggregate statistics to be able to
   provide load, performance, memory usage, cache statistics, error, and
   other information out of the confidential processes.  This helps the
   operator understand the health and status of various service
   instances.  Even with aggregate statistics, there is some danger of
   revealing private information.  For instance, even a sum of counters
   across all clients can reveal counters associated with an individual
   user, if the aggregate counters can be sampled at any time with
   arbitrary precision.  In particular, the actions of a single client
   can be determined by sampling the statistics before and after that
   client sent a message.

   A simplistic approach to producing safer statistics in such cases is
   to truncate and/or obfuscate the least significant bits of the
   statistics.  It is often necessary to tailor such truncation to the
   types of measurements, e.g., the number of requests is typically a
   very large number while the number of specific errors is usually
   small.  Truncation could of course be done dynamically.
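   The truncation idea above can be sketched as follows.  The function
   names and the small-counter threshold are illustrative choices of
   ours, not part of any specified mechanism; a real deployment would
   tailor them per measurement type as discussed above.

```python
def truncated(counter, bits=4):
    # Zero out the least significant bits, so that a small change
    # (e.g., one client's activity between two samples) is not
    # directly observable in the exported value.
    return counter & ~((1 << bits) - 1)

def safe_stats(raw_counters, small_threshold=16):
    # Illustrative policy: large counters (e.g., request totals) lose
    # their low-order bits; small counters (e.g., rare errors) are only
    # reported as "zero" or "some, below the threshold".
    out = {}
    for name, value in raw_counters.items():
        if value < small_threshold:
            out[name] = 0 if value == 0 else small_threshold
        else:
            out[name] = truncated(value)
    return out
```

   With this sketch, sampling the exported statistics immediately before
   and after a single client's message usually yields identical values,
   blunting the attack described above.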
More generally, the set of information provided to the operator about
   the confidential process could be viewed in light of differential
   privacy.

   Another complementary approach is to provide statistics only at set
   intervals, or after a sufficient amount of new traffic has been
   received.

   Another complementary technique to monitor the health of confidential
   services is the use of probes to ensure that the services function
   correctly.  Probes can also measure the performance of the services.

   The case of excessive service conditions due to Denial-of-Service
   attacks is discussed further under the Security Considerations
   section.

7.2.  Debugging

   Various error conditions and software issues may occur, as is usual
   with any service.  There is a need to monitor problems that occur
   inside the service or at the client.  This can be done, for instance,
   with the help of the various statistics discussed earlier.

   Some of the monitored conditions should include:

   *  All major (or preferably even minor) error conditions should have
      an associated counter.  This is necessary as no traditional
      logging can reasonably be provided that would otherwise have
      entries for, say, "client IP 203.0.113.0 sent a malformed
      request".  While some errors can be expected at any time, a major
      increase in specific issues can indicate a problem.  As a result,
      the counters need to be monitored and issues investigated as
      needed.

   *  Client connection failures, which might indicate software version,
      trust root, or other configuration problems.

   Of course, for dedicated software testing purposes (such as debugging
   interoperability problems), even confidential services need to be run
   in a mode that exposes everything.  Actual clients and users MUST be
   able to ensure that they are connected to a production service
   instance.
This can be done by providing debugging status as part of the remote
   attestation, so that clients can verify it is off.  Alternatively,
   testing versions of the service are simply not listed as trusted
   software versions.

7.3.  Dependencies

   The use of Confidential Computing introduces three additional
   dependencies to the system:

   There is a need to be able to verify that the CPU executing the
   service is a legitimate CPU with the right hardware, and that the
   software being run for the service is acceptable.  While this
   information can be hard-coded in the service clients, in practice
   there is often a need to rely on other parties for scalability.  As a
   result, there are two dependencies: one for legitimate CPU
   verification and one for checking acceptable software versions.
   These are services that need to be run, and/or their use needs to be
   agreed and possibly contracted for.  The CPU manufacturer often plays
   a role in the CPU verification.

   The third dependency is on the client.  Depending on specific
   protocol arrangements, Confidential Computing services can often
   serve unmodified clients, but for the full benefits and for
   validating attestations or software images, client changes are
   necessary.  The necessary communications may happen as part of TLS
   negotiations or other general-purpose protocols
   [I-D.mandyam-tokbind-attest] [I-D.ietf-rats-eat].

7.4.  Additional services

   Many services employ information that can be used to perform
   additional services beyond the basic task.  For instance, knowledge
   about what the user requests or who the user is can be used for
   various optimizations or additional information that can be delivered
   to the user.  Or the user can provide some additional information
   that is taken into account by the service.

   One concern with these types of additional services is that the
   information used by them can be privacy sensitive.
But Confidential Computing can assist in this as well: as long as the
   relevant information stays only within the TEE, it is better
   protected than it would be by, e.g., providing that extra information
   to a regular service on the Internet.

   Conversely, care needs to be taken whenever the service needs to
   relay some information outside the TEE.  Some specific situations
   where this is needed with DNS are discussed in Section 7.1.

   One example of additional services is that aggregate, privacy-
   sensitive data may be produced about trends in a confidentially run
   service, if it is not possible to separate individual users from that
   data.  For instance, it would be difficult to sell information about
   individual users to help with targeted advertising, but the overall
   popularity of some websites could be measured.

7.5.  Performance

   Confidential Computing technology may impact performance.  Nakatsuka
   et al. [PDoT] report on DNS resolution within a TEE where their
   solution could outperform the open source Unbound DNS server in
   certain scenarios, especially in situations where there are not a lot
   of DNS client connections.  We concur with their suggestion that, at
   the current stage of Confidential Computing technology,
   implementations may be more suited for local DNS resolution services
   than for global-scale deployments, where the performance hit would be
   much more significant.  Nonetheless, with Confidential Computing
   technology ever evolving, we believe that solutions with low
   performance overhead will be possible in the foreseeable future.

   Other things being equal, there is likely some performance hit, as
   current Confidential Computing technology typically involves
   separating a server into two parts, the trusted and untrusted parts.
   In practice, all communications need to go through both, and the
   communication between the two parts consumes some cycles.
There are also current limitations on the amount of memory or the
   number of threads supported by these technologies.  However, newer
   virtualization-based confidential computing TEE approaches are likely
   going to improve these aspects.

   Another performance hit comes from the overhead related to running
   the attestation process, and passing the necessary extra information
   in the communications protocols with the clients.  In general, this
   works best when the cost of the setup is amortized over a long-lived
   session.  Such sessions may exist between DoT/DoH-enabled clients and
   resolvers.  Also, there are many possible arrangements and possible
   parties involved in attestation; see [I-D.ietf-rats-architecture].

8.  Security Considerations

   Security issues in this arrangement are discussed below.

8.1.  Observations from outside the TEE

   While a TEE is considered to be secure and not observable, there may
   be signs outside the TEE that can reveal information.

   For instance, a server may receive a request from a client and
   immediately send out a question to a server in the Internet about a
   particular domain name.  Observers - such as the owner of the server
   computer or the cloud farm - may be able to link incoming user
   queries to outgoing questions.

   Caching, randomly generated additional traffic, and timing
   obfuscation can deter such attacks, at least to an extent.

8.2.  Trust Relationships

   For scaling reasons, the arrangement typically depends on the ability
   to have trusted parties (a) for attesting the validity of a
   particular CPU being manufactured by a CPU manufacturer, and (b) for
   determining whether a particular software image hash is acceptable
   for the task it is advertising to do.

   Such trusted parties need to be configured, which presents an
   additional operational burden.
The information can of course be provided as part of a device manufacturer's or application's initial configuration, or be provided independently, similar to how, for instance, certificate authorities are run.

It is important to recognize that the mere use of the technology is not sufficient to make the system secure. With communications, establishing a secure, encrypted channel is of no use if, due to an untrustworthy certificate authority, the channel is not with the intended party. With Confidential Computing, the same applies: one has to have someone who can assert that a CPU is capable of performing the confidential computing task and that the indicated software is suitable for the task the user expects it to perform. That being said, when such trusted parties can be found, the service performed by the server can become much more privacy friendly.

8.3. Denial-of-Service Attacks

To paraphrase an old philosophical question, "If an evil packet is sent behind the veil of encryption and no one is around to lift it, did an attack happen?" [Chautauquan]

Denial-of-Service attacks are a more serious form of the problems with operating services that the operator (intentionally) does not fully see. There need to be means to deal with these attacks.

Attacks that can be identified by particularly high traffic flows from externally observable sources (e.g., a source IP address) can of course still be dealt with in similar ways as in more open server designs.

But this is often not enough, and so some additional support is needed in the systems, both for detecting attacks and for reacting to them.

One detection technique is to use the aggregate/truncated statistics to analyze anomalous behaviour. Another technique is to have the confidential part of the service produce extra information about events that cross a threshold.
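One possible shape for the second technique, sketched in Python with hypothetical names: the confidential part counts events internally, and only when a count crosses a configured threshold does any detail leave the TEE.

```python
from collections import Counter

class ThresholdReporter:
    """Counts events inside the confidential part; emits a report
    outside the TEE only when an event type reaches a threshold.
    Below the threshold, nothing about individual requests leaves."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.counts = Counter()

    def record(self, event_type: str):
        self.counts[event_type] += 1
        if self.counts[event_type] == self.threshold:
            # The single piece of information exposed: an event type
            # became frequent enough to warrant operator attention.
            return (event_type, self.threshold)
        return None  # stays confidential

reporter = ThresholdReporter(threshold=1000)
report = None
for _ in range(1000):
    report = reporter.record("SERVFAIL")
# Only the 1000th occurrence produced a report; the earlier calls
# returned None and revealed nothing.
```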
For instance, a particular error may occur exceptionally frequently, say among millions of requests, and this could warrant exposing either something about the request (e.g., the associated domain name) or something about the client (e.g., connection type, protocol details, or sender address).

The operator of the services needs to be able to react to possible attacks as well. One technique is to be able to provide instructions to the confidential part of the service to refuse service for specific requests (e.g., specific domain names) or for specific clients (e.g., those coming from specific addresses). Alternatively, the service can also dynamically react to issues, e.g., by starting to reduce the amount of resources dedicated to some classes of requests that for some reason are starting to require exceptionally high amounts of resources. These techniques do not endanger user privacy, but may of course impact the provided service.

8.4. Other vulnerabilities

Like all security mechanisms, this solution is not a panacea. It relies on the correct operation of a number of technologies and entities. For instance, CPU bugs or side-channel vulnerabilities can make information leaks possible. While Confidential Computing offers a layer of protection against attacks even from the owner of the computer hardware or the operating system, it is believed that this protection does not extend to sophisticated physical attacks, such as being able to study chips with an electron microscope.

And as discussed above, it is also critical to check what software is being run, as otherwise any possible benefit would be negated by the possibly negligent or nefarious actions of the unchecked software.

The mechanism does offer an additional layer of defense, however.
It allows some of the trust that we place in our cloud platform owners, CPUs, and software applications to be verified and controlled with technical means. It may have some remaining vulnerabilities, but we obviously already depend on, for instance, the correct operation of our computing platforms. As such, Confidential Computing works to reduce some of the vulnerabilities in this area.

It should also be a desirable feature for users. A service that offers Confidential Computing-based protection of user data and can show that its software does not leak user-specific information is likely to be more attractive to users than one that provides no such assurances. Of course, overall user choice depends on many factors beyond privacy, such as cost, ease of use, switching costs, and so on.

There is also a danger of attacks or pressure from intelligence agencies that could result in, e.g., the use of unpublicized vulnerabilities in an attempt to thwart the protections in Confidential Computing. This could be used to perform pervasive monitoring, for instance [RFC7258]. Even so, it is always beneficial to increase the costs and difficulty for attackers. Requiring parties who perform pervasive monitoring to employ complex technical attacks rather than being able to request logs from a service provider significantly increases the difficulty and risk associated with such monitoring.

9. Recommendations

Data held by servers SHOULD receive at least as much security attention as communications do.

The authors would like to draw attention to the problem of data leaks, particularly for data in use, and RECOMMEND the application of all available tools to prevent inappropriate access to users' information.

This is particularly crucial for DNS resolution services, which have the potential to learn users' browsing histories.
But the principles apply also to other services.

While using Confidential Computing without other modifications to the service in question is possible, real benefits can only be realized when the actual service is built for the purpose of avoiding data leaks or user data capture. Systems may need to be tuned or modified; for instance, they MUST NOT produce logs that would negate the purpose of running them inside a TEE to begin with. Mechanisms SHOULD be found to enable debugging and the detection of fault situations and attacks, again without exposing private information relating to individual users.

Some computing services can proceed on their own and require no interaction with the rest of the world. These are easier to secure. Even then, care SHOULD be taken to avoid request-response timing providing information useful for side-channel attacks. If this is done, the owner of the server hardware cannot determine much about what is going on.

However, other services may require interaction with other systems, as is the case with a DNS resolver needing to resolve a particular name that is not in a cache or whose cache entry has expired. This is because the resolution service is not a self-contained computation task but ultimately needs, at least in some cases, interaction with the rest of the world.

Consequently, the resolver needs to collaborate with other network nodes that are not even in the same administrative domain and cannot be guaranteed to subscribe to the same principles of protecting users' information. In this case, even if communications to other entities are encrypted, the potentially untrusted party at the other end of the communications may leak information.

In such communications, care SHOULD be taken to avoid exposing any information that would identify users, or allow fingerprinting the capabilities of those users' systems.
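As an illustrative sketch (a hypothetical helper, not a complete DNS implementation), the outbound question built inside the TEE can carry only the fields strictly needed for resolution, dropping anything that identifies or fingerprints the client:

```python
def build_upstream_query(client_query: dict) -> dict:
    """Build the question sent to an external server, keeping only
    what resolution requires. The client's address, connection
    metadata, and any EDNS Client Subnet information are known only
    inside the TEE and are deliberately left out, so the potentially
    untrusted peer learns nothing about the individual user."""
    return {
        "qname": client_query["qname"],
        "qtype": client_query["qtype"],
    }

incoming = {
    "qname": "www.example.test",
    "qtype": "AAAA",
    "client_addr": "2001:db8::7",  # stays inside the TEE
    "ecs": "2001:db8::/56",        # would fingerprint the user
}
outgoing = build_upstream_query(incoming)
# outgoing carries only the name and type being resolved.
```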
Similarly, care SHOULD be taken to avoid exposing any timing information that would allow the owner of the server hardware to determine what is going on, e.g., which users are asking for which names. Even so, vulnerabilities may appear if the attacker can force the system to behave in a particular way, e.g., by forcing cache overflow, overloading it with traffic it knows about, and so on.

The situation is slightly different when the interaction is with other systems that form a part of the same administrative domain. In particular, if those other systems employ a similar Confidential Computing setup, and an encrypted channel is used, then some additional security can be provided compared to communicating with other entities in the Internet.

10. Acknowledgments

The authors would like to thank Juhani Kauppi, Jimmy Kjaellman, and Tero Kauppinen for their work on systems supporting some of the ideas discussed in this memo, and Dave Thaler, Daniel Migault, Karl Norrman, and Christian Schaefer for significant feedback on an early version of this draft. The authors would also like to thank Marcus Ihlar, Maria Luisa Mas, Miguel Angel Munos De La Torre Alonso, Jukka Ylitalo, Bengt Sahlin, Tomas Mecklin, Ben Smeets and many others for interesting discussions in this problem space.

11. References

11.1. Normative References

[RFC1035] Mockapetris, P.V., "Domain names - implementation and specification", STD 13, RFC 1035, DOI 10.17487/RFC1035, November 1987.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.

[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017.

11.2. Informative References

[AMD] Kaplan, D., Powell, J., and T.
Woller, "AMD Memory Encryption", AMD White Paper, April 2016.

[Cambridge] Isaak, J. and M. Hanna, "User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection", Computer 51.8 (2018): 56-59, https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8436400, 2018.

[CC] Rashid, F.Y., "What Is Confidential Computing?", IEEE Spectrum, https://spectrum.ieee.org/computing/hardware/what-is-confidential-computing, May 2020.

[CCC-Deepdive] Confidential Computing Consortium, "A Technical Analysis of Confidential Computing", https://confidentialcomputing.io/whitepaper-02-latest, January 2021.

[Chautauquan] "The Chautauquan", Volume 3, Issue 9, p. 543, June 1883.

[Comparison] Mofrad, S., Zhang, F., Lu, S., and W. Shi, "A comparison study of intel SGX and AMD memory encryption technology", HASP '18, Proceedings of the 7th International Workshop on Hardware and Architectural Support for Security and Privacy, Pages 1-8, https://doi.org/10.1145/3214292.3214301, June 2018.

[Digging] Hammouchi, H., Cherqi, O., Mezzour, G., Ghogho, M., and M. El Koutbi, "Digging Deeper into Data Breaches: An Exploratory Data Analysis of Hacking Breaches Over Time", Procedia Computer Science, Volume 151, pp. 1004-1009, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2019.04.141, https://www.sciencedirect.com/science/article/pii/S1877050919306064, 2019.

[Efficient] Suh, G.E., Clarke, D., Gasend, B., van Dijk, M., and S. Devadas, "Efficient memory integrity verification and encryption for secure processors", Proceedings, 36th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-36, San Diego, CA, USA, pp. 339-350, doi: 10.1109/MICRO.2003.1253207, 2003.
[I-D.arkko-arch-infrastructure-centralisation] Arkko, J., "Centralised Architectures in Internet Infrastructure", Work in Progress, Internet-Draft, draft-arkko-arch-infrastructure-centralisation-00, 4 November 2019.

[I-D.arkko-farrell-arch-model-t-redux] Arkko, J. and S. Farrell, "Internet Threat Model Evolution: Background and Principles", Work in Progress, Internet-Draft, draft-arkko-farrell-arch-model-t-redux-01, 22 February 2021.

[I-D.iab-dedr-report] Arkko, J. and T. Hardie, "Report from the IAB Workshop on Design Expectations vs. Deployment Reality in Protocol Development", Work in Progress, Internet-Draft, draft-iab-dedr-report-01, 2 November 2020.

[I-D.ietf-dprive-dnsoquic] Huitema, C., Mankin, A., and S. Dickinson, "Specification of DNS over Dedicated QUIC Connections", Work in Progress, Internet-Draft, draft-ietf-dprive-dnsoquic-02, 22 February 2021.

[I-D.ietf-rats-architecture] Birkholz, H., Thaler, D., Richardson, M., Smith, N., and W. Pan, "Remote Attestation Procedures Architecture", Work in Progress, Internet-Draft, draft-ietf-rats-architecture-12, 23 April 2021.

[I-D.ietf-rats-eat] Mandyam, G., Lundblade, L., Ballesteros, M., and J. O'Donoghue, "The Entity Attestation Token (EAT)", Work in Progress, Internet-Draft, draft-ietf-rats-eat-10, 7 June 2021.

[I-D.ietf-tls-esni] Rescorla, E., Oku, K., Sullivan, N., and C. A. Wood, "TLS Encrypted Client Hello", Work in Progress, Internet-Draft, draft-ietf-tls-esni-11, 14 June 2021.

[I-D.lazanski-smart-users-internet] Lazanski, D., "An Internet for Users Again", Work in Progress, Internet-Draft, draft-lazanski-smart-users-internet-00, 8 July 2019.

[I-D.mandyam-tokbind-attest] Mandyam, G., Lundblade, L., and J.
Azen, "Attested TLS Token Binding", Work in Progress, Internet-Draft, draft-mandyam-tokbind-attest-07, 24 January 2019.

[I-D.reddy-add-server-policy-selection] Reddy, T., Wing, D., Richardson, M. C., and M. Boucadair, "DNS Server Selection: DNS Server Information with Assertion Token", Work in Progress, Internet-Draft, draft-reddy-add-server-policy-selection-08, 29 March 2021.

[I-D.thomson-tmi] Thomson, M., "Principles for the Involvement of Intermediaries in Internet Protocols", Work in Progress, Internet-Draft, draft-thomson-tmi-01, 3 January 2021.

[I-D.voit-rats-attestation-results] Voit, E., Birkholz, H., Hardjono, T., Fossati, T., and V. Scarlata, "Attestation Results for Secure Interactions", Work in Progress, Internet-Draft, draft-voit-rats-attestation-results-01, 10 June 2021.

[Innovative] Ittai, A., Gueron, S., Johnson, S., and V. Scarlata, "Innovative Technology for CPU Based Attestation and Sealing", HASP'2013, 2013.

[Mem] Henson, M. and S. Taylor, "Memory encryption: a survey of existing techniques", ACM Computing Surveys, Volume 46, Issue 4, 2014.

[MozTRR] Mozilla, "Security/DOH-resolver-policy", https://wiki.mozilla.org/Security/DOH-resolver-policy, 2019.

[PDoT] Nakatsuka, Y., Paverd, A., and G. Tsudik, "PDoT: Private DNS-over-TLS with TEE Support", Digit. Threat.: Res. Pract., Vol. 2, No. 1, Article 3, https://dl.acm.org/doi/fullHtml/10.1145/3431171, February 2021.

[RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC Text on Security Considerations", BCP 72, RFC 3552, DOI 10.17487/RFC3552, July 2003.

[RFC7258] Farrell, S. and H. Tschofenig, "Pervasive Monitoring Is an Attack", BCP 188, RFC 7258, DOI 10.17487/RFC7258, May 2014.

[RFC7626] Bortzmeyer, S., "DNS Privacy Considerations", RFC 7626, DOI 10.17487/RFC7626, August 2015.
[RFC7858] Hu, Z., Zhu, L., Heidemann, J., Mankin, A., Wessels, D., and P. Hoffman, "Specification for DNS over Transport Layer Security (TLS)", RFC 7858, DOI 10.17487/RFC7858, May 2016.

[RFC8324] Klensin, J., "DNS Privacy, Authorization, Special Uses, Encoding, Characters, Matching, and Root Structure: Time for Another Look?", RFC 8324, DOI 10.17487/RFC8324, February 2018.

[RFC8484] Hoffman, P. and P. McManus, "DNS Queries over HTTPS (DoH)", RFC 8484, DOI 10.17487/RFC8484, October 2018.

[RFC8558] Hardie, T., Ed., "Transport Protocol Path Signals", RFC 8558, DOI 10.17487/RFC8558, April 2019.

[RFC8932] Dickinson, S., Overeinder, B., van Rijswijk-Deij, R., and A. Mankin, "Recommendations for DNS Privacy Service Operators", BCP 232, RFC 8932, DOI 10.17487/RFC8932, October 2020.

[SGX] Hoekstra, M.E., "Intel(R) SGX for Dummies (Intel(R) SGX Design Objectives)", Intel, https://software.intel.com/content/www/us/en/develop/blogs/protecting-application-secrets-with-intel-sgx.html, September 2013.

[SmartTV] Malkin, N., Bernd, J., Johnson, M., and S. Egelman, "What Can't Data Be Used For? Privacy Expectations about Smart TVs in the U.S.", European Workshop on Usable Security (EuroUSEC), https://www.ndss-symposium.org/wp-content/uploads/2018/06/eurousec2018_16_Malkin_paper.pdf, 2018.

[Stallman] Stallman, R., "Can You Trust Your Computer?", GNU.org, https://www.gnu.org/philosophy/can-you-trust.html, n.d.

[Toys] Chu, G., Apthorpe, N., and N. Feamster, "Security and Privacy Analyses of Internet of Things Children's Toys", IEEE Internet of Things Journal 6.1 (2019): 978-985, https://arxiv.org/pdf/1805.02751.pdf, 2019.

[Unread] Obar, J. and A.
Oeldorf, "The biggest lie on the internet: Ignoring the privacy policies and terms of service policies of social networking services", Information, Communication and Society (2018): 1-20, 2018.

[Vastaamo] Redcross Finland, "Read this if your personal data was leaked in the Vastaamo data system break-in", https://www.redcross.fi/news/20201029/read-if-your-personal-data-was-leaked-vastaamo-data-system-break, October 2020.

Authors' Addresses

Jari Arkko
Ericsson

Email: jari.arkko@ericsson.com

Jiri Novotny
Ericsson

Email: jiri.novotny@ericsson.com