Network Working Group                                          J. Arkko
Internet-Draft                                                 Ericsson
Intended status: Informational                               S. Farrell
Expires: September 10, 2020                      Trinity College Dublin
                                                          March 09, 2020

        Challenges and Changes in the Internet Threat Model
                  draft-arkko-farrell-arch-model-t-03

Abstract

Communications security has been at the center of many security improvements in the Internet.
The goal has been to ensure that communications are protected against outside observers and attackers.

This memo suggests that the existing RFC 3552 threat model, while important and still valid, is no longer alone sufficient to cater for the pressing security and privacy issues seen on the Internet today. For instance, it is often also necessary to protect against endpoints that are compromised, malicious, or whose interests simply do not align with the interests of users. While such protection is difficult, there are some measures that can be taken and we argue that investigation of these issues is warranted.

It is particularly important to ensure that as we continue to develop Internet technology, non-communications security related threats, and privacy issues, are properly understood.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 10, 2020.

Copyright Notice

Copyright (c) 2020 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document.
Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Observations
     2.1.  Communications Security Improvements
     2.2.  Beyond Communications Security
     2.3.  Examples
       2.3.1.  Deliberate adversarial behaviour in applications
       2.3.2.  Inadvertent adversarial behaviours
   3.  Analysis
     3.1.  The Role of End-to-end
     3.2.  Trusted networks
       3.2.1.  Even closed networks can have compromised nodes
     3.3.  Balancing Threats
   4.  Areas requiring more study
   5.  Guidelines
   6.  Potential changes in BCP 72/RFC 3552
     6.1.  Simple change
     6.2.  Additional discussion of compromises
     6.3.  Guidance with regards to communications security
       6.3.1.  Limiting time scope of compromise
       6.3.2.  Forcing active attack
       6.3.3.  Traffic analysis
       6.3.4.  Containing compromise of trust points
   7.  Potential Changes in BCP 188/RFC 7258
   8.  Conclusions
   9.  Informative References
   Appendix A.  Contributors
   Appendix B.  Acknowledgements
   Authors' Addresses

1.  Introduction

Communications security has been at the center of many security improvements in the Internet. The goal has been to ensure that communications are protected against outside observers and attackers. At the IETF, this approach has been formalized in BCP 72 [RFC3552], which defined the Internet threat model in 2003.

The purpose of a threat model is to outline what threats exist in order to assist the protocol designer. But RFC 3552 also ruled some threats to be in scope and of primary interest, and some threats out of scope [RFC3552]:

   The Internet environment has a fairly well understood threat model.  In general, we assume that the end-systems engaging in a protocol exchange have not themselves been compromised.  Protecting against an attack when one of the end-systems has been compromised is extraordinarily difficult.  It is, however, possible to design protocols which minimize the extent of the damage done under these circumstances.

   By contrast, we assume that the attacker has nearly complete control of the communications channel over which the end-systems communicate.  This means that the attacker can read any PDU (Protocol Data Unit) on the network and undetectably remove, change, or inject forged packets onto the wire.

However, the communications-security-only threat model is becoming outdated. Some of the causes for this are:

o  Success!  Advances in protecting most of our communications with strong cryptographic means.  This has resulted in much improved communications security, but also highlights the need for addressing other, remaining issues.
This is not to say that communications security is not important; it still is, and improvements are still needed.  Not all communications have been protected, and even for those that have, not all of their aspects have been fully protected.  Fortunately, there are ongoing projects working on improvements.

o  Adversaries have increased their pressure against other avenues of attack, from supply-chain attacks, to compromising devices, to legal coercion of centralized endpoints in conversations.

o  New adversaries and risks have arisen, e.g., due to the creation of large centralized information sources.

o  While communications security does seem to be required to protect privacy, more is needed, especially if endpoints choose to act against the interests of their peers or users.

In short, attacks are migrating towards the currently easier targets, which no longer necessarily include direct attacks on traffic flows. In addition, trading information about users and the ability to influence them has become a common practice for many Internet services, often without users understanding those practices.

This memo suggests that the existing threat model, while important and still valid, is no longer alone sufficient to cater for the pressing security and privacy issues on the Internet. For instance, while it continues to be very important to protect Internet communications against outsiders, it is also necessary to protect systems against endpoints that are compromised, malicious, or whose interests simply do not align with the interests of the users.

Of course, there are many trade-offs in the Internet regarding whom one chooses to interact with and why or how. It is not the role of this memo to dictate those choices. But it is important that we understand the implications of different practices.
It is also important that when it comes to basic Internet infrastructure, our chosen technologies lead to minimal exposure with respect to the non-communications threats.

It is particularly important to ensure that non-communications security related threats are properly understood for any new Internet technology. While the consideration of these issues is relatively new in the IETF, this memo provides some initial ideas about potential broader threat models to consider when designing protocols for the Internet or when trying to defend against pervasive monitoring. Further down the road, updated threat models could result in changes to BCP 72 [RFC3552] (guidelines for writing security considerations) and BCP 188 [RFC7258] (pervasive monitoring), to include proper consideration of non-communications security threats.

It may also be necessary to have dedicated guidance on how systems design and architecture affect security. The sole consideration of communications security aspects in designing Internet protocols may lead to accidental or increased impact of security issues elsewhere. For instance, allowing a participant to unnecessarily collect or receive information may lead to a similar effect as described in [RFC8546] for protocols: over time, unnecessary information will get used, with all the associated downsides, regardless of what deployment expectations there were during protocol design.

This memo does not stand alone. To begin with, it is a merge of earlier work by the two authors [I-D.farrell-etm] [I-D.arkko-arch-internet-threat-model]. There are also other documents discussing this overall space, e.g., [I-D.lazanski-smart-users-internet] and [I-D.arkko-arch-dedr-report].

The authors of this memo envisage independent development of each of those (and other work) with an eventual goal to extract an updated (but usefully brief!)
description of an extended threat model from the collection of works. We consider it an open question whether this memo, or any of the others, would be usefully published as an RFC.

The rest of this memo is organized as follows. Section 2 makes some observations about the situation, with respect to communications security and beyond. The section also provides a number of real-world examples.

Section 3 discusses some high-level implications that can be drawn, such as the need to consider what the "ends" really are in an "end-to-end" communication.

Section 4 lists some areas where additional work is required before we could feel confident in crafting guidelines, whereas Section 5 presents what we think are perhaps already credible potential guidelines, both from the point of view of system design and from the point of view of IETF procedures and recommended analysis when designing new protocols. Section 6 and Section 7 tentatively suggest some changes to current IETF BCPs in this space.

Comments are solicited on these and other aspects of this document. The best place for discussion is the model-t list (https://www.ietf.org/mailman/listinfo/model-t).

Finally, Section 8 draws some conclusions about next steps.

2.  Observations

2.1.  Communications Security Improvements

That we can even ask about improvements to the threat model is due to progress already made: the fraction of Internet traffic that is cryptographically protected has grown tremendously in the last few years. Several factors have contributed to this change, from the Snowden revelations to business reasons to better available technology such as HTTP/2 [RFC7540], TLS 1.3 [RFC8446], and QUIC [I-D.ietf-quic-transport].

In many networks, the majority of traffic has flipped from being cleartext to being encrypted.
Reaching the level of (almost) all traffic being encrypted is no longer unthinkable but rather a likely outcome within a few years.

At the same time, technology developments and policy choices have driven the scope of cryptographic protection from protecting only the pure payload to protecting much of the rest as well, including far more header and metadata information than was protected before. For instance, efforts are ongoing in the IETF to encrypt transport headers [I-D.ietf-quic-transport], server domain name information in TLS [I-D.ietf-tls-esni], and domain name queries [RFC8484].

There have also been improvements to ensure that the security protocols in use actually have suitable credentials and that those credentials have not been compromised; see, for instance, Let's Encrypt [RFC8555], HSTS [RFC6797], HPKP [RFC7469], and Expect-CT [I-D.ietf-httpbis-expect-ct].

This is not to say that all problems in communications security have been resolved, far from it. But the situation is definitely different from what it was a few years ago. Remaining issues are being and will be worked on; the fight between defense and attack will also continue. Communications security will stay at the top of the agenda in any Internet technology development.

2.2.  Beyond Communications Security

There are, however, significant issues beyond communications security in the Internet. To begin with, it is not necessarily clear that one can trust all the endpoints in any protocol interaction.

Of course, client endpoint implementations were never fully trusted, but the environments in which those endpoints exist are changing. For instance, users may not have as much control over their own devices as they used to, due to manufacturer-controlled operating system installations and locked device ecosystems.
And within those ecosystems, even the applications that are available tend to have privileges that users by themselves might not wish to grant, such as excessive rights to media, location, and peripherals. There are also dedicated efforts by various authorities to hack end-user devices as a means of intercepting data about the user.

The situation is different, but not necessarily better, on the server side. The pattern of communications in today's Internet is almost always via a third party that has at least as much information as the other parties have. For instance, these third parties are typically endpoints for any transport layer security connections, and are able to see much communication or other messaging in cleartext. There are some exceptions, of course, e.g., messaging applications with end-to-end confidentiality protection.

With the growth of trading in users' information by many of these third parties, it becomes necessary to take precautions against endpoints that are compromised, malicious, or whose interests simply do not align with the interests of the users.

Specifically, the following issues need attention:

o  Security of users' devices and the ability of users to control their own equipment.

o  Leaks and attacks related to data at rest.

o  Coercion of some endpoints to reveal information to authorities or surveillance organizations, sometimes even in an extra-territorial fashion.

o  Application design patterns that result in cleartext information passing through a third party or the application owner.

o  Involvement of entities that have no direct need to be involved for the sake of providing the service that the user is after.

o  Network and application architectures that result in a lot of information being collected in a (logically) central location.
o  Leverage and control points outside the hands of the users or end-user device owners.

For instance, while e-mail transport security [RFC7817] has become much more widely deployed in recent years, progress in securing e-mail messages between users has been much slower. This has led to a situation where e-mail content is considered a critical resource by some mail service providers, who use the content for machine learning, advertisement targeting, and other purposes unrelated to message delivery. Equally, however, it is unclear how some useful anti-spam techniques could be deployed in an end-to-end encrypted mail universe (with today's end-to-end mail security protocols), and there are many significant challenges should one desire to deploy end-to-end email security at scale.

The Domain Name System (DNS) shows signs of ageing but, due to the legacy of deployed systems, has changed very slowly. Newer technology [RFC8484] developed at the IETF enables DNS queries to be performed with confidentiality and authentication (of a recursive resolver), but its initial deployment is happening mostly in browsers that use global DNS resolver services, such as Cloudflare's 1.1.1.1 or Google's 8.8.8.8. This results in faster evolution and better security for end users.

However, if one steps back and considers the potential security and privacy effects of these developments, the outcome could appear different. While the security and confidentiality of the protocol exchanges improves with the introduction of this new technology, it could at the same time lead to a move from using (what appears to be) a large worldwide distributed set of DNS resolvers to a far smaller set of centralised global resolvers.
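As an illustration of the mechanism, the DNS-over-HTTPS encoding of [RFC8484] wraps an ordinary DNS query in an HTTPS request; in the GET variant, the DNS message is base64url-encoded (without padding) into a "dns" query parameter. The sketch below, which is not part of the draft and uses a hypothetical resolver host name, builds such a request URL:

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    # DNS header: ID=0 (RFC 8484 recommends ID 0 for cache friendliness),
    # flags with RD=1, one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    # QTYPE (1 = A record) and QCLASS (1 = IN).
    return header + qname + struct.pack("!HH", qtype, 1)

def doh_get_url(resolver_host: str, name: str) -> str:
    # RFC 8484 GET variant: base64url-encode the DNS message, strip the
    # "=" padding, and pass it in the "dns" query parameter.
    q = base64.urlsafe_b64encode(build_dns_query(name)).rstrip(b"=").decode()
    return f"https://{resolver_host}/dns-query?dns={q}"
```

For example, doh_get_url("doh.example.net", "example.com") produces the URL a client would fetch over HTTPS; whichever resolver host is configured here is the party that sees every query, which is exactly the centralisation concern discussed above.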
While these resolvers are very well maintained (and a great service), they are potential high-value targets for pervasive monitoring and Denial-of-Service (DoS) attacks. In 2016, for example, DoS attacks were launched against Dyn [DynDDoS], then one of the largest DNS providers, leading to some outages. It is difficult to imagine that DNS resolvers would not be a target in many future attacks or pervasive monitoring projects.

Unfortunately, there is little that even large service providers can do to avoid being a DDoS target (though anycast and other DDoS mitigations can certainly help when one is targeted), nor to refuse authority-sanctioned pervasive monitoring. As a result, it seems that a reasonable defense strategy may be to aim for outcomes where such highly centralised control points are unnecessary or do not handle sensitive data. (Recall that with the DNS, the metadata about the requestor and the act of requesting an answer are what is potentially sensitive, rather than the content of the answer.)

There are other examples of the perils of centralised solutions in Internet infrastructure. The DNS example involves an interesting combination of information flows (who is asking for what domain names) as well as a potential ability to exert control (what domains will actually resolve to an address). Routing systems are primarily about control. While there are intra-domain centralized routing solutions (such as PCE [RFC4655]), control within a single administrative domain is usually not the kind of centralization that we would be worried about. Global centralization would be much more concerning. Fortunately, global Internet routing is performed among peers. However, controls could be introduced even in this global, distributed system.
To secure some of the control exchanges, the Resource Public Key Infrastructure (RPKI) system [RFC6480] allows selected Certification Authorities (CAs) to help drive decisions about which participants in the routing infrastructure can make what claims. If this system were globally centralized, it would be a concern, but again, fortunately, current designs involve at least regional distribution.

In general, many recent attacks relate more to information than communications. For instance, personal information leaks typically happen via information stored on a compromised server rather than via capturing communications. There is little hope that such attacks can be prevented entirely. Again, the best course of action seems to be to avoid the disclosure of information in the first place, or at least not to do so in a manner that makes it possible for others to readily use the information.

2.3.  Examples

2.3.1.  Deliberate adversarial behaviour in applications

In this section we describe some documented examples of deliberate adversarial behaviour by applications that could affect Internet protocol development. The adversarial behaviours described below involve various kinds of attack, ranging from simple fraud to credential theft, surveillance, and contributing to DDoS attacks. This is not intended to be a comprehensive or complete survey, but to motivate us to consider deliberate adversarial behaviour by applications.

While we have these examples of deliberate adversarial behaviour, there are also many examples of application developers doing their best to protect the security and privacy of their users or customers. That is just the same as the situation today where we need to consider in-network actors as potential adversaries despite the many examples of network operators who do act primarily in the best interests of their users.

2.3.1.1.  Malware in curated application stores

Despite the best efforts of curators, so-called app stores frequently distribute malware of many kinds, and one recent study [Curated] claims that simple obfuscation enables malware to avoid detection by even sophisticated operators. Given the scale of these deployments, distribution of even a small percentage of malware-infected applications can affect a huge number of people.

2.3.1.2.  Virtual private networks (VPNs)

Virtual private networks (VPNs) are supposed to hide user traffic to various degrees, depending on the particular technology chosen by the VPN provider. However, not all VPNs do what they say; some, for example, misrepresent the countries in which they provide vantage points [Vpns].

2.3.1.3.  Compromised (home) networks

What we normally consider network devices, such as home routers, also run applications that can end up being adversarial, for example running DNS and DHCP attacks from home routers targeting other devices in the home. One study [Home] reports on a 2011 attack that affected 4.5 million DSL modems in Brazil. The absence of software update [RFC8240] has been a major cause of these issues, and rises to a level that warrants considering this intentional behaviour by the device vendors who have chosen this path.

2.3.1.4.  Web tracking

One of the biggest threats to user privacy on the Web is ubiquitous tracking. This is often done to support advertising-based business models.

While some people may be sanguine about this kind of tracking, others consider this behaviour unwelcome, when or if they are informed that it happens [Attitude], though the evidence here seems somewhat harder to interpret, and many studies (that we have found to date) involve small numbers of users.
Historically, browsers have not made this kind of tracking visible and have enabled it by default, though some recent browser versions are starting to enable visibility and blocking of some kinds of tracking. Browsers are also increasingly imposing more stringent requirements on plug-ins for varied security reasons.

Third party tracking

One form of tracking is by third parties. HTTP header fields (such as cookies [RFC6265]) are commonly used for such tracking, as are structures within the content of HTTP responses, such as links to 1x1-pixel images and (ab)use of Javascript APIs offered by browsers [Tracking].

Whenever a resource is loaded from a server, that server can include a cookie which will be sent back to the server on future loads. This includes situations where the resource is loaded as a subresource on a page, such as an image or a JavaScript module. When loading a resource, the server is made aware of the top-level page on which the resource is used, because such loads include a Referer HTTP header [RFC7231] containing the top-level page from which the subresource is being loaded.

The combination of these features makes it possible to track a user across the Web. The tracker convinces a number of content sites ("first parties") to include a resource from the tracker site. This resource can perform some function such as displaying an advertisement or providing analytics to the first-party site. But the resource may also simply be a tracker. When the user visits one of the content sites, the tracker receives both a Referer header and the cookie. For an individual user with a particular browser, the cookie is the same regardless of which site the tracker is embedded on. This allows the tracker to observe which pages within the set of content sites the user visits.
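The correlation step described above can be sketched in a few lines. The log entries, site names, and cookie values below are invented for illustration; the point is only that a stable third-party cookie plus the Referer header is enough to assemble a cross-site browsing profile:

```python
from collections import defaultdict

# Hypothetical tracker-side log: each entry is what the tracker's server
# sees when a first-party page embeds its resource, namely the tracker's
# own cookie plus the Referer header naming the embedding page.
tracker_log = [
    {"cookie": "uid=abc123", "referer": "https://news.example/article-1"},
    {"cookie": "uid=xyz789", "referer": "https://shop.example/cart"},
    {"cookie": "uid=abc123", "referer": "https://shop.example/item-42"},
    {"cookie": "uid=abc123", "referer": "https://health.example/symptoms"},
]

def browsing_profiles(entries):
    # Group the first-party pages by tracker cookie: each per-cookie list
    # is exactly the cross-site browsing profile described in the text.
    profiles = defaultdict(list)
    for entry in entries:
        profiles[entry["cookie"]].append(entry["referer"])
    return dict(profiles)
```

Running browsing_profiles(tracker_log) groups three unrelated first-party sites under the single cookie "uid=abc123", without any of those sites sharing data with each other directly.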
The resulting information is commonly used for targeting advertisements, but it can also be used for other purposes.

This capability in itself constitutes a major threat to user privacy. Additional techniques such as cookie syncing, identifier correlation, and fingerprinting make the problem even worse.

As a given tracker will not be present on all sites, that tracker has incomplete coverage. However, trackers often collude (a practice called "cookie syncing") to combine the information from different tracking cookies.

Sometimes trackers will be embedded on a site which collects a user identifier, such as a social media identity or an e-mail address. If the site can inform the tracker of the identifier, that allows the tracker to tie the identifier to the cookie.

While a browser may block cookies, fingerprinting the browser often still allows tracking the user. For instance, features such as the User-Agent string, plugin and font support, screen resolution, and timezone can yield a fingerprint that is sometimes unique to a single user [AmIUnique] and which persists beyond cookie scope and lifetime. Even in cases where this fingerprint is not unique, the anonymity set may be sufficiently small that, coupled with other data, it yields a unique per-user identifier. Fingerprinting of this type is more prevalent on systems and platforms where data-set features are flexible, such as desktops, where plugins are more commonly in use. Fingerprinting prevention is an active research area; see [Boix2018] for more information.

Other types of tracking linked to web tracking

Third party web tracking is not the only concern. An obvious tracking danger also exists in popular ecosystems, such as social media networks, that house a large part of many users' online existence.
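The fingerprinting technique discussed above amounts to hashing a set of observable browser attributes into a stable identifier that needs no stored cookie. A minimal sketch, with attribute names and values invented for illustration:

```python
import hashlib

def browser_fingerprint(attrs: dict) -> str:
    # Serialise the observable attributes in a fixed order and hash them.
    # The same configuration always maps to the same identifier, with no
    # state stored on the client.
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Invented attribute values, standing in for what a page script can observe.
observed = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Europe/Dublin",
    "fonts": "Arial,DejaVu Sans,Noto",
}
```

Note that the fingerprint changes whenever any attribute changes, which is why real fingerprinting systems combine many attributes and tolerate partial matches; this sketch shows only the basic idea.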
There is no need for a third party to track the user's browsing, as all actions are performed within a single site, where most messaging, viewing, and sharing activities happen.

Browsers themselves, or services used by the browser, can also become a potential source of tracking users. For instance, the URL/search bar service may leak information about the user's actions to a search provider via an "autocomplete" feature [Leith2020].

Tracking through users' IP addresses or DNS queries is also a danger. This may happen by directly observing the cleartext IP or DNS traffic, though DNS tracking may be preventable via DNS protocols that are secured end-to-end. But the DNS queries are also (by definition) seen by the DNS recursive resolver service in use, which may accidentally or otherwise track the users' activities. This is particularly problematic if a large number of users employ either a commonly used ISP service or an Internet-based resolver service [I-D.arkko-arch-infrastructure-centralisation]. Conversely, use of a DNS recursive resolver that sees little traffic could equally be used for tracking. Similarly, other applications, such as mail or instant messaging protocols that can carry HTML content, can be integrated with web tracking (see Section 2.3.1.6).

2.3.1.5.  Web site policy deception

Many web sites today provide some form of privacy policy and terms of service that are known to be mostly unread [Unread]. This implies that, legal fiction aside, users of those sites have not in reality agreed to the specific terms published, and so users are highly exposed to being exploited by web sites; for example, [Cambridge] is a recent well-publicised case where a service provider abused the data of 87 million users via a partnership. While many web site operators claim that they care deeply about privacy, it seems prudent to assume that some (or most?)
do not in fact care about user privacy, or at least not in ways with
which many of their users would agree.  And of course, today's web
sites are mostly fairly complex web applications rather than static
sets of HTML files, so calling them "web sites" is perhaps a
misnomer; but considered as web applications, which may for example
link in advertising networks, it seems clear that many exist that are
adversarial.

2.3.1.6.  Tracking bugs in mail

Some mail user agents (MUAs) render HTML content by default (with a
subset not allowing that to be turned off, perhaps particularly on
mobile devices) and thus enable the same kind of adversarial tracking
seen on the web.  Attempts at such intentional tracking are seen many
times per day by email users - in one study [Mailbug] the authors
estimated that 62% of leakage to third parties was intentional, for
example where leaked data included a hash of the recipient email
address.

2.3.1.7.  Troll farms in online social networks

Online social network applications/platforms are well known to be
vulnerable to troll farms, sometimes with tragic consequences, where
organised/paid sets of users deliberately abuse the application
platform for reasons invisible to a normal user.  For-profit
companies building online social networks are well aware that subsets
of their "normal" users are anything but.  In one US study [Troll],
sets of troll accounts were roughly equally distributed on both sides
of a controversial discussion.  While Internet protocol designers do
sometimes consider sybil attacks [Sybil], arguably we have not
provided mechanisms to handle such attacks sufficiently well,
especially when they occur within walled gardens.
Equally, one can make the case that some online social networks, at
some points in their evolution, appear to have prioritised counts of
active users so highly that they have failed to invest sufficient
effort in detecting such troll farms.

2.3.1.8.  Smart televisions

There have been examples of so-called "smart" televisions spying on
their owners, and one survey of user attitudes [SmartTV] found "broad
agreement was that it is unacceptable for the data to be repurposed
or shared", although the level of user understanding may be
questionable.  What is clear, though, is that such devices generally
have not provided controls that would allow their owners to make a
meaningful decision as to whether or not they want to share such
data.

2.3.1.9.  Internet of things

Internet of Things (IoT) devices (which might be "so-called Internet
of Things", as all devices were already things:-) have been found
deficient when their security and privacy aspects were analysed, for
example children's toys [Toys].  While in some cases this may be due
to incompetence rather than deliberately adversarial behaviour, the
levels of incompetence frequently seen imply these aspects have
simply not been considered a priority.

2.3.1.10.  Attacks leveraging compromised high-level DNS
infrastructure

Recent attacks [DeepDive] against DNS infrastructure enable
subsequent targeted attacks on specific application layer sources or
destinations.  The general method appears to be to attack DNS
infrastructure - in these cases infrastructure that is towards the
top of the DNS naming hierarchy and "far" from the presumed targets -
in order to be able to fake DNS responses to a PKI, thereby acquiring
TLS server certificates so as to subsequently attack TLS connections
from clients to services (with clients directed to an attacker-owned
server via additional fake DNS responses).
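The core of that attack chain can be modelled as a toy sketch (the
names, addresses, and the single lookup table standing in for DNS are
illustrative only; the real attacks involved compromise of registrar
and registry systems rather than simple record injection):

```python
# Toy model: a CA that validates domain control via unsigned DNS
# answers will mis-issue when those answers have been faked.
# Names/addresses are hypothetical (RFC 5737 example ranges).

dns = {"mail.example.net": "192.0.2.1"}  # the legitimate record

def ca_issues_cert(name: str, requester_ip: str) -> bool:
    # The CA "validates" by checking that the name resolves to the
    # requester.  Without DNSSEC validation it cannot distinguish a
    # genuine answer from one injected upstream.
    return dns.get(name) == requester_ip

# Step 1: the attacker compromises DNS infrastructure "far" from the
# target and redirects the name to an attacker-controlled host.
dns["mail.example.net"] = "198.51.100.7"

# Step 2: the PKI now issues a trusted certificate to the attacker
# for the victim's name ...
assert ca_issues_cert("mail.example.net", "198.51.100.7")

# Step 3: ... so clients directed to 198.51.100.7 by further fake
# DNS responses see a protocol-valid TLS endpoint and hand over
# their credentials.
```

The sketch makes the later point concrete: every protocol mechanism
involved behaves correctly in isolation; it is the composition, under
compromised infrastructure, that fails.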
Attackers in these cases seem well resourced and patient - with
"practice" runs over months, and with attack durations being
infrequent and short (e.g., 1 hour) before the attacker withdraws.

These are sophisticated multi-protocol attacks, where weaknesses
related to deployment of one protocol (DNS) bootstrap attacks on
another protocol (e.g., IMAP/TLS), via abuse of a third protocol
(ACME), partly in order to capture user IMAP login credentials, so as
to be able to harvest message store content from a real message
store.

The fact that many mail clients regularly poll their message store
means that a 1-hour attack is quite likely to harvest many cleartext
passwords or crackable password hashes.  The real IMAP server in such
a case just sees fewer connections during the "live" attack, and some
additional connections later.  Even heavy email users who might
notice a slight gap in email arrivals would likely attribute that to
some network or service outage.

In many of these cases the paucity of DNSSEC-signed zones (about 1%
of existing zones) and the fact that many resolvers do not enforce
DNSSEC validation (e.g., in some mobile operating systems) assisted
the attackers.

It is also notable that some of the personnel dealing with these
attacks against infrastructure entities are authors of RFCs and
Internet-Drafts.  That we haven't provided protocol tools that better
protect against these kinds of attack ought to hit "close to home"
for the IETF.

In terms of the overall argument being made here, the PKI and DNS
interactions, and the last step in the "live" attack, all involve
interaction with a deliberately adversarial application.  Later, use
of acquired login credentials to harvest message store content
involves an adversarial client application.
In all cases, a TLS implementation's PKI and TLS protocol code will
see the fake endpoints as protocol-valid, even if, in the real world,
they are clearly fake.  This appears to be a good argument that our
current threat model is lacking in some respect(s), even as applied
to our currently most important security protocol (TLS).

2.3.1.11.  BGP hijacking

There is a clear history of BGP hijacking [BgpHijack] being used to
ensure endpoints connect to adversarial applications.  As in the
previous example, such hijacks can be used to trick a PKI into
issuing a certificate for a fake entity.  Indeed, one study
[HijackDet] used the emergence of new web server TLS key pairs during
the event (detected via Internet-wide scans) as a distinguisher
between one form of deliberate BGP hijacking and inadvertent route
leaks.

2.3.1.12.  Anti-virus vendor selling user clickstream data

An anti-virus product vendor was feeding user clickstream data to a
subsidiary that then sold supposedly "anonymised" but highly detailed
data on to unrelated parties [avleak].  After browser makers had
removed that vendor's browser extension from their online stores, the
anti-virus product itself apparently took over data collection,
initially offering users only an opt-out, with the result that
apparently few users were even aware of the data collection, never
mind the subsequent clickstream sales.  Very shortly after
publication of [avleak], the anti-virus vendor announced they were
closing down the subsidiary.

2.3.2.  Inadvertent adversarial behaviours

Not all adversarial behaviour by applications is deliberate; some is
likely due to various levels of carelessness (some quite
understandable, others not) and/or due to erroneous assumptions about
the environments in which those applications (now) run.
We very briefly list some such cases:

o  Application abuse for command and control, for example, use of IRC
   or Apache logs for [CommandAndControl]

o  Carelessly leaky data stores [LeakyBuckets], for example, lots of
   Amazon S3 leaks showing that careless admins can too easily cause
   application server data to become available to adversaries

o  Virtualisation exposing secrets, for example, Meltdown and Spectre
   [MeltdownAndSpectre] [Kocher2019] [Lipp2018] and other similar
   side-channel attacks

o  Compromised, badly-maintained web sites that, for example, have
   led to massive online password leaks [Passwords]

o  Supply-chain attacks, for example, the [TargetAttack] or malware
   within pre-installed applications on Android phones [Bloatware]

o  Breaches of major service providers, that many of us might have
   assumed would be sufficiently capable to be the best large-scale
   "identity providers", for example:

   *  3 billion accounts: https://www.wired.com/story/yahoo-breach-
      three-billion-accounts/

   *  "up to 600M" account passwords stored in clear:
      https://www.pcmag.com/news/367319/facebook-stored-up-to-600m-
      user-passwords-in-plain-text

   *  many millions at risk: https://www.zdnet.com/article/us-telcos-
      caught-selling-your-location-data-again-senator-demands-new-
      laws/

   *  50 million accounts: https://www.cnet.com/news/facebook-breach-
      affected-50-million-people/

   *  14 million accounts: https://www.zdnet.com/article/millions-
      verizon-customer-records-israeli-data/

   *  "hundreds of thousands" of accounts:
      https://www.wsj.com/articles/google-exposed-user-data-feared-
      repercussions-of-disclosing-to-public-1539017194

   *  unknown numbers, some email content exposed:
      https://motherboard.vice.com/en_us/article/ywyz3x/hackers-
      could-read-your-hotmail-msn-outlook-microsoft-customer-support

o  Breaches of smaller service providers: too many to enumerate,
   sadly
3.  Analysis

3.1.  The Role of End-to-end

[RFC1958] notes that "end-to-end functions can best be realised by
end-to-end protocols":

   The basic argument is that, as a first principle, certain required
   end-to-end functions can only be performed correctly by the end-
   systems themselves.  A specific case is that any network, however
   carefully designed, will be subject to failures of transmission at
   some statistically determined rate.  The best way to cope with
   this is to accept it, and give responsibility for the integrity of
   communication to the end systems.  Another specific case is end-
   to-end security.

The "end-to-end argument" was originally described by Saltzer et al.
[Saltzer].  They said:

   The function in question can completely and correctly be
   implemented only with the knowledge and help of the application
   standing at the endpoints of the communication system.  Therefore,
   providing that questioned function as a feature of the
   communication system itself is not possible.

These functional arguments align with other, practical arguments
about the evolution of the Internet under the end-to-end model.  The
endpoints evolve quickly, often requiring only that one party change
the necessary software on both ends, whereas waiting for network
upgrades would potentially involve a large number of parties, from
application owners to multiple network operators.

The end-to-end model supports permissionless innovation, where new
innovation can flourish in the Internet without excessive waiting for
other parties to act.

But the details matter.  What is considered an endpoint?  What
characteristics of the Internet are we trying to optimize?
This memo makes the argument that, for security purposes, there is a
significant distinction between actual endpoints from a user's
interaction perspective (e.g., another user) and endpoints from a
system perspective (e.g., a third party relaying a message).

This memo proposes to focus on the distinction between "real ends"
and other endpoints to guide the development of protocols.  A
conversation between one "real end" and another "real end" has
necessarily different security needs than a conversation between,
say, one of the "real ends" and a component in a larger system.  The
end-to-end argument is used primarily for the design of one protocol.
The security of the system, however, depends on the entire system and
potentially multiple storage, compute, and communication protocol
aspects.  All have to work properly together to obtain security.

For instance, a transport connection between two components of a
system is not an end-to-end connection even if it encompasses all the
protocol layers up to the application layer.  It is not end-to-end if
the information or control function it carries actually extends
beyond those components.  For instance, just because an e-mail server
can read the contents of an e-mail message does not make it a
legitimate recipient of the e-mail.

This memo also proposes to focus on the "need to know" aspect in
systems.  Information should not be disclosed, stored, or routed in
cleartext through parties that do not absolutely need to have that
information.

The proposed argument about real ends is as follows:

   Application functions are best realised by the entities directly
   serving the users, and when more than one entity is involved, by
   end-to-end protocols.  The role and authority of any additional
   entities necessary to carry out a function should match their part
   of the function.
   No information or control roles should be provided to these
   additional entities unless required by the function they provide.

For instance, a particular piece of information may be necessary for
the other real endpoint, such as message contents for another user.
The same piece of information may not be necessary for any additional
parties, unless the information has to do with, say, routing the
message to reach the other user.  When information is only needed by
the actual other endpoint, it should be protected and relayed only to
that endpoint.  Protocol design should ensure that the additional
parties do not have access to the information.

Note that it may well be that the easiest design approach is to send
all information to a third party and have the majority of actual
functionality reside in that third party.  But this is a case of a
clear tradeoff between ease of change, by evolving that third party,
vs. providing reasonable security against misuse of information.

Note that the above "real ends" argument is not limited to
communication systems.  Even an application that does not communicate
with anyone other than its user may be implemented on top of a
distributed system where some information about the user is exposed
to untrusted parties.

The implications of system security also extend beyond information
and control aspects.  For instance, poorly designed component
protocols can become DoS vectors which are then used to attack other
parts of the system.  Availability is an important aspect to consider
in the analysis alongside other aspects.

3.2.  Trusted networks

Some systems are thought of as being deployed only in a closed
setting, where all the relevant nodes are under direct control of the
network administrators.
Technologies developed for such networks tend to be optimized, at
least initially, for these environments, and may lack security
features necessary for different types of deployments.

It is well known that many such systems evolve over time, grow, and
get used and connected in new ways.  For instance, collaboration and
mergers between organizations, and new services for customers, may
change the system or its environment.  A system that used to be truly
within an administrative domain may suddenly need to cross network
boundaries or even run over the Internet.  As a result, it is also
well known that it is good to ensure that underlying technologies
used in such systems can cope with that evolution, for instance, by
having the necessary security capabilities to operate in different
environments.

In general, the outside-vs.-inside security model is outdated for
most situations, due to complex and evolving networks and the need to
support a mixture of devices from different sources (e.g., BYOD
networks).  Network virtualization also implies that previously clear
notions of local area networks and physical proximity may create an
entirely different reality from what appears from a simple notion of
a local network.

Similarly, even trusted, well-managed parties can be problematic,
even when operating openly in the Internet.  Systems that collect
data from a large number of Internet users, or that are used by a
large number of devices, have some inherent issues: large data stores
attract attempts to use that data in a manner that is not consistent
with the users' interests.  They can also become single points of
failure through network management, software, or business failures.
See also [I-D.arkko-arch-infrastructure-centralisation].
3.2.1.  Even closed networks can have compromised nodes

This memo argues that the situation is even more dire than what was
explained above.  It is impossible to ensure that all components in a
network are actually trusted.  Even in a closed network with
carefully managed components there may be compromised components, and
this should be factored into the design of the system and the
protocols used in the system.

For instance, during the Snowden revelations it was reported that
internal communication flows of large content providers were
compromised in an effort to acquire information about large numbers
of end users.  This shows the need to protect not just communications
targeted to go over the Internet, but in many cases also internal and
control communications.

Furthermore, given the danger of compromised nodes, communications
security alone will be insufficient protection.  The defences against
this include limiting information within networks to the parties that
have a need to know, as well as limiting control capabilities.  This
is necessary even when all the nodes are under the control of the
same network manager; the network manager needs to assume that some
nodes and communications will be compromised, and build a system to
mitigate or minimise attacks even under that assumption.

Even airgapped networks can have these issues, as evidenced, for
instance, by the Stuxnet worm.  The Internet is not the only form of
connectivity, as most systems include, for instance, USB ports, which
proved to be the Achilles heel of the targets in the Stuxnet case.
More commonly, every system runs a large amount of software, and it
is often not practical, or even possible, to prevent compromised code
even in a high-security setting, let alone in commercial or private
networks.
Installation media, physical ports, both open source and proprietary
programs, firmware, or even innocent-looking components on a circuit
board can be suspect.  In addition, complex underlying computing
platforms, such as modern CPUs with underlying security and
management tools, are prone to problems.

In general, this means that one cannot entirely trust even a closed
system where you picked all the components yourself.  Analysis of the
security of many interesting real-world systems now commonly needs to
include cross-component attacks, e.g., the use of car radios and
other externally communicating devices as part of attacks launched
against control components such as the brakes in a car [Savage].

3.3.  Balancing Threats

Note that not all information needs to be protected, and not all
threats can be protected against.  But it is important that the main
threats are understood and protected against.

Sometimes there are higher-level mechanisms that provide safeguards
for failures.  For instance, it is very difficult in general to
protect against denial of service by compromised nodes on a
communications path.  However, it may be possible to detect that a
service has failed.

Another example is from packet-carrying networks.  Payload traffic
that has been properly protected with encryption does not provide
much value to an attacker.  For instance, it does not always make
sense to encrypt every packet transmission in a packet-carrying
system where the traffic is already encrypted at other layers.  But
it almost always makes sense to protect control communications and to
understand the impacts of compromised nodes, particularly control
nodes.

4.  Areas requiring more study

In addition to the guidelines in Section 5, we suggest there may be
value in further study on the topics below, with the goal of
producing more concrete guidelines.
1.  Isolation: Sophisticated users can sometimes deal with
    adversarial behaviours in applications by using different
    instances of those applications, for example, differently
    configured web browsers for use in different contexts.
    Applications (including web browsers) and operating systems are
    also building in isolation via use of different processes or
    sandboxing.  Protocol artefacts that relate to uses of such
    isolation mechanisms might be worth considering.  To an extent,
    the IETF has in practice already recognised some of these issues
    as being in scope, e.g., when considering the linkability issues
    with mechanisms such as TLS session tickets or QUIC connection
    identifiers.

2.  Controlling Tracking: Web browsers have a central role in the
    deployment of anti-tracking technologies.  A number of browsers
    have started adding these technologies [Mozilla2019], but this is
    a rapidly moving field, so it is difficult to fully characterize
    in this memo.  The mechanisms used can be as simple as blocking
    communication with known trackers, or more complex, such as
    identifying trackers and suppressing their ability to store and
    access cookies and other state.  Browsers may also treat each
    third-party load on different first-party sites as a different
    context, thereby isolating cookies and other state, such as TLS-
    layer information (this technique is called "Double Keying"
    [DoubleKey]).  The further development of browser-based anti-
    tracking technology is important, but it is also important to
    ensure that browsers do not themselves enable new data collection
    points, e.g., via search, DNS, or other functions.

3.  Transparency: Certificate Transparency (CT) [RFC6962] has been an
    effective countermeasure for X.509 certificate mis-issuance,
    which used to be a known application layer misbehaviour in the
    public web PKI.
    CT can also help with post-facto detection of some infrastructure
    attacks where BGP or DNS weaknesses have been leveraged so that
    some certification authority is tricked into issuing a
    certificate for the wrong entity.  While the context in which CT
    operates is very constrained (essentially to the public CAs
    trusted by web browsers), similar approaches could perhaps be
    useful for other protocols or technologies.  In addition,
    legislative requirements such as those imposed by the GDPR
    [GDPRAccess] could lead to a desire to handle internal data
    structures and databases in ways that are reminiscent of CT,
    though clearly with significant authorisation being required and
    without the append-only nature of a CT log.

4.  Same-Origin Policy: The Same-Origin Policy (SOP) [RFC6454]
    perhaps already provides an example of how going beyond the RFC
    3552 threat model can be useful.  Arguably, the existence of the
    SOP demonstrates that at least web browsers already consider the
    3552 model as being too limited.  (Clearly, differentiating
    between same and not-same origins implicitly assumes that some
    origins are not as trustworthy as others.)

5.  Greasing: The TLS protocol [RFC8446] now supports the use of
    GREASE [I-D.ietf-tls-grease] as a way to mitigate on-path
    ossification.  While this technique is not likely to prevent any
    deliberate misbehaviours, it may provide a proof of concept that
    network protocol mechanisms can have impact in this space, if we
    spend the time to try to analyse the incentives of the various
    parties.

6.  Generalise OAuth Threat Model: The OAuth threat model [RFC6819]
    provides an extensive list of threats and security considerations
    for those implementing and deploying OAuth version 2.0 [RFC6749].
    It could be useful to attempt to derive a more abstract threat
    model from that RFC that considers threats in more generic multi-
    party contexts.  That document is perhaps too detailed to serve
    as useful generic guidance, but it does go beyond the Internet
    threat model from RFC 3552; for example, it says:

       two of the three parties involved in the OAuth protocol may
       collude to mount an attack against the 3rd party.  For
       example, the client and authorization server may be under
       control of an attacker and collude to trick a user to gain
       access to resources.

7.  Look again at how well we're securing infrastructure: Some
    attacks (e.g., against DNS or routing infrastructure) appear to
    benefit from current infrastructure mechanisms, e.g., DNSSEC and
    RPKI, not being deployed.  In the case of DNSSEC, deployment is
    still minimal despite much time having elapsed.  This suggests a
    number of different possible avenues for investigation:

    *  For any protocol dependent on infrastructure like DNS or BGP,
       we ought to analyse potential outcomes in the event the
       relevant infrastructure has been compromised

    *  Protocol designers perhaps ought to consider post-facto
       compromise-detection mechanisms in the event that it is
       infeasible to mitigate attacks on infrastructure that is not
       under local control

    *  Despite the sunk costs, it may be worth re-considering
       infrastructure security mechanisms that have not been
       deployed, and hence are ineffective

8.  Trusted Computing: Various trusted computing mechanisms allow
    placing some additional trust in a particular endpoint.  This can
    be useful to address some of the issues in this memo:

    *  A network manager of a set of devices may be assured that the
       devices have not been compromised.
    *  An outside party may be assured that someone who runs a device
       employs a particular software installation in that device, and
       that the software runs in a protected environment.

    IETF work such as TEEP [I-D.ietf-teep-architecture]
    [I-D.ietf-teep-protocol] and RATS [I-D.ietf-rats-eat] may be
    helpful in providing attestations to other nodes about a
    particular endpoint, or lifecycle management of such endpoints.

    One should note, however, that it is often not possible to fully
    protect endpoints (see, e.g., [Kocher2019] [Lipp2018]
    [I-D.taddei-smart-cless-introduction]
    [I-D.mcfadden-smart-endpoint-taxonomy-for-cless]).  And of
    course, a trusted computing environment may be set up and
    controlled by a party that is itself not trusted; that a server's
    owner runs the server in a trusted computing setting does not
    change the fact that the client and the server's owner may have
    different interests.  As a result, there is a need to prepare for
    the possibility that another party in a communication is not
    entirely trusted.

9.  Trust Boundaries: Traditional forms of communication equipment
    have morphed into today's virtualized environments, where new
    trust boundaries exist, e.g., between different virtualisation
    layers.  An application might consider itself trusted while not
    entirely trusting the underlying operating system.  A browser
    application wants to protect itself against Javascript loaded
    from a website, while the website considers itself and the
    Javascript an application that it wants to protect from the
    browser.  In general, there are multiple parties even in a single
    device, with differing interests, including some that have (or
    claim to have) the interest of the human user in mind.
10.  Develop a BCP for privacy considerations: It may be time for the
     IETF to develop a BCP for privacy considerations, possibly
     starting from [RFC6973].

11.  Re-consider protocol design "lore": It could be that this
     discussion demonstrates that it is timely to reconsider some
     protocol design "lore", as is done, for example, in
     [I-D.iab-protocol-maintenance].  More specifically, protocol
     extensibility mechanisms may inadvertently create vectors for
     abuse cases, given that designers cannot fully analyse their
     impact at the time a new protocol is defined or standardised.
     One might conclude that a lack of extensibility could be a
     virtue for some new protocols, in contrast to earlier
     assumptions.  As pointed out by one commenter, though, people
     can find ways to extend things regardless, if they feel the
     need.

12.  Consider the user perspective: [I-D.nottingham-for-the-users]
     argues that, in relevant cases where there are conflicting
     requirements, the "IETF considers end users as its highest
     priority concern."  Doing so seems consistent with the expanded
     threat model being argued for here, so may indicate that a BCP
     in that space could also be useful.

13.  Have explicit agreements: When users and their devices provide
     information to network entities, it would be beneficial to have
     an opportunity for the users to state their requirements
     regarding the use of the information provided in this way.
     While the actual use of such requirements and the willingness of
     network entities to agree to them remain to be seen, at the
     moment even the technical means of doing this are limited.  For
     instance, it would be beneficial to be able to embed usage
     requirements within popular data formats.
     As appropriate, users should be made aware of the choices made
     in a particular design, and avoid designs or products that
     protect against some threats but are wide open to other serious
     issues.  (SF doesn't know what that last bit means;-)

14.  Perform end-to-end protection via other parties: Information
     passed via another party who does not intrinsically need the
     information to perform its function should be protected end-to-
     end to its intended recipient.  This guideline is general, and
     holds equally for sending TCP/IP packets, TLS connections, or
     application-layer interactions.  As [RFC8546] notes, it is a
     useful design rule to avoid "accidental invariance" (the
     deployment of on-path devices that over time start to make
     assumptions about protocols).  However, it is also a necessary
     security design rule to avoid "accidental disclosure", where
     information originally thought to be benign and untapped over
     time becomes a significant information leak.  This guideline can
     also be applied for different aspects of security, e.g.,
     confidentiality and integrity protection, depending on what the
     specific need for information is in the other parties.

     The main reason that further study is needed here is that the
     key management consequences can be significant: once one enters
     into a multi-party world, securely managing keys for all
     entities can be so burdensome that deployment just doesn't
     happen.

5.  Guidelines

As [RFC3935] says:

   We embrace technical concepts such as decentralized control, edge-
   user empowerment and sharing of resources, because those concepts
   resonate with the core values of the IETF community.

To be more specific, this memo suggests the following guidelines for
protocol designers:
Consider first principles in protecting information and systems, rather than following a specific pattern such as protecting information in a particular way or only at a particular protocol layer. It is necessary to understand what components can be compromised, where interests may or may not be aligned, and what parties have a legitimate role in a specific information exchange or control task.

2. Consider how you depend on infrastructure. For any protocol directly or indirectly dependent on infrastructure such as DNS or BGP, analyse the potential outcomes in the event that the relevant infrastructure has been compromised. Such attacks occur in the wild [DeepDive].

3. Protocol endpoints are commonly no longer executed on what used to be understood as a host system [StackEvo]. The web and Javascript model clearly differs from traditional host models, but so do many server-side deployments, thanks to virtualisation. At protocol design time, assume that all endpoints will be run in virtualised environments where co-tenants and (sometimes) hypervisors are adversaries, and then analyse such scenarios.

4. Once you have something, do not pass it on to others without serious consideration. In other words, minimize information passed to others to guard against the potential compromise of that party. As recommended in [RFC6973], data minimisation and additional encryption can be helpful: if applications never see data, or a cleartext form of data, then they should have a harder time misbehaving. Similarly, not defining new long-term identifiers, and not exposing existing ones, helps minimise risk.

5. Minimize the passing of control functions to others. Any passing of control functions to other parties should be minimized to guard against the potential misuse of those control functions.
This applies to both technical control functions (e.g., nodes that assign resources) and process control functions (e.g., the ability to allocate numbers or develop extensions). Control functions of all kinds can become a matter of contention and power struggle, even in cases where their actual function is minimal, as we saw with the IANA transition debates.

6. Where possible, avoid centralized resources. While centralized components, resources, and functions are often simpler, there can be grave issues associated with them, for example meta-data leakage. Designers should balance the benefits of centralized resources or control points against the threats arising from them. If centralization cannot be avoided, find a way to allow the centralized resources to be selectable, depending on context and user settings.

7. Treat with suspicion the parties with which your protocol endpoints interact, even if the communications are encrypted. Other endpoints may misuse any information or control opportunity in the communication. Similarly, even endpoints within your own system need to be treated with suspicion, as some may become compromised.

8. Consider abuse-cases. Protocol developers are typically most interested in a few specific use-cases for which they need solutions. Expanding the threat model to consider adversarial behaviours [AbuseCases] calls for significant attention to be paid to potential abuses of whatever new or re-purposed technology is being considered.

9. Consider recovery from compromise or attack during protocol design - all widely used protocols will at some time be subject to successful attack, whether that is due to deployment or implementation error, or, less commonly, due to protocol design flaws.
For example, recent work on multiparty messaging security primitives [I-D.ietf-mls-architecture] considers "post-compromise security" as an inherent part of the design of that protocol.

10. Consider linkability. As discussed in [RFC6973], the ability to link or correlate different protocol messages with one another, or with external sources of information (e.g., public or private databases), can create privacy or security issues. As an example, re-use of TLS session tickets can enable an observer to associate multiple TLS sessions regardless of changes in source or destination addressing, which may reduce privacy or help a bad actor in targeting an attack. The same effects may result regardless of how protocol exchanges can be linked to one another. Protocol designs that aim to prevent such linkage may have fewer unexpected or unwanted side-effects when deployed.

When applying these guidelines, however, do not take them as a blanket reason to provide no information to anyone, to (impractically) insist on encrypting everything, or to take other extreme measures. Designers need to be aware of the different threats facing their system, and deal with the most serious ones (of which there are typically many) within their applicable resource constraints.

6. Potential changes in BCP 72/RFC 3552

BCP 72/RFC 3552 [RFC3552] defines an "Internet Threat Model" and provides guidance on writing Security Considerations sections in other RFCs.

[RFC3552] also provided a description of classic issues for the development of communications security protocols. However, in the nearly 20 years since the publication of RFC 3552, the practice of protocol design has moved on to a fair extent.

It is important to note that BCP 72 is (or should be:-) used by all IETF participants when developing protocols.
Potential changes to RFC 3552 therefore need to be brief; IETF participants cannot in general be expected to devote huge amounts of time to developing their security considerations text. Potential changes also need to be easily understood, as IETF participants from all backgrounds need to be able to use BCP 72.

In this section we provide a few initial suggested changes to BCP 72 that will need to be further developed as part of this work. (For example, it may be possible to include some of the guidelines from Section 5 as those are further developed.)

There are a range of possible updates. We could propose adding a simple observation (Section 6.1), or additionally propose further discussion about endpoint compromises and the need for system-level security analysis (Section 6.2).

Another possibility would be to add more guidance covering areas of concern, and recommendations of broadly-applicable techniques to use. One suggestion (due to others) for such material is provided in Section 6.3.

The authors of this memo believe that any updates to RFC 3552 should be relatively high-level and short. Additional documents may be needed to provide further detail.

6.1. Simple change

This is the simple addition we are suggesting. As evidenced by the OAuth quote in Section 4, it can be useful to consider the effect of compromised endpoints on those that are not compromised. It may therefore be interesting to consider the consequences that would follow from a change to [RFC3552] that recognises how the landscape has changed since 2003.

One initial, draft proposal for such a change could be:

OLD:

   In general, we assume that the end-systems engaging in a protocol exchange have not themselves been compromised. Protecting against an attack when one of the end-systems has been compromised is extraordinarily difficult.
It is, however, possible to design protocols which minimize the extent of the damage done under these circumstances.

NEW:

   In general, we assume that the end-system engaging in a protocol exchange has not itself been compromised. Protecting against an attack on a protocol implementation itself is extraordinarily difficult. It is, however, possible to design protocols which minimize the extent of the damage done when the other parties in a protocol become compromised or do not act in the best interests of the end-system implementing the protocol.

6.2. Additional discussion of compromises

The following new section could be added to discuss the capabilities required to mount an attack:

NEW:

   3.x. Other endpoint compromise

   In this attack, the other endpoints in the protocol become compromised. As a result, they can, for instance, misuse any information that the end-system implementing a protocol has sent to the compromised endpoint.

System and architecture aspects also clearly need more attention from Internet technology developers and standards organizations. Here is one possible addition:

NEW:

   The design of any Internet technology should start from an understanding of the participants in a system, their roles, and the extent to which they should have access to information and the ability to control other participants.

6.3. Guidance with regards to communications security

The following discusses some aspects that should be considered when designing a communications security protocol and that are not covered in detail in RFC 3552.

6.3.1. Limiting the time scope of compromise

[RFC3552] Section 3 says:

   The Internet environment has a fairly well understood threat model. In general, we assume that the end-systems engaging in a protocol exchange have not themselves been compromised.
Protecting against an attack when one of the end-systems has been compromised is extraordinarily difficult. It is, however, possible to design protocols which minimize the extent of the damage done under these circumstances.

Although this text is technically correct, modern protocol designs such as TLS 1.3 and MLS often try to provide a fair amount of defense against various kinds of temporary compromise. Specifically:

NEW:

   Forward Security: Many protocols are designed so that compromise of an endpoint at time T does not lead to compromise of data transmitted prior to some time T' < T. For instance, if a protocol is based on Diffie-Hellman key establishment, then compromise of the long-term keys does not lead to compromise of traffic sent prior to the compromise, provided that the DH ephemerals and traffic keys have been deleted.

   Post-Compromise Security: Conversely, if an endpoint is compromised at time T, it is often desirable to have the protocol "self-heal" so that a purely passive adversary cannot access traffic after a certain time T' > T. MLS, for instance, is designed with this property.

   Containing Partial Authentication Key Compromise: If an endpoint is stolen and its authentication secret extracted, then an attacker can impersonate that endpoint. However, there are a number of scenarios in which an attacker can obtain use of an authentication key but not the secret itself (see, for instance, [Jager2015]). It is often desirable to limit the impact of such compromises (for instance, by avoiding unlimited delegation from such keys).

   Short-lived keys: Typical TLS certificates last for months or years. There is a trend towards shorter certificate lifetimes so as to minimize the risk of exposure in the event of key compromise.
Relatedly, delegated credentials are short-lived keys that the certificate's owner has delegated for use in TLS. These help reduce private key lifetimes without sacrificing reliability.

6.3.2. Forcing active attack

[RFC3552] Section 3.2 notes that it is important to consider passive attacks. This is still valid, but needs further elaboration:

NEW:

   In general, it is much harder to mount an active attack than a passive one, both in terms of the capabilities required and the chance of being detected. A theme in recent IETF protocol design is to build systems which might have limited defense against active attackers but are strong against passive attackers, thus forcing the attacker to go active.

Examples include DTLS-SRTP and the trend towards opportunistic security. Ideally, however, protocols are built with strong defenses against active attackers. One prominent example is QUIC, which takes steps to ensure that off-path connection resets are intractable in practice.

6.3.3. Traffic analysis

[RFC3552] Section 3.2.1 describes how the absence of TLS or other transport-layer encryption may lead to obvious confidentiality violations against passive attackers. This too is still valid, but does not take into account additional aspects:

NEW:

   However, recent trends in traffic analysis indicate that encryption alone may be insufficient protection for some types of application data [I-D.wood-pearg-website-fingerprinting]. Encrypted traffic metadata, especially message size, can leak information about the underlying plaintext. DNS queries and responses are particularly at risk given their size distributions. Recent protocols account for this leakage by supporting padding.
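To make the size-leakage point concrete, the following sketch pads plaintext to fixed-size buckets before encryption, so that an observer of the (encrypted) traffic sees only a small set of possible lengths. This is illustrative only: the helper names and the 4-byte length prefix are conventions invented for this sketch, not the wire format of any of the protocols mentioned above.

```python
import math

def pad_to_bucket(message: bytes, bucket: int = 128) -> bytes:
    """Pad a message up to the next multiple of `bucket` bytes.

    A 4-byte length prefix records the original size so the padding
    can be removed; real protocols instead carry padding in a
    dedicated field (e.g. the EDNS(0) Padding option for DNS).
    """
    if bucket <= 0:
        raise ValueError("bucket size must be positive")
    framed = len(message).to_bytes(4, "big") + message
    # Round the framed length up to the next bucket boundary.
    padded_len = math.ceil(len(framed) / bucket) * bucket
    return framed + b"\x00" * (padded_len - len(framed))

def unpad(padded: bytes) -> bytes:
    """Recover the original message from a padded buffer."""
    length = int.from_bytes(padded[:4], "big")
    return padded[4:4 + length]

# Two queries of different lengths pad to the same size, so their
# encrypted forms are indistinguishable by length alone.
assert len(pad_to_bucket(b"example.com")) == len(pad_to_bucket(b"a.example"))
assert unpad(pad_to_bucket(b"example.com")) == b"example.com"
```

The bucket size trades bandwidth for privacy: larger buckets put more messages into the same size class, leaking less, at the cost of more padding bytes on the wire.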
Some examples of recent work in this area include support for padding, either generically in the transport protocol (QUIC [I-D.ietf-quic-transport] and TLS [RFC8446]) or specifically in the application protocol (the EDNS(0) padding option for DNS messages [RFC7830]).

6.3.4. Containing compromise of trust points

Many protocols are designed to depend on trusted third parties (the WebPKI is perhaps the canonical example); if those trust points misbehave, the security of the protocol can be completely compromised.

Some additional guidance in RFC 3552 might be needed to remind protocol designers of this.

NEW:

   A number of recent protocols have attempted to reduce the power of trust points that the protocol or application depends on. For instance, Certificate Transparency attempts to ensure that a CA cannot issue valid certificates without publishing them, allowing third parties to detect certain classes of misbehavior by those CAs. Similarly, Key Transparency attempts to ensure that (public) keys associated with a given entity are publicly visible and auditable in tamper-proof logs. This allows users of these keys to check them for correctness.

In the realm of software, Reproducible Builds and Binary Transparency are intended to allow a user to determine that they have received a valid copy of a binary that matches the auditable source code. Blockchain protocols such as Bitcoin and Ethereum also employ this principle of transparency, and are intended to detect misbehavior by members of the network.

7. Potential Changes in BCP 188/RFC 7258

Additional guidelines may also be necessary in BCP 188/RFC 7258 [RFC7258], which specifies how IETF work should take into account pervasive monitoring.
An initial, draft suggestion for a starting point for those changes could be to add the following paragraph after the 2nd paragraph in Section 2:

NEW:

   PM attacks include those cases where information collected by a legitimate protocol participant is misused for PM purposes. The attacks also include those cases where a protocol or network architecture results in centralized data storage or control functions relating to many users, raising the risk of such misuse.

8. Conclusions

At this stage we do not think it appropriate to claim that any strong conclusion can be reached based on the above. We do, however, claim that this is a topic worth discussion and more work.

To start with, Internet technology developers need to be better aware of the issues beyond communications security, and consider them in design. At the IETF it would be beneficial to include some of these considerations in the usual systematic security analysis of technologies under development.

In particular, when the IETF develops infrastructure technology for the Internet (such as routing or naming systems), considering the impacts of data generated by those technologies is important. Minimising data collection from users, minimising the parties who are exposed to user data, and protecting data that is relayed or stored in systems should be a priority.

A key focus area at the IETF has been the security of transport protocols, and how transport-layer security can best be used to provide the right security for various applications.
However, more work is needed on equivalently broadly deployed tools for minimising or obfuscating information provided by users to other entities, and on the use of end-to-end security through entities that are involved in the protocol exchange but do not need to know everything that is being passed through them.

Comments on the issues discussed in this memo are gladly taken either privately or on the model-t mailing list (https://www.ietf.org/mailman/listinfo/Model-t).

Some further work includes items listed in Section 5 and Section 4, as well as compiling categories of vulnerabilities that need to be addressed, examples of specific attacks, and continuing the analysis of the situation and possible new remedies.

It is also necessary to find suitable use cases that the IETF can address by further work in this space. A completely adversarial situation is not really workable, but there are situations where some parties are trustworthy and wish to co-operate to show to each other that this is really the case. In these situations data minimisation can be beneficial to both, attestation can provide additional trust, detection of incidents can alert the parties to action, and so on.

9. Informative References

[AbuseCases]
   McDermott, J. and C. Fox, "Using abuse case models for security requirements analysis", IEEE Annual Computer Security Applications Conference (ACSAC'99), https://www.acsac.org/1999/papers/wed-b-1030-john.pdf , 1999.

[AmIUnique]
   INRIA, "Am I Unique?", https://amiunique.org , 2020.

[Attitude]
   "User Perceptions of Sharing, Advertising, and Tracking", Symposium on Usable Privacy and Security (SOUPS), https://www.usenix.org/conference/soups2015/proceedings/presentation/chanchary , 2015.
1571 [avleak] Cox, J., "Leaked Documents Expose the Secretive Market for 1572 Your Web Browsing Data", 1573 https://www.vice.com/en_us/article/qjdkq7/ 1574 avast-antivirus-sells-user-browsing-data-investigation , 1575 2020. 1577 [BgpHijack] 1578 Sermpezis, P., Kotronis, V., Dainotti, A., and X. 1579 Dimitropoulos, "A survey among network operators on BGP 1580 prefix hijacking", ACM SIGCOMM Computer Communication 1581 Review 48, no. 1 (2018): 64-69, 1582 https://arxiv.org/pdf/1801.02918.pdf , 2018. 1584 [Bloatware] 1585 Gamba, G., Rashed, M., Razaghpanah, A., Tapiado, J., and 1586 N. Vallina, "An Analysis of Pre-installed Android 1587 Software", arXiv preprint arXiv:1905.02713 (2019) , 2019. 1589 [Boix2018] 1590 Gomez-Boix, A., Laperdrix, P., and B. Baudry, "Hiding in 1591 the crowd: an analysis of the effectiveness of browser 1592 fingerprinting at large scale", Proceedings of the 2018 1593 world wide web conference , 2018. 1595 [Cambridge] 1596 Isaak, J. and M. Hanna, "User Data Privacy: Facebook, 1597 Cambridge Analytica, and Privacy Protection", Computer 1598 51.8 (2018): 56-59, https://ieeexplore.ieee.org/stamp/ 1599 stamp.jsp?arnumber=8436400 , 2018. 1601 [CommandAndControl] 1602 Botnet, ., "Creating botnet C&C server. What architecture 1603 should I use? IRC? HTTP?", Stackexchange.com question, 1604 https://security.stackexchange.com/questions/100577/ 1605 creating-botnet-cc-server-what-architecture-should-i-use- 1606 irc-http , 2014. 1608 [Curated] Hammad, M., Garcia, J., and S. MaleK, "A large-scale 1609 empirical study on the effects of code obfuscations on 1610 Android apps and anti-malware products", ACM International 1611 Conference on Software Engineering 2018, 1612 https://www.ics.uci.edu/~seal/ 1613 publications/2018ICSE_Hammad.pdf , 2018. 
1615 [DeepDive] 1616 Krebs on Security, ., "A Deep Dive on the Recent 1617 Widespread DNS Hijacking Attacks", krebsonsecurity.com 1618 blog, https://krebsonsecurity.com/2019/02/a-deep-dive-on- 1619 the-recent-widespread-dns-hijacking-attacks/ , 2019. 1621 [DoubleKey] 1622 Witte, D., "Thirdparty", 1623 https://wiki.mozilla.org/Thirdparty , June 2010. 1625 [DynDDoS] York, K., "Dyn's Statement on the 10/21/2016 DNS DDoS 1626 Attack", Company statement: https://dyn.com/blog/ 1627 dyn-statement-on-10212016-ddos-attack/ , 2016. 1629 [GDPRAccess] 1630 EU, ., "Right of access by the data subject", Article 15, 1631 GDPR, https://gdpr-info.eu/art-15-gdpr/ , n.d.. 1633 [HijackDet] 1634 Schlamp, J., Holz, R., Gasser, O., Korste, A., Jacquemart, 1635 Q., Carle, G., and E. Biersack, "Investigating the nature 1636 of routing anomalies: Closing in on subprefix hijacking 1637 attacks", International Workshop on Traffic Monitoring and 1638 Analysis, pp. 173-187. Springer, Cham, 1639 https://www.net.in.tum.de/fileadmin/bibtex/publications/ 1640 papers/schlamp_TMA_1_2015.pdf , 2015. 1642 [Home] Nthala, N. and I. Flechais, "Rethinking home network 1643 security", European Workshop on Usable Security 1644 (EuroUSEC), https://ora.ox.ac.uk/objects/ 1645 uuid:e2460f50-579b-451b-b14e-b7be2decc3e1/download_file?sa 1646 fe_filename=bare_conf_EuroUSEC2018.pdf&file_format=applica 1647 tion%2Fpdf&type_of_work=Conference+item , 2018. 1649 [I-D.arkko-arch-dedr-report] 1650 Arkko, J. and T. Hardie, "Report from the IAB workshop on 1651 Design Expectations vs. Deployment Reality in Protocol 1652 Development", draft-arkko-arch-dedr-report-00 (work in 1653 progress), November 2019. 1655 [I-D.arkko-arch-infrastructure-centralisation] 1656 Arkko, J., "Centralised Architectures in Internet 1657 Infrastructure", draft-arkko-arch-infrastructure- 1658 centralisation-00 (work in progress), November 2019. 
1660 [I-D.arkko-arch-internet-threat-model] 1661 Arkko, J., "Changes in the Internet Threat Model", draft- 1662 arkko-arch-internet-threat-model-01 (work in progress), 1663 July 2019. 1665 [I-D.farrell-etm] 1666 Farrell, S., "We're gonna need a bigger threat model", 1667 draft-farrell-etm-03 (work in progress), July 2019. 1669 [I-D.iab-protocol-maintenance] 1670 Thomson, M., "The Harmful Consequences of the Robustness 1671 Principle", draft-iab-protocol-maintenance-04 (work in 1672 progress), November 2019. 1674 [I-D.ietf-httpbis-expect-ct] 1675 estark@google.com, e., "Expect-CT Extension for HTTP", 1676 draft-ietf-httpbis-expect-ct-08 (work in progress), 1677 December 2018. 1679 [I-D.ietf-mls-architecture] 1680 Omara, E., Beurdouche, B., Rescorla, E., Inguva, S., Kwon, 1681 A., and A. Duric, "The Messaging Layer Security (MLS) 1682 Architecture", draft-ietf-mls-architecture-04 (work in 1683 progress), January 2020. 1685 [I-D.ietf-quic-transport] 1686 Iyengar, J. and M. Thomson, "QUIC: A UDP-Based Multiplexed 1687 and Secure Transport", draft-ietf-quic-transport-27 (work 1688 in progress), February 2020. 1690 [I-D.ietf-rats-eat] 1691 Mandyam, G., Lundblade, L., Ballesteros, M., and J. 1692 O'Donoghue, "The Entity Attestation Token (EAT)", draft- 1693 ietf-rats-eat-03 (work in progress), February 2020. 1695 [I-D.ietf-teep-architecture] 1696 Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler, 1697 "Trusted Execution Environment Provisioning (TEEP) 1698 Architecture", draft-ietf-teep-architecture-07 (work in 1699 progress), March 2020. 1701 [I-D.ietf-teep-protocol] 1702 Tschofenig, H., Pei, M., Wheeler, D., and D. Thaler, 1703 "Trusted Execution Environment Provisioning (TEEP) 1704 Protocol", draft-ietf-teep-protocol-00 (work in progress), 1705 December 2019. 1707 [I-D.ietf-tls-esni] 1708 Rescorla, E., Oku, K., Sullivan, N., and C. Wood, 1709 "Encrypted Server Name Indication for TLS 1.3", draft- 1710 ietf-tls-esni-05 (work in progress), November 2019. 
1712 [I-D.ietf-tls-grease] 1713 Benjamin, D., "Applying GREASE to TLS Extensibility", 1714 draft-ietf-tls-grease-04 (work in progress), August 2019. 1716 [I-D.lazanski-smart-users-internet] 1717 Lazanski, D., "An Internet for Users Again", draft- 1718 lazanski-smart-users-internet-00 (work in progress), July 1719 2019. 1721 [I-D.mcfadden-smart-endpoint-taxonomy-for-cless] 1722 McFadden, M., "Endpoint Taxonomy for CLESS", draft- 1723 mcfadden-smart-endpoint-taxonomy-for-cless-01 (work in 1724 progress), February 2020. 1726 [I-D.nottingham-for-the-users] 1727 Nottingham, M., "The Internet is for End Users", draft- 1728 nottingham-for-the-users-09 (work in progress), July 2019. 1730 [I-D.taddei-smart-cless-introduction] 1731 Taddei, A., Wueest, C., Roundy, K., and D. Lazanski, 1732 "Capabilities and Limitations of an Endpoint-only Security 1733 Solution", draft-taddei-smart-cless-introduction-02 (work 1734 in progress), January 2020. 1736 [I-D.wood-pearg-website-fingerprinting] 1737 Goldberg, I., Wang, T., and C. Wood, "Network-Based 1738 Website Fingerprinting", draft-wood-pearg-website- 1739 fingerprinting-00 (work in progress), November 2019. 1741 [Jager2015] 1742 Jager, T., Schwenk, J., and J. Somorovsky, "On the 1743 Security of TLS 1.3 and QUIC Against Weaknesses in PKCS#1 1744 v1.5 Encryption", Proceedings of ACM CCS 2015, DOI 1745 10.1145/2810103.2813657, https://www.nds.rub.de/media/nds/ 1746 veroeffentlichungen/2015/08/21/Tls13QuicAttacks.pdf , 1747 October 2015. 1749 [Kocher2019] 1750 Kocher, P., Horn, J., Fogh, A., Genkin, D., Gruss, D., 1751 Haas, W., Hamburg, M., Lipp, M., Mangard, S., Prescher, 1752 T., Schwarz, M., and Y. Yarom, "Spectre Attacks: 1753 Exploiting Speculative Execution", 40th IEEE Symposium on 1754 Security and Privacy (S&P'19) , 2019. 1756 [LeakyBuckets] 1757 Chickowski, E., "Leaky Buckets: 10 Worst Amazon S3 1758 Breaches", Bitdefender blog, 1759 https://businessinsights.bitdefender.com/ 1760 worst-amazon-breaches , 2018. 
1762 [Leith2020] 1763 Leith, D., "Web Browser Privacy: What Do Browsers Say When 1764 They Phone Home?", In submission, 1765 https://www.scss.tcd.ie/Doug.Leith/pubs/ 1766 browser_privacy.pdf , March 2020. 1768 [Lipp2018] 1769 Lipp, M., Schwarz, M., Gruss, D., Prescher, T., Haas, W., 1770 Fogh, A., Horn, J., Mangard, S., Kocher, P., Genkin, D., 1771 Yarom, Y., and M. Hamburg, "Meltdown: Reading Kernel 1772 Memory from User Space", 27th USENIX Security Symposium 1773 (USENIX Security 18) , 2018. 1775 [Mailbug] Englehardt, S., Han, J., and A. Narayanan, "I never signed 1776 up for this! Privacy implications of email tracking", 1777 Proceedings on Privacy Enhancing Technologies 2018.1 1778 (2018): 109-126, https://www.degruyter.com/downloadpdf/j/ 1779 popets.2018.2018.issue-1/popets-2018-0006/ 1780 popets-2018-0006.pdf , 2018. 1782 [MeltdownAndSpectre] 1783 CISA, ., "Meltdown and Spectre Side-Channel Vulnerability 1784 Guidance", Alert (TA18-004A), 1785 https://www.us-cert.gov/ncas/alerts/TA18-004A , 2018. 1787 [Mozilla2019] 1788 Camp, D., "Firefox Now Available with Enhanced Tracking 1789 Protection by Default Plus Updates to Facebook Container, 1790 Firefox Monitor and Lockwise", The Mozilla Blog, 1791 https://blog.mozilla.org/blog/2019/06/04/firefox-now- 1792 available-with-enhanced-tracking-protection-by-default/ , 1793 June 2019. 1795 [Passwords] 1796 com, haveibeenpwned., "Pwned Passwords", Website 1797 https://haveibeenpwned.com/Passwords , 2019. 1799 [RFC1958] Carpenter, B., Ed., "Architectural Principles of the 1800 Internet", RFC 1958, DOI 10.17487/RFC1958, June 1996, 1801 . 1803 [RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC 1804 Text on Security Considerations", BCP 72, RFC 3552, 1805 DOI 10.17487/RFC3552, July 2003, 1806 . 1808 [RFC3935] Alvestrand, H., "A Mission Statement for the IETF", 1809 BCP 95, RFC 3935, DOI 10.17487/RFC3935, October 2004, 1810 . 1812 [RFC4655] Farrel, A., Vasseur, J., and J. 
              Ash, "A Path Computation Element (PCE)-Based
              Architecture", RFC 4655, DOI 10.17487/RFC4655, August
              2006, <https://www.rfc-editor.org/info/rfc4655>.

   [RFC6265]  Barth, A., "HTTP State Management Mechanism", RFC 6265,
              DOI 10.17487/RFC6265, April 2011,
              <https://www.rfc-editor.org/info/rfc6265>.

   [RFC6454]  Barth, A., "The Web Origin Concept", RFC 6454,
              DOI 10.17487/RFC6454, December 2011,
              <https://www.rfc-editor.org/info/rfc6454>.

   [RFC6480]  Lepinski, M. and S. Kent, "An Infrastructure to Support
              Secure Internet Routing", RFC 6480, DOI 10.17487/RFC6480,
              February 2012, <https://www.rfc-editor.org/info/rfc6480>.

   [RFC6749]  Hardt, D., Ed., "The OAuth 2.0 Authorization Framework",
              RFC 6749, DOI 10.17487/RFC6749, October 2012,
              <https://www.rfc-editor.org/info/rfc6749>.

   [RFC6797]  Hodges, J., Jackson, C., and A. Barth, "HTTP Strict
              Transport Security (HSTS)", RFC 6797,
              DOI 10.17487/RFC6797, November 2012,
              <https://www.rfc-editor.org/info/rfc6797>.

   [RFC6819]  Lodderstedt, T., Ed., McGloin, M., and P. Hunt, "OAuth 2.0
              Threat Model and Security Considerations", RFC 6819,
              DOI 10.17487/RFC6819, January 2013,
              <https://www.rfc-editor.org/info/rfc6819>.

   [RFC6962]  Laurie, B., Langley, A., and E. Kasper, "Certificate
              Transparency", RFC 6962, DOI 10.17487/RFC6962, June 2013,
              <https://www.rfc-editor.org/info/rfc6962>.

   [RFC6973]  Cooper, A., Tschofenig, H., Aboba, B., Peterson, J.,
              Morris, J., Hansen, M., and R. Smith, "Privacy
              Considerations for Internet Protocols", RFC 6973,
              DOI 10.17487/RFC6973, July 2013,
              <https://www.rfc-editor.org/info/rfc6973>.

   [RFC7231]  Fielding, R., Ed. and J. Reschke, Ed., "Hypertext Transfer
              Protocol (HTTP/1.1): Semantics and Content", RFC 7231,
              DOI 10.17487/RFC7231, June 2014,
              <https://www.rfc-editor.org/info/rfc7231>.

   [RFC7258]  Farrell, S. and H. Tschofenig, "Pervasive Monitoring Is an
              Attack", BCP 188, RFC 7258, DOI 10.17487/RFC7258, May
              2014, <https://www.rfc-editor.org/info/rfc7258>.

   [RFC7469]  Evans, C., Palmer, C., and R. Sleevi, "Public Key Pinning
              Extension for HTTP", RFC 7469, DOI 10.17487/RFC7469, April
              2015, <https://www.rfc-editor.org/info/rfc7469>.

   [RFC7540]  Belshe, M., Peon, R., and M. Thomson, Ed., "Hypertext
              Transfer Protocol Version 2 (HTTP/2)", RFC 7540,
              DOI 10.17487/RFC7540, May 2015,
              <https://www.rfc-editor.org/info/rfc7540>.

   [RFC7817]  Melnikov, A., "Updated Transport Layer Security (TLS)
              Server Identity Check Procedure for Email-Related
              Protocols", RFC 7817, DOI 10.17487/RFC7817, March 2016,
              <https://www.rfc-editor.org/info/rfc7817>.

   [RFC7830]  Mayrhofer, A., "The EDNS(0) Padding Option", RFC 7830,
              DOI 10.17487/RFC7830, May 2016,
              <https://www.rfc-editor.org/info/rfc7830>.

   [RFC8240]  Tschofenig, H. and S. Farrell, "Report from the Internet
              of Things Software Update (IoTSU) Workshop 2016",
              RFC 8240, DOI 10.17487/RFC8240, September 2017,
              <https://www.rfc-editor.org/info/rfc8240>.

   [RFC8446]  Rescorla, E., "The Transport Layer Security (TLS) Protocol
              Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018,
              <https://www.rfc-editor.org/info/rfc8446>.

   [RFC8484]  Hoffman, P. and P. McManus, "DNS Queries over HTTPS
              (DoH)", RFC 8484, DOI 10.17487/RFC8484, October 2018,
              <https://www.rfc-editor.org/info/rfc8484>.

   [RFC8546]  Trammell, B. and M. Kuehlewind, "The Wire Image of a
              Network Protocol", RFC 8546, DOI 10.17487/RFC8546, April
              2019, <https://www.rfc-editor.org/info/rfc8546>.

   [RFC8555]  Barnes, R., Hoffman-Andrews, J., McCarney, D., and J.
              Kasten, "Automatic Certificate Management Environment
              (ACME)", RFC 8555, DOI 10.17487/RFC8555, March 2019,
              <https://www.rfc-editor.org/info/rfc8555>.

   [Saltzer]  Saltzer, J., Reed, D., and D. Clark, "End-To-End Arguments
              in System Design", ACM TOCS, Vol 2, Number 4, pp 277-288,
              November 1984.

   [Savage]   Savage, S., "Modern Automotive Vulnerabilities: Causes,
              Disclosures, and Outcomes", USENIX, 2016.

   [SmartTV]  Malkin, N., Bernd, J., Johnson, M., and S. Egelman, "What
              Can't Data Be Used For? Privacy Expectations about Smart
              TVs in the U.S.", European Workshop on Usable Security
              (Euro USEC), https://www.ndss-symposium.org/wp-
              content/uploads/2018/06/eurousec2018_16_Malkin_paper.pdf,
              2018.

   [StackEvo]
              Trammell, B., Thomson, M., Howard, L., and T. Hardie,
              "What Is an Endpoint?", Unpublished work,
              https://github.com/stackevo/endpoint-draft/blob/master/
              draft-trammell-whats-an-endpoint.md, 2017.

   [Sybil]    Viswanath, B., Post, A., Gummadi, K., and A. Mislove, "An
              analysis of social network-based sybil defenses", ACM
              SIGCOMM Computer Communication Review 41(4), 363-374,
              https://conferences.sigcomm.org/sigcomm/2010/papers/
              sigcomm/p363.pdf, 2011.

   [TargetAttack]
              Osborne, C., "How hackers stole millions of credit card
              records from Target", ZDNET,
              https://www.zdnet.com/article/how-hackers-stole-millions-
              of-credit-card-records-from-target/, 2014.

   [Toys]     Chu, G., Apthorpe, N., and N. Feamster, "Security and
              Privacy Analyses of Internet of Things Children's Toys",
              IEEE Internet of Things Journal 6.1 (2019): 978-985,
              https://arxiv.org/pdf/1805.02751.pdf, 2019.

   [Tracking]
              Ermakova, T., Fabian, B., Bender, B., and K. Klimek, "Web
              Tracking - A Literature Review on the State of Research",
              Proceedings of the 51st Hawaii International Conference on
              System Sciences, https://scholarspace.manoa.hawaii.edu/
              bitstream/10125/50485/paper0598.pdf, 2018.

   [Troll]    Stewart, L., Arif, A., and K. Starbird, "Examining trolls
              and polarization with a retweet network", ACM Workshop on
              Misinformation and Misbehavior Mining on the Web,
              https://faculty.washington.edu/kstarbi/
              examining-trolls-polarization.pdf, 2018.

   [Unread]   Obar, J. and A. Oeldorf, "The biggest lie on the
              internet: Ignoring the privacy policies and terms of
              service policies of social networking services",
              Information, Communication and Society (2018): 1-20,
              2018.

   [Vpns]     Khan, M., DeBlasio, J., Voelker, G., Snoeren, A., Kanich,
              C., and N. Vallina, "An empirical analysis of the
              commercial VPN ecosystem", ACM Internet Measurement
              Conference 2018 (pp. 443-456),
              https://eprints.networks.imdea.org/1886/1/
              imc18-final198.pdf, 2018.

Appendix A.  Contributors

   Eric Rescorla and Chris Wood provided much of the text in
   Section 2.3.1.4, item 2 of Section 4, and Section 6.3.

Appendix B.  Acknowledgements

   The authors would like to thank the IAB:

   Alissa Cooper, Wes Hardaker, Ted Hardie, Christian Huitema, Zhenbin
   Li, Erik Nordmark, Mark Nottingham, Melinda Shore, Jeff Tantsura,
   Martin Thomson, Brian Trammell, Mirja Kuhlewind, and Colin Perkins.

   The authors would also like to thank the participants of the IETF
   SAAG meeting where this topic was discussed:

   Harald Alvestrand, Roman Danyliw, Daniel Kahn Gillmor, Wes Hardaker,
   Bret Jordan, Ben Kaduk, Dominique Lazanski, Eliot Lear, Lawrence
   Lundblade, Kathleen Moriarty, Kirsty Paine, Eric Rescorla, Ali
   Rezaki, Mohit Sethi, Ben Schwartz, Dave Thaler, Paul Turner, David
   Waltemire, and Jeffrey Yaskin.

   The authors would also like to thank the participants of the IAB 2019
   DEDR workshop:

   Tuomas Aura, Vittorio Bertola, Carsten Bormann, Stephane Bortzmeyer,
   Alissa Cooper, Hannu Flinck, Carl Gahnberg, Phillip Hallam-Baker, Ted
   Hardie, Paul Hoffman, Christian Huitema, Geoff Huston, Konstantinos
   Komaitis, Mirja Kuhlewind, Dirk Kutscher, Zhenbin Li, Julien
   Maisonneuve, John Mattson, Moritz Muller, Joerg Ott, Lucas Pardue,
   Jim Reid, Jan-Frederik Rieckers, Mohit Sethi, Melinda Shore, Jonne
   Soininen, Andrew Sullivan, and Brian Trammell.

   The authors would also like to thank the participants of the November
   2016 meeting at the IETF:

   Carsten Bormann, Randy Bush, Tommy C, Roman Danyliw, Ted Hardie,
   Christian Huitema, Ben Kaduk, Dirk Kutscher, Dominique Lazanski, Eric
   Rescorla, Ali Rezaki, Mohit Sethi, Melinda Shore, Martin Thomson, and
   Robin Wilton ... (missing many people... did we have minutes other
   than the list of actions?) ...

   Thanks for specific comments on this text to: Ronald van der Pol.

   Finally, the authors would like to thank numerous other people for
   insightful comments and discussions in this space.
Authors' Addresses

   Jari Arkko
   Ericsson
   Valitie 1B
   Kauniainen
   Finland

   Email: jari.arkko@piuha.net

   Stephen Farrell
   Trinity College Dublin
   College Green
   Dublin
   Ireland

   Email: stephen.farrell@cs.tcd.ie