Network Working Group                                          J. Arkko
Internet-Draft                                                  Ericsson
Intended status: Informational                                S. Farrell
Expires: 9 August 2020                            Trinity College Dublin
                                                          6 February 2020

          Challenges and Changes in the Internet Threat Model
                   draft-arkko-farrell-arch-model-t-02

Abstract

   Communications security has been at the center of many security improvements in the Internet.  The goal has been to ensure that communications are protected against outside observers and attackers.

   This memo suggests that the existing RFC 3552 threat model, while important and still valid, is no longer alone sufficient to cater for the pressing security and privacy issues seen on the Internet today.  For instance, it is often also necessary to protect against endpoints that are compromised, malicious, or whose interests simply do not align with the interests of users.  While such protection is difficult, there are some measures that can be taken and we argue that investigation of these issues is warranted.

   It is particularly important to ensure that, as we continue to develop Internet technology, non-communications-security-related threats and privacy issues are properly understood.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 9 August 2020.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Observations
     2.1.  Communications Security Improvements
     2.2.  Beyond Communications Security
     2.3.  Examples
       2.3.1.  Deliberate adversarial behaviour in applications
       2.3.2.  Inadvertent adversarial behaviours
   3.  Analysis
     3.1.  The Role of End-to-end
     3.2.  Trusted networks
       3.2.1.  Even closed networks can have compromised nodes
     3.3.  Balancing Threats
   4.  Areas requiring more study
   5.  Guidelines
   6.  Potential changes in BCP 72/RFC 3552
   7.  Potential Changes in BCP 188/RFC 7258
   8.  Conclusions
   9.  Informative References
   Appendix A.  Acknowledgements
   Authors' Addresses

1.  Introduction

   Communications security has been at the center of many security improvements in the Internet.  The goal has been to ensure that communications are protected against outside observers and attackers.  At the IETF, this approach has been formalized in BCP 72 [RFC3552], which defined the Internet threat model in 2003.

   The purpose of a threat model is to outline what threats exist in order to assist the protocol designer.  But RFC 3552 also ruled some threats to be in scope and of primary interest, and some threats out of scope [RFC3552]:

      The Internet environment has a fairly well understood threat model.  In general, we assume that the end-systems engaging in a protocol exchange have not themselves been compromised.  Protecting against an attack when one of the end-systems has been compromised is extraordinarily difficult.  It is, however, possible to design protocols which minimize the extent of the damage done under these circumstances.

      By contrast, we assume that the attacker has nearly complete control of the communications channel over which the end-systems communicate.  This means that the attacker can read any PDU (Protocol Data Unit) on the network and undetectably remove, change, or inject forged packets onto the wire.

   However, the communications-security-only threat model is becoming outdated.  Some of the causes for this are:

   *  Success!  Advances in protecting most of our communications with strong cryptographic means.  This has resulted in much improved communications security, but also highlights the need for addressing other, remaining issues.  This is not to say that communications security is not important - it still is, and improvements are still needed.  Not all communications have been protected, and even for the already protected communications, not all of their aspects have been fully protected.  Fortunately, there are ongoing projects working on improvements.

   *  Adversaries have increased their pressure against other avenues of attack, from supply-chain attacks, to compromising devices, to legal coercion of centralized endpoints in conversations.

   *  New adversaries and risks have arisen, e.g., due to the creation of large centralized information sources.

   *  While communications security does seem to be required to protect privacy, more is needed, especially if endpoints choose to act against the interests of their peers or users.

   In short, attacks are migrating towards the currently easier targets, which no longer necessarily include direct attacks on traffic flows.  In addition, trading information about users and the ability to influence them has become a common practice for many Internet services, often without users understanding those practices.

   This memo suggests that the existing threat model, while important and still valid, is no longer alone sufficient to cater for the pressing security and privacy issues on the Internet.
For instance, 146 while it continues to be very important to protect Internet 147 communications against outsiders, it is also necessary to protect 148 systems against endpoints that are compromised, malicious, or whose 149 interests simply do not align with the interests of the users. 151 Of course, there are many trade-offs in the Internet on who one 152 chooses to interact with and why or how. It is not the role of this 153 memo to dictate those choices. But it is important that we 154 understand the implications of different practices. It is also 155 important that when it comes to basic Internet infrastructure, our 156 chosen technologies lead to minimal exposure with respect to the non- 157 communications threats. 159 It is particularly important to ensure that non-communications 160 security related threats are properly understood for any new Internet 161 technology. While the consideration of these issues is relatively 162 new in the IETF, this memo provides some initial ideas about 163 potential broader threat models to consider when designing protocols 164 for the Internet or when trying to defend against pervasive 165 monitoring. Further down the road, updated threat models could 166 result in changes in BCP 72 [RFC3552] (guidelines for writing 167 security considerations) and BCP 188 [RFC7258] (pervasive 168 monitoring), to include proper consideration of non-communications 169 security threats. 171 It may also be necessary to have dedicated guidance on how systems 172 design and architecture affect security. The sole consideration of 173 communications security aspects in designing Internet protocols may 174 lead to accidental or increased impact of security issues elsewhere. 175 For instance, allowing a participant to unnecessarily collect or 176 receive information may lead to a similar effect as described in 177 [RFC8546] for protocols: over time, unnecessary information will get 178 used with all the associated downsides, regardless of what deployment 179 expectations there were during protocol design. 181 This memo does not stand alone. To begin with, it is a merge of 182 earlier work by the two authors [I-D.farrell-etm] [I-D.arkko-arch- 183 internet-threat-model]. There are also other documents discussing 184 this overall space, e.g. [I-D.lazanski-smart-users-internet] [I- 185 D.arkko-arch-dedr-report]. 187 The authors of this memo envisage independent development of each of 188 those (and other work) with an eventual goal to extract an updated 189 (but usefully brief!) description of an extended threat model from 190 the collection of works. We consider it an open question whether 191 this memo, or any of the others, would be usefully published as an 192 RFC. 194 The rest of this memo is organized as follows. Section 2 makes some 195 observations about the situation, with respect to communications 196 security and beyond. The section also provides a number of real- 197 world examples. 199 Section 3 discusses some high-level implications that can be drawn, 200 such as the need to consider what the "ends" really are in an "end- 201 to-end" communication. 203 Section 4 lists some areas where additional work is required before 204 we could feel confident in crafting guidelines, whereas Section 5 205 presents what we think are perhaps already credible potential 206 guidelines - both from the point of view of a system design, as well 207 as from the point of IETF procedures and recommended analysis 208 procedures when designing new protocols. 
Section 6 and Section 7 tentatively suggest some changes to current IETF BCPs in this space.

   Comments are solicited on these and other aspects of this document.  The best place for discussion is on the model-t list (https://www.ietf.org/mailman/listinfo/model-t).

   Finally, Section 8 draws some conclusions for next steps.

2.  Observations

2.1.  Communications Security Improvements

   That we can even ask how the threat model should be improved is itself a result of progress already made: the fraction of Internet traffic that is cryptographically protected has grown tremendously in the last few years.  Several factors have contributed to this change, from the Snowden revelations, to business reasons, to better available technology such as HTTP/2 [RFC7540], TLS 1.3 [RFC8446], and QUIC [I-D.ietf-quic-transport].

   In many networks, the majority of traffic has flipped from being cleartext to being encrypted.  Reaching the level of (almost) all traffic being encrypted is no longer something unthinkable but rather a likely outcome in a few years.

   At the same time, technology developments and policy choices have driven the scope of cryptographic protection from protecting only the pure payload to protecting much of the rest as well, including far more header and meta-data information than was protected before.  For instance, efforts are ongoing in the IETF to assist encrypting transport headers [I-D.ietf-quic-transport], server domain name information in TLS [I-D.ietf-tls-esni], and domain name queries [RFC8484].

   There have also been improvements to ensure that the security protocols that are in use actually have suitable credentials and that those credentials have not been compromised; see, for instance, Let's Encrypt [RFC8555], HSTS [RFC6797], HPKP [RFC7469], and Expect-CT [I-D.ietf-httpbis-expect-ct].

   This is not to say that all problems in communications security have been resolved - far from it.  But the situation is definitely different from what it was a few years ago.  Remaining issues will be and are worked on; the fight between defense and attack will also continue.  Communications security will stay at the top of the agenda in any Internet technology development.

2.2.  Beyond Communications Security

   There are, however, significant issues beyond communications security in the Internet.  To begin with, it is not necessarily clear that one can trust all the endpoints in any protocol interaction.

   Of course, client endpoint implementations were never fully trusted, but the environments in which those endpoints exist are changing.  For instance, users may no longer have as much control over their own devices as they used to, due to manufacturer-controlled operating system installations and locked device ecosystems.  And within those ecosystems, even the applications that are available tend to have privileges that users might not themselves wish to grant, such as excessive rights to media, location, and peripherals.  There are also dedicated efforts by various authorities to hack end-user devices as a means of intercepting data about the user.

   The situation is different, but not necessarily better, on the server side.  The pattern of communications in today's Internet is almost always via a third party that has at least as much information as the other parties have.
For instance, these third parties are typically endpoints for any transport layer security connections, and are able to see much of the communications or other messaging in cleartext.  There are some exceptions, of course, e.g., messaging applications with end-to-end confidentiality protection.

   With the growth in trading of users' information by many of these third parties, it becomes necessary to take precautions against endpoints that are compromised, malicious, or whose interests simply do not align with the interests of the users.

   Specifically, the following issues need attention:

   *  Security of users' devices and the ability of the user to control their own equipment.

   *  Leaks and attacks related to data at rest.

   *  Coercion of some endpoints to reveal information to authorities or surveillance organizations, sometimes even in an extra-territorial fashion.

   *  Application design patterns that result in cleartext information passing through a third party or the application owner.

   *  Involvement of entities that have no direct need to be involved in order to provide the service that the user is after.

   *  Network and application architectures that result in a lot of information collected in a (logically) central location.

   *  Leverage and control points outside the hands of the users or end-user device owners.

   For instance, while e-mail transport security [RFC7817] has become much more widely deployed in recent years, progress in securing e-mail messages between users has been much slower.  This has led to a situation where e-mail content is considered a critical resource by some mail service providers who use the content for machine learning, advertisement targeting, and other purposes unrelated to message delivery.  Equally, however, it is unclear how some useful anti-spam techniques could be deployed in an end-to-end encrypted mail universe (with today's end-to-end mail security protocols), and there are many significant challenges should one desire to deploy end-to-end email security at scale.

   The Domain Name System (DNS) shows signs of ageing but, due to the legacy of deployed systems, has changed very slowly.  Newer technology [RFC8484] developed at the IETF enables DNS queries to be performed with confidentiality and authentication (of a recursive resolver), but its initial deployment is happening mostly in browsers that use global DNS resolver services, such as Cloudflare's 1.1.1.1 or Google's 8.8.8.8.  This results in faster evolution and better security for end users.

   However, if one steps back and considers the potential security and privacy effects of these developments, the outcome could appear different.  While the security and confidentiality of the protocol exchanges improves with the introduction of this new technology, at the same time this could lead to a move from using (what appears to be) a large worldwide distributed set of DNS resolvers to a far smaller set of centralised global resolvers.  While these resolvers are very well maintained (and a great service), they are potential high-value targets for pervasive monitoring and Denial-of-Service (DoS) attacks.  In 2016, for example, DoS attacks were launched against Dyn [DynDDoS], then one of the largest DNS providers, leading to some outages.
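   To illustrate how little is needed for a client to move its queries to one of these centralised resolvers, the sketch below performs an encrypted lookup against a public DNS-over-HTTPS service.  The resolver URL and the JSON response format are assumptions about one particular deployed service at the time of writing, not something defined by [RFC8484] itself, which specifies DNS wire format over HTTPS.

      # Minimal sketch: an encrypted DNS lookup via a public DoH
      # resolver's JSON front-end (an assumed, service-specific
      # interface; a production stub resolver would use the RFC 8484
      # wire format instead).
      import json
      import urllib.request

      def doh_lookup(name, rtype="A",
                     resolver="https://cloudflare-dns.com/dns-query"):
          url = "{}?name={}&type={}".format(resolver, name, rtype)
          req = urllib.request.Request(
              url, headers={"accept": "application/dns-json"})
          with urllib.request.urlopen(req) as resp:
              answer = json.load(resp)
          # Every query now flows to, and is visible at, this one
          # resolver, however well it protects the path to it.
          return [a.get("data") for a in answer.get("Answer", [])]

      if __name__ == "__main__":
          print(doh_lookup("www.ietf.org"))

   The point is not that this is bad engineering - the exchange is simple and well protected against on-path observers - but that the metadata for every such query accrues at whichever resolver the application chose.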
It is difficult to imagine that DNS resolvers 344 wouldn't be a target in many future attacks or pervasive monitoring 345 projects. 347 Unfortunately, there is little that even large service providers can 348 do to not be a DDoS target, (though anycast and other DDoS 349 mitigations can certainly help when one is targetted), nor to refuse 350 authority-sanctioned pervasive monitoring. As a result it seems that 351 a reasonable defense strategy may be to aim for outcomes where such 352 highly centralised control points are unecessary or don't handle 353 sensitive data. (Recalling that with the DNS, meta-data about the 354 requestor and the act of requesting an answer are what is potentially 355 sensitive, rather than the content of the answer.) 357 There are other examples of the perils of centralised solutions in 358 Internet infrastructure. The DNS example involves an interesting 359 combination of information flows (who is asking for what domain 360 names) as well as a potential ability to exert control (what domains 361 will actually resolve to an address). Routing systems are primarily 362 about control. While there are intra-domain centralized routing 363 solutions (such as PCE [RFC4655]), a control within a single 364 administrative domain is usually not the kind of centralization that 365 we would be worried about. Global centralization would be much more 366 concerning. Fortunately, global Internet routing is performed among 367 peers. However, controls could be introduced even in this global, 368 distributed system. To secure some of the control exchanges, the 369 Resource Public Key Infrastructure (RPKI) system ([RFC6480]) allows 370 selected Certification Authorities (CAs) to help drive decisions 371 about which participants in the routing infrastructure can make what 372 claims. If this system were globally centralized, it would be a 373 concern, but again, fortunately, current designs involve at least 374 regional distribution. 376 In general, many recent attacks relate more to information than 377 communications. For instance, personal information leaks typically 378 happen via information stored on a compromised server rather than 379 capturing communications. There is little hope that such attacks can 380 be prevented entirely. Again, the best course of action seems to be 381 avoid the disclosure of information in the first place, or at least 382 to not perform that in a manner that makes it possible that others 383 can readily use the information. 385 2.3. Examples 387 2.3.1. Deliberate adversarial behaviour in applications 389 In this section we describe some documented examples of deliberate 390 adversarial behaviour by applications that could affect Internet 391 protocol development. The adversarial behaviours described below 392 involve various kinds of attack, varying from simple fraud, to 393 credential theft, surveillance and contributing to DDoS attacks. 394 This is not intended to be a comprehensive nor complete survey, but 395 to motivate us to consider deliberate adversarial behaviour by 396 applications. 398 While we have these examples of deliberate adversarial behaviour, 399 there are also many examples of application developers doing their 400 best to protect the security and privacy of their users or customers. 401 That's just the same as the case today where we need to consider in- 402 network actors as potential adversaries despite the many examples of 403 network operators who do act primarily in the best interests of their 404 users. 406 2.3.1.1. 
Malware in curated application stores 408 Despite the best efforts of curators, so-called App-Stores frequently 409 distribute malware of many kinds and one recent study [Curated] 410 claims that simple obfuscation enables malware to avoid detection by 411 even sophisticated operators. Given the scale of these deployments, 412 distribution of even a small percentage of malware-infected 413 applictions can affect a huge number of people. 415 2.3.1.2. Virtual private networks (VPNs) 417 Virtual private networks (VPNs) are supposed to hide user traffic to 418 various degrees depending on the particular technology chosen by the 419 VPN provider. However, not all VPNs do what they say, some for 420 example misrepresenting the countries in which they provide vantage 421 points [Vpns]. 423 2.3.1.3. Compromised (home) networks 425 What we normally might consider network devices such as home routers 426 do also run applications that can end up being adversarial, for 427 example running DNS and DHCP attacks from home routers targeting 428 other devices in the home. One study [Home] reports on a 2011 attack 429 that affected 4.5 million DSL modems in Brazil. The absence of 430 software update [RFC8240] has been a major cause of these issues and 431 rises to the level that considering this as intentional behaviour by 432 device vendors who have chosen this path is warranted. 434 2.3.1.4. Web browsers 436 Tracking of users in order to support advertising based business 437 models is ubiquitous on the Internet today. HTTP header fields (such 438 as cookies) are commonly used for such tracking, as are structures 439 within the content of HTTP responses such as links to 1x1 pixel 440 images and (ab)use of Javascript APIs offered by browsers [Tracking]. 442 While some people may be sanguine about this kind of tracking, others 443 consider this behaviour unwelcome, when or if they are informed that 444 it happens, [Attitude] though the evidence here seems somewhat harder 445 to interpret and many studies (that we have found to date) involve 446 small numbers of users. Historically, browsers have not made this 447 kind of tracking visible and have enabled it by default, though some 448 recent browser versions are starting to enable visibility and 449 blocking of some kinds of tracking. Browsers are also increasingly 450 imposing more stringent requirements on plug-ins for varied security 451 reasons. 453 2.3.1.5. Web site policy deception 455 Many web sites today provide some form of privacy policy and terms of 456 service, that are known to be mostly unread [Unread]. This implies 457 that, legal fiction aside, users of those sites have not in reality 458 agreed to the specific terms published and so users are therefore 459 highly exposed to being exploited by web sites, for example 460 [Cambridge] is a recent well-publicised case where a service provider 461 abused the data of 87 million users via a partnership. While many 462 web site operators claim that they care deeply about privacy, it 463 seems prudent to assume that some (or most?) do not in fact care 464 about user privacy, or at least not in ways with which many of their 465 users would agree. And of course, today's web sites are actually 466 mostly fairly complex web applications and are no longer static sets 467 of HTML files, so calling these "web sites" is perhaps a misnomer, 468 but considered as web applications, that may for example link in 469 advertising networks, it seems clear that many exist that are 470 adversarial. 472 2.3.1.6. 
Tracking bugs in mail 474 Some mail user agents (MUAs) render HTML content by default (with a 475 subset not allowing that to be turned off, perhaps particularly on 476 mobile devices) and thus enable the same kind of adversarial tracking 477 seen on the web. Attempts at such intentional tracking are also seen 478 many times per day by email users - in one study [Mailbug] the 479 authors estimated that 62% of leakage to third parties was 480 intentional, for example if leaked data included a hash of the 481 recipient email address. 483 2.3.1.7. Troll farms in online social networks 485 Online social network applications/platforms are well-known to be 486 vulnerable to troll farms, sometimes with tragic consequences where 487 organised/paid sets of users deliberately abuse the application 488 platform for reasons invisible to a normal user. For-profit 489 companies building online social networks are well aware that subsets 490 of their "normal" users are anything but. In one US study, [Troll] 491 sets of troll accounts were roughly equally distributed on both sides 492 of a controversial discussion. While Internet protocol designers do 493 sometimes consider sybil attacks [Sybil], arguably we have not 494 provided mechanisms to handle such attacks sufficiently well, 495 especially when they occur within walled-gardens. Equally, one can 496 make the case that some online social networks, at some points in 497 their evolution, appear to have prioritised counts of active users so 498 highly that they have failed to invest sufficient effort for 499 detection of such troll farms. 501 2.3.1.8. Smart televisions 503 There have been examples of so-called "smart" televisions spying on 504 their owners and one survey of user attitudes [SmartTV] found "broad 505 agreement was that it is unacceptable for the data to be repurposed 506 or shared" although the level of user understanding may be 507 questionable. What is clear though is that such devices generally 508 have not provided controls for their owners that would allow them to 509 meaningfully make a decision as to whether or not they want to share 510 such data. 512 2.3.1.9. Internet of things 514 Internet of Things (IoT) devices (which might be "so-called Internet 515 of Things" as all devices were already things:-) have been found 516 deficient when their security and privacy aspects were analysed, for 517 example children's toys [Toys]. While in some cases this may be due 518 to incompetence rather than being deliberately adversarial behaviour, 519 the levels of incompetence frequently seen imply these aspects have 520 simply not been considered a priority. 522 2.3.1.10. Attacks leveraging compromised high-level DNS infrastructure 524 Recent attacks [DeepDive] against DNS infrastructure enable 525 subsequent targetted attacks on specific application layer sources or 526 destinations. The general method appears to be to attack DNS 527 infrastructure, in these cases infrastructure that is towards the top 528 of the DNS naming hierarchy and "far" from the presumed targets, in 529 order to be able to fake DNS responses to a PKI, thereby acquiring 530 TLS server certificates so as to subsequently attack TLS connections 531 from clients to services (with clients directed to an attacker-owned 532 server via additional fake DNS responses). 534 Attackers in these cases seem well resourced and patient - with 535 "practice" runs over months and with attack durations being 536 infrequent and short (e.g. 1 hour) before the attacker withdraws. 
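   One partial countermeasure available to a targeted domain owner today is to watch Certificate Transparency logs (see also the transparency discussion in Section 4) for certificates that were never requested.  The sketch below polls the crt.sh aggregator; that URL, its JSON output, and the field names shown are assumptions about one particular third-party service rather than an IETF-defined interface, and a production monitor would talk to RFC 6962 logs directly and keep state about which issuances were legitimate.

      # Minimal sketch: post-facto detection of unexpected certificate
      # issuance for a domain via the crt.sh CT-log aggregator
      # (assumed interface; field names are illustrative).
      import json
      import urllib.request

      def recent_certificates(domain):
          url = "https://crt.sh/?q=%25.{}&output=json".format(domain)
          with urllib.request.urlopen(url) as resp:
              entries = json.load(resp)
          for entry in entries:
              # Use .get() since the exact schema is not standardised.
              yield (entry.get("not_before"),
                     entry.get("issuer_name"),
                     entry.get("name_value"))

      if __name__ == "__main__":
          for issued, issuer, names in recent_certificates("example.com"):
              print(issued, issuer, names)

   An unexpected entry does not prevent the attack described here, but it can shorten the window during which a mis-issued certificate and any harvested credentials remain useful.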
These are sophisticated multi-protocol attacks, where weaknesses related to deployment of one protocol (DNS) bootstrap attacks on another protocol (e.g. IMAP/TLS), via abuse of a third protocol (ACME), partly in order to capture user IMAP login credentials, so as to be able to harvest message store content from a real message store.

   The fact that many mail clients regularly poll their message store means that a 1-hour attack is quite likely to harvest many cleartext passwords or crackable password hashes.  The real IMAP server in such a case just sees fewer connections during the "live" attack, and some additional connections later.  Even heavy email users who might notice a slight gap in email arrivals would likely attribute that to some network or service outage.

   In many of these cases the paucity of DNSSEC-signed zones (about 1% of existing zones) and the fact that many resolvers do not enforce DNSSEC validation (e.g., in some mobile operating systems) assisted the attackers.

   It is also notable that some of the personnel dealing with these attacks against infrastructure entities are authors of RFCs and Internet-Drafts.  That we haven't provided protocol tools that better protect against these kinds of attack ought to hit "close to home" for the IETF.

   In terms of the overall argument being made here, the PKI and DNS interactions, and the last step in the "live" attack, all involve interaction with a deliberately adversarial application.  Later, use of acquired login credentials to harvest message store content involves an adversarial client application.  In all cases, a TLS implementation's PKI and TLS protocol code will see the fake endpoints as protocol-valid, even if, in the real world, they are clearly fake.  This appears to be a good argument that our current threat model is lacking in some respect(s), even as applied to our currently most important security protocol (TLS).

2.3.1.11.  BGP hijacking

   There is a clear history of BGP hijacking [BgpHijack] being used to ensure endpoints connect to adversarial applications.  As in the previous example, such hijacks can be used to trick a PKI into issuing a certificate for a fake entity.  Indeed, one study [HijackDet] used the emergence of new web server TLS key pairs during the event (detected via Internet-wide scans) as a distinguisher between one form of deliberate BGP hijacking and inadvertent route leaks.

2.3.1.12.  Anti-virus vendor selling user clickstream data

   An anti-virus product vendor was feeding user clickstream data to a subsidiary that then sold on supposedly "anonymised" but highly detailed data to unrelated parties [avleak].  After browser makers had removed that vendor's browser extension from their online stores, the anti-virus product itself apparently took over data collection, initially only offering users an opt-out, with the result that apparently few users were even aware of the data collection, never mind the subsequent clickstream sales.  Very shortly after publication of [avleak], the anti-virus vendor announced they were closing down the subsidiary.

2.3.2.
Inadvertent adversarial behaviours 601 Not all adversarial behaviour by applications is deliberate, some is 602 likely due to various levels of carelessness (some quite 603 understandable, others not) and/or due to erroneous assumptions about 604 the environments in which those applications (now) run. 606 We very briefly list some such cases: 608 * Application abuse for command and control, for example, use of IRC 609 or apache logs for [CommandAndControl] 611 * Carelessly leaky data stores [LeakyBuckets], for example, lots of 612 Amazon S3 leaks showing that careless admins can too easily cause 613 application server data to become available to adversaries 615 * Virtualisation exposing secrets, for example, Meltdown and Spectre 616 [MeltdownAndSpectre] [Kocher2019] [Lipp2018] and other similar 617 side-channel attacks. 619 * Compromised badly-maintained web sites, that for example, have led 620 to massive online [Passwords]. 622 * Supply-chain attacks, for example, the [TargetAttack] or malware 623 within pre-installed applications on Android phones [Bloatware]. 625 * Breaches of major service providers, that many of us might have 626 assumed would be sufficiently capable to be the best large-scale 627 "Identity providers", for example: 629 - 3 billion accounts: https://www.wired.com/story/yahoo-breach- 630 three-billion-accounts/ 632 - "up to 600M" account passwords stored in clear: 633 https://www.pcmag.com/news/367319/facebook-stored-up-to-600m- 634 user-passwords-in-plain-text 636 - many millions at risk: https://www.zdnet.com/article/us-telcos- 637 caught-selling-your-location-data-again-senator-demands-new- 638 laws/ 640 - 50 million accounts: https://www.cnet.com/news/facebook-breach- 641 affected-50-million-people/ 643 - 14 million accounts: https://www.zdnet.com/article/millions- 644 verizon-customer-records-israeli-data/ 646 - "hundreds of thousands" of accounts: 647 https://www.wsj.com/articles/google-exposed-user-data-feared- 648 repercussions-of-disclosing-to-public-1539017194 650 - unknown numbers, some email content exposed: 651 https://motherboard.vice.com/en_us/article/ywyz3x/hackers- 652 could-read-your-hotmail-msn-outlook-microsoft-customer-support 654 * Breaches of smaller service providers: Too many to enumerate, 655 sadly 657 3. Analysis 659 3.1. The Role of End-to-end 661 [RFC1958] notes that "end-to-end functions can best be realised by 662 end-to-end protocols": 664 The basic argument is that, as a first principle, certain required 665 end-to-end functions can only be performed correctly by the end- 666 systems themselves. A specific case is that any network, however 667 carefully designed, will be subject to failures of transmission at 668 some statistically determined rate. The best way to cope with 669 this is to accept it, and give responsibility for the integrity of 670 communication to the end systems. Another specific case is end- 671 to-end security. 673 The "end-to-end argument" was originally described by Saltzer et al 674 [Saltzer]. They said: 676 The function in question can completely and correctly be 677 implemented only with the knowledge and help of the application 678 standing at the endpoints of the communication system. Therefore, 679 providing that questioned function as a feature of the 680 communication system itself is not possible. 682 These functional arguments align with other, practical arguments 683 about the evolution of the Internet under the end-to-end model. 
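   The functional point can be made concrete with a small sketch: if confidentiality is implemented at the communicating applications themselves, a party that merely relays or stores the messages learns nothing from their content, whatever its own interests.  The example below is a minimal illustration using symmetric encryption from the Python "cryptography" package; it assumes the two real ends already share a key out of band, and that key management is exactly the hard part a real end-to-end design has to solve.

      # Minimal sketch: the confidentiality "function" implemented
      # only at the real ends.  The relay stands in for any third
      # party that stores or forwards the message; it sees ciphertext
      # and metadata, not content.
      from cryptography.fernet import Fernet

      shared_key = Fernet.generate_key()   # provisioning not shown

      def sender(plaintext: bytes) -> bytes:
          return Fernet(shared_key).encrypt(plaintext)

      def relay(ciphertext: bytes) -> bytes:
          # An honest-but-curious (or compromised) intermediary: it can
          # log, delay, or drop the message, but not read or silently
          # alter it.
          print("relay saw {} opaque bytes".format(len(ciphertext)))
          return ciphertext

      def recipient(ciphertext: bytes) -> bytes:
          return Fernet(shared_key).decrypt(ciphertext)

      if __name__ == "__main__":
          print(recipient(relay(sender(b"meet at 12:00"))))

   The practical arguments for the end-to-end model concern evolution as much as security.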
The 684 endpoints evolve quickly, often with simply having one party change 685 the necessary software on both ends. Whereas waiting for network 686 upgrades would involve potentially a large number of parties from 687 application owners to multiple network operators. 689 The end-to-end model supports permissionless innovation where new 690 innovation can flourish in the Internet without excessive wait for 691 other parties to act. 693 But the details matter. What is considered an endpoint? What 694 characteristics of Internet are we trying to optimize? This memo 695 makes the argument that, for security purposes, there is a 696 significant distinction between actual endpoints from a user's 697 interaction perspective (e.g., another user) and from a system 698 perspective (e.g., a third party relaying a message). 700 This memo proposes to focus on the distinction between "real ends" 701 and other endpoints to guide the development of protocols. A 702 conversation between one "real end" to another "real end" has 703 necessarily different security needs than a conversation between, 704 say, one of the "real ends" and a component in a larger system. The 705 end-to-end argument is used primarily for the design of one protocol. 706 The security of the system, however, depends on the entire system and 707 potentially multiple storage, compute, and communication protocol 708 aspects. All have to work properly together to obtain security. 710 For instance, a transport connection between two components of a 711 system is not an end-to-end connection even if it encompasses all the 712 protocol layers up to the application layer. It is not end-to-end, 713 if the information or control function it carries actually extends 714 beyond those components. For instance, just because an e-mail server 715 can read the contents of an e-mail message does not make it a 716 legitimate recipient of the e-mail. 718 This memo also proposes to focus on the "need to know" aspect in 719 systems. Information should not be disclosed, stored, or routed in 720 cleartext through parties that do not absolutely need to have that 721 information. 723 The proposed argument about real ends is as follows: 725 Application functions are best realised by the entities directly 726 serving the users, and when more than one entity is involved, by 727 end-to-end protocols. The role and authority of any additional 728 entities necessary to carry out a function should match their part 729 of the function. No information or control roles should be 730 provided to these additional entities unless it is required by the 731 function they provide. 733 For instance, a particular piece of information may be necessary for 734 the other real endpoint, such as message contents for another user. 735 The same piece of information may not be necessary for any additional 736 parties, unless the information had to do with, say, routing 737 information for the message to reach the other user. When 738 information is only needed by the actual other endpoint, it should be 739 protected and be only relayed to the actual other endpoint. Protocol 740 design should ensure that the additional parties do not have access 741 to the information. 743 Note that it may well be that the easiest design approach is to send 744 all information to a third party and have majority of actual 745 functionality reside in that third party. But this is a case of a 746 clear tradeoff between ease of change by evolving that third party 747 vs. 
providing reasonable security against misuse of information.

   Note that the above "real ends" argument is not limited to communication systems.  Even an application that does not communicate with anyone other than its user may be implemented on top of a distributed system where some information about the user is exposed to untrusted parties.

   The implications for system security also extend beyond information and control aspects.  For instance, poorly designed component protocols can become DoS vectors which are then used to attack other parts of the system.  Availability is an important aspect to consider in the analysis, alongside the others.

3.2.  Trusted networks

   Some systems are thought of as being deployed only in a closed setting, where all the relevant nodes are under direct control of the network administrators.  Technologies developed for such networks tend to be optimized, at least initially, for these environments, and may lack security features necessary for different types of deployments.

   It is well known that many such systems evolve over time, grow, and get used and connected in new ways.  For instance, collaboration and mergers between organizations, and new services for customers, may change the system or its environment.  A system that used to be truly within an administrative domain may suddenly need to cross network boundaries or even run over the Internet.  As a result, it is also well known that it is good to ensure that underlying technologies used in such systems can cope with that evolution, for instance, by having the necessary security capabilities to operate in different environments.

   In general, the outside vs. inside security model is outdated for most situations, due to complex and evolving networks and the need to support a mixture of devices from different sources (e.g., BYOD networks).  Network virtualization also implies that previously clear notions of local area networks and physical proximity may create an entirely different reality from what a simple notion of a local network would suggest.

   Similarly, even trusted, well-managed parties can be problematic, even when operating openly in the Internet.  Systems that collect data from a large number of Internet users, or that are used by a large number of devices, have some inherent issues: large data stores attract attempts to use that data in a manner that is not consistent with the users' interests.  They can also become single points of failure through network management, software, or business failures.  See also [I-D.arkko-arch-infrastructure-centralisation].

3.2.1.  Even closed networks can have compromised nodes

   This memo argues that the situation is even more dire than what was explained above.  It is impossible to ensure that all components in a network are actually trusted.  Even in a closed network with carefully managed components there may be compromised components, and this should be factored into the design of the system and the protocols used in the system.

   For instance, during the Snowden revelations it was reported that internal communication flows of large content providers were compromised in an effort to acquire information from large numbers of end users.
This shows the need to protect not just communications targeted to go over the Internet, but in many cases also internal and control communications.

   Furthermore, there is a danger of compromised nodes, so communications security alone will be insufficient to protect against this.  The defences against this include limiting information within networks to the parties that have a need to know, as well as limiting control capabilities.  This is necessary even when all the nodes are under the control of the same network manager; the network manager needs to assume that some nodes and communications will be compromised, and build a system to mitigate or minimise attacks even under that assumption.

   Even airgapped networks can have these issues, as evidenced, for instance, by the Stuxnet worm.  The Internet is not the only form of connectivity, as most systems include, for instance, USB ports that proved to be the Achilles heel of the targets in the Stuxnet case.  More commonly, every system runs a large amount of software, and it is often not practical or even possible to prevent compromised code even in a high-security setting, let alone in commercial or private networks.  Installation media, physical ports, both open source and proprietary programs, firmware, or even innocent-looking components on a circuit board can be suspect.  In addition, complex underlying computing platforms, such as modern CPUs with their underlying security and management tools, are prone to problems.

   In general, this means that one cannot entirely trust even a closed system in which you picked all the components yourself.  Analysis of the security of many interesting real-world systems now commonly needs to include cross-component attacks, e.g., the use of car radios and other externally communicating devices as part of attacks launched against control components such as the brakes in a car [Savage].

3.3.  Balancing Threats

   Note that not all information needs to be protected, and not all threats can be protected against.  But it is important that the main threats are understood and protected against.

   Sometimes there are higher-level mechanisms that provide safeguards for failures.  For instance, it is very difficult in general to protect against denial-of-service caused by compromised nodes on a communications path.  However, it may be possible to detect that a service has failed.

   Another example is from packet-carrying networks.  Payload traffic that has been properly protected with encryption does not provide much value to an attacker.  For instance, it does not always make sense to encrypt every packet transmission in a packet-carrying system where the traffic is already encrypted at other layers.  But it almost always makes sense to protect control communications and to understand the impacts of compromised nodes, particularly control nodes.

4.  Areas requiring more study

   In addition to the guidelines in Section 5, we suggest there may be value in further study on the topics below, with the goal of producing more concrete guidelines.

   1.  Isolation: Sophisticated users can sometimes deal with adversarial behaviours in applications by using different instances of those applications, for example, differently configured web browsers for use in different contexts.
Applications (including web browsers) and operating systems are also building in isolation via use of different processes or sandboxing.  Protocol artefacts that relate to uses of such isolation mechanisms might be worth considering.  To an extent, the IETF has in practice already recognised some of these issues as being in scope, e.g. when considering the linkability issues with mechanisms such as TLS session tickets, or QUIC connection identifiers.

   2.  Transparency: Certificate Transparency (CT) [RFC6962] has been an effective countermeasure for X.509 certificate mis-issuance, which used to be a known application layer misbehaviour in the public web PKI.  CT can also help with post-facto detection of some infrastructure attacks where BGP or DNS weaknesses have been leveraged so that some certification authority is tricked into issuing a certificate for the wrong entity.  While the context in which CT operates is very constrained (essentially to the public CAs trusted by web browsers), similar approaches could perhaps be useful for other protocols or technologies.  In addition, legislative requirements such as those imposed by the GDPR [GDPRAccess] could lead to a desire to handle internal data structures and databases in ways that are reminiscent of CT, though clearly with significant authorisation being required and without the append-only nature of a CT log.

   3.  Same-Origin Policy: The Same-Origin Policy (SOP) [RFC6454] perhaps already provides an example of how going beyond the RFC 3552 threat model can be useful.  Arguably, the existence of the SOP demonstrates that at least web browsers already consider the RFC 3552 model as being too limited.  (Clearly, differentiating between same and not-same origins implicitly assumes that some origins are not as trustworthy as others.)

   4.  Greasing: The TLS protocol [RFC8446] now supports the use of GREASE [I-D.ietf-tls-grease] as a way to mitigate on-path ossification.  While this technique is not likely to prevent any deliberate misbehaviours, it may provide a proof-of-concept that network protocol mechanisms can have impact in this space, if we spend the time to try to analyse the incentives of the various parties.

   5.  Generalise OAuth Threat Model: The OAuth threat model [RFC6819] provides an extensive list of threats and security considerations for those implementing and deploying OAuth version 2.0 [RFC6749].  It could be useful to attempt to derive a more abstract threat model from that RFC that considers threats in more generic multi-party contexts.  That document is perhaps too detailed to serve as useful generic guidance but does go beyond the Internet threat model from RFC 3552, for example, it says:

         two of the three parties involved in the OAuth protocol may collude to mount an attack against the 3rd party.  For example, the client and authorization server may be under control of an attacker and collude to trick a user to gain access to resources.

   6.  Look again at how well we're securing infrastructure: Some attacks (e.g. against DNS or routing infrastructure) appear to benefit from current infrastructure mechanisms not being deployed, e.g. DNSSEC, RPKI.  In the case of DNSSEC, deployment is still minimal despite much time having elapsed.
This suggests a number of different possible avenues for investigation:

       *  For any protocol dependent on infrastructure like DNS or BGP, we ought to analyse potential outcomes in the event that the relevant infrastructure has been compromised.

       *  Protocol designers perhaps ought to consider post-facto compromise detection mechanisms in the event that it is infeasible to mitigate attacks on infrastructure that is not under local control.

       *  Despite the sunk costs, it may be worth re-considering infrastructure security mechanisms that have not been deployed, and hence are ineffective.

   7.  Trusted Computing: Various trusted computing mechanisms allow placing some additional trust in a particular endpoint.  This can be useful to address some of the issues in this memo:

       *  A network manager of a set of devices may be assured that the devices have not been compromised.

       *  An outside party may be assured that someone who runs a device employs a particular software installation in that device, and that the software runs in a protected environment.

       IETF work such as TEEP [I-D.ietf-teep-architecture] [I-D.ietf-teep-protocol] and RATS [I-D.ietf-rats-eat] may be helpful in providing attestations to other nodes about a particular endpoint, or in lifecycle management of such endpoints.

       One should note, however, that it is often not possible to fully protect endpoints (see, e.g., [Kocher2019] [Lipp2018] [I-D.taddei-smart-cless-introduction] [I-D.mcfadden-smart-endpoint-taxonomy-for-cless]).  And of course, a trusted computing environment may be set up and controlled by a party that itself is not trusted; that a server's owner runs the server in a trusted computing setting does not change the fact that the client and the server's owner may have different interests.  As a result, there is a need to prepare for the possibility that another party in a communication is not entirely trusted.

   8.  Trust Boundaries: Traditional forms of communication equipment have morphed into today's virtualized environments, where new trust boundaries exist, e.g., between different virtualisation layers.  And an application might consider itself trusted while not entirely trusting the underlying operating system.  A browser application wants to protect itself against Javascript loaded from a website, while the website considers itself and the Javascript an application that it wants to protect from the browser.  In general, there are multiple parties even in a single device, with differing interests, including some that have (or claim to have) the interests of the human user in mind.

   9.  Develop a BCP for privacy considerations: It may be time for the IETF to develop a BCP for privacy considerations, possibly starting from [RFC6973].

   10.  Re-consider protocol design "lore": It could be that this discussion demonstrates that it is timely to reconsider some protocol design "lore", as for example is done in [I-D.iab-protocol-maintenance].  More specifically, protocol extensibility mechanisms may inadvertently create vectors for abuse-cases, given that designers cannot fully analyse their impact at the time a new protocol is defined or standardised.  One might conclude that a lack of extensibility could be a virtue for some new protocols, in contrast to earlier assumptions.
As pointed out by one commenter, though, people can find ways to extend things regardless, if they feel the need.

   11.  Consider the user perspective: [I-D.nottingham-for-the-users] argues that, in relevant cases where there are conflicting requirements, the "IETF considers end users as its highest priority concern."  Doing so seems consistent with the expanded threat model being argued for here, so may indicate that a BCP in that space could also be useful.

   12.  Have explicit agreements: When users and their devices provide information to network entities, it would be beneficial to have an opportunity for the users to state their requirements regarding the use of the information provided in this way.  While the actual use of such requirements and the willingness of network entities to agree to them remains to be seen, at the moment even the technical means of doing this are limited.  For instance, it would be beneficial to be able to embed usage requirements within popular data formats.

       As appropriate, users should be made aware of the choices made in a particular design, and avoid designs or products that protect against some threats but are wide open to other serious issues.  (SF doesn't know what that last bit means;-)

   13.  Perform end-to-end protection via other parties: Information passed via another party who does not intrinsically need the information to perform its function should be protected end-to-end to its intended recipient.  This guideline is general, and holds equally for sending TCP/IP packets, TLS connections, or application-layer interactions.  As [RFC8546] notes, it is a useful design rule to avoid "accidental invariance" (the deployment of on-path devices that over time start to make assumptions about protocols).  However, it is also a necessary security design rule to avoid "accidental disclosure", where information originally thought to be benign and untapped over time becomes a significant information leak.  This guideline can also be applied for different aspects of security, e.g., confidentiality and integrity protection, depending on what the specific need for information is in the other parties.

       The main reason that further study is needed is that the key management consequences can be significant: once one enters into a multi-party world, securely managing keys for all entities can be so burdensome that deployment just doesn't happen.

5.  Guidelines

   As [RFC3935] says:

      We embrace technical concepts such as decentralized control, edge-user empowerment and sharing of resources, because those concepts resonate with the core values of the IETF community.

   To be more specific, this memo suggests the following guidelines for protocol designers:

   1.  Consider first principles in protecting information and systems, rather than following a specific pattern such as protecting information in a particular way or only at a particular protocol layer.  It is necessary to understand what components can be compromised, where interests may or may not be aligned, and which parties have a legitimate role in a specific information or control task.

   2.  Consider how you depend on infrastructure.
1054 5. Guidelines

1056 As [RFC3935] says:

1058 We embrace technical concepts such as decentralized control, edge- 1059 user empowerment and sharing of resources, because those concepts 1060 resonate with the core values of the IETF community.

1062 To be more specific, this memo suggests the following guidelines for 1063 protocol designers:

1065 1. Consider first principles in protecting information and systems, 1066 rather than following a specific pattern such as protecting 1067 information in a particular way or only at a particular protocol 1068 layer. It is necessary to understand what components can be 1069 compromised, where interests may or may not be aligned, and which 1070 parties have a legitimate role in handling a specific piece of 1071 information or in a given control task.

1073 2. Consider how you depend on infrastructure. For any protocol 1074 directly or indirectly dependent on infrastructure like DNS or 1075 BGP, analyse potential outcomes in the event that the relevant 1076 infrastructure has been compromised. Such attacks do occur in the 1077 wild [DeepDive].

1079 3. Protocol endpoints are commonly no longer executed on what used 1080 to be understood as a host system [StackEvo]. The web and 1081 Javascript model clearly differs from traditional host models, 1082 but so do many server-side deployments, thanks to 1083 virtualisation. At protocol design time, assume that all 1084 endpoints will be run in virtualised environments where co- 1085 tenants and (sometimes) hypervisors are adversaries, and then 1086 analyse such scenarios.

1088 4. Once you have something, do not pass it on to others without 1089 serious consideration. In other words, minimize information 1090 passed to others to guard against the potential compromise of 1091 those parties. As recommended in [RFC6973], data minimisation and 1092 additional encryption can be helpful: if applications never 1093 see data, or a cleartext form of data, then they 1094 have a harder time misbehaving. Similarly, not defining new 1095 long-term identifiers, and not exposing existing ones, helps to 1096 minimise risk. (A brief sketch of this appears at the end of this section.)

1098 5. Minimize passing of control functions to others. Any passing of 1099 control functions to other parties should be minimized to guard 1100 against the potential misuse of those control functions. This 1101 applies to both technical control functions (e.g., nodes that assign 1102 resources) and process control functions (e.g., the ability to allocate 1103 numbers or develop extensions). Control functions of all kinds 1104 can become a matter of contention and power struggle, even in 1105 cases where their actual function is minimal, as we saw with the 1106 IANA transition debates.

1108 6. Where possible, avoid centralized resources. While centralized 1109 components, resources, and functions are often simpler, there 1110 can be grave issues associated with them, for example meta-data 1111 leakage. Designers should balance the benefits of centralized 1112 resources or control points against the threats that arise. If 1113 centralization cannot be avoided, find a way to allow the centralized 1114 resources to be selectable, depending on context and user 1115 settings.

1117 7. Treat with suspicion the parties with which your protocol 1118 endpoints interact, even if the communications are encrypted. Other 1119 endpoints may misuse any information or control opportunity in 1120 the communication. Similarly, even endpoints within your own 1121 system need to be treated with suspicion, as some may become 1122 compromised.

1124 8. Consider abuse-cases. Protocol developers are typically most 1125 interested in a few specific use-cases for which they need 1126 solutions. Expanding the threat model to consider adversarial 1127 behaviours [AbuseCases] calls for significant attention to be 1128 paid to potential abuses of whatever new or re-purposed 1129 technology is being considered.

1131 9. Consider recovery from compromise or attack during protocol 1132 design - all widely used protocols will at some time be subject 1133 to successful attack, whether that is due to deployment or 1134 implementation error, or, less commonly, due to protocol design 1135 flaws. For example, recent work on multiparty messaging 1136 security primitives [I-D.ietf-mls-architecture] considers "post- 1137 compromise security" as an inherent part of the design of that 1138 protocol.

1140 10.
Consider linkability. As discussed in [RFC6973], the ability to 1141 link or correlate different protocol messages with one another, 1142 or with external sources of information (e.g., public or private 1143 databases), can create privacy or security issues. As an 1144 example, re-use of TLS session tickets can enable an observer to 1145 associate multiple TLS sessions regardless of changes in source 1146 or destination addressing, which may reduce privacy or help a 1147 bad actor in targeting an attack. The same effects may result 1148 from any other means by which protocol exchanges can be linked to one 1149 another. Protocol designs that aim to prevent such linkage may 1150 have fewer unexpected or unwanted side-effects when 1151 deployed.

1153 But when applying these guidelines, do not take them as a blanket reason 1154 to provide no information to anyone, to (impractically) insist on 1155 encrypting everything, or to adopt other extreme measures. Designers need to 1156 be aware of the different threats facing their system, and deal with 1157 the most serious ones (of which there are typically many) within 1158 their applicable resource constraints.
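As a non-normative illustration of guideline 4 (and, via the per-exchange identifier, of guideline 10), the sketch below forwards only the fields another party actually needs and replaces a stable identifier with a one-off value. It uses only the Python standard library; the field names and the choice of which fields are "needed" are purely hypothetical.

   # Illustrative sketch only: data minimisation before passing a
   # record on to another party.
   import hashlib
   import secrets

   def minimise(record: dict, needed_fields: set) -> dict:
       """Forward only the fields the other party actually needs."""
       out = {k: v for k, v in record.items() if k in needed_fields}
       # Replace a stable identifier with a per-exchange value so that
       # separate exchanges cannot easily be linked (cf. guideline 10).
       if "device_id" in out:
           salt = secrets.token_bytes(16)  # discarded after use
           out["device_id"] = hashlib.sha256(
               salt + out["device_id"].encode()).hexdigest()[:16]
       return out

   report = {"device_id": "serial-0042", "temperature": 21.5,
             "owner_email": "alice@example.com", "location": "kitchen"}
   # Only the needed fields leave the endpoint; the e-mail address and
   # the location never do.
   print(minimise(report, needed_fields={"device_id", "temperature"}))

The same pattern applies at any layer: whether the record is a JSON object, a protocol header, or a log entry, fields that the next party does not need are best removed, or made unlinkable, before they leave the endpoint.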
1160 6. Potential changes in BCP 72/RFC 3552

1162 BCP 72/RFC 3552 [RFC3552] defines an "Internet Threat Model" and 1163 provides guidance on writing Security Considerations sections in 1164 other RFCs. It is important to note that BCP 72 is (or should be:-) 1165 used by all IETF participants when developing protocols. Potential 1166 changes to RFC 3552 therefore need to be brief - IETF participants 1167 cannot in general be expected to devote huge amounts of time to 1168 developing their security considerations text. Potential changes 1169 also need to be easily understood, as IETF participants from all 1170 backgrounds need to be able to use BCP 72. In this section we 1171 provide a couple of initial suggested changes to BCP 72 that will 1172 need to be further developed as part of this work. (For example, it 1173 may be possible to include some of the guidelines from Section 5 as 1174 those are further developed.)

1176 As evidenced by the OAuth quote in Section 4, it can be useful to 1177 consider the effect of compromised endpoints on those that are not 1178 compromised. It may therefore be interesting to consider the 1179 consequences that would follow from a change to [RFC3552] that 1180 recognises how the landscape has changed since 2003.

1182 One initial, draft proposal for such a change could be:

1184 OLD:

1186 In general, we assume that the end-systems engaging in a protocol 1187 exchange have not themselves been compromised. Protecting against 1188 an attack when one of the end-systems has been compromised is 1189 extraordinarily difficult. It is, however, possible to design 1190 protocols which minimize the extent of the damage done under these 1191 circumstances.

1193 NEW:

1195 In general, we assume that the end-system engaging in a protocol 1196 exchange has not itself been compromised. Protecting against an 1197 attack on a protocol implementation itself is extraordinarily 1198 difficult. It is, however, possible to design protocols which 1199 minimize the extent of the damage done when the other parties in a 1200 protocol become compromised or do not act in the best interests of 1201 the end-system implementing a protocol.

1203 In addition, the following new section could be added to discuss the 1204 capabilities required to mount an attack:

1206 NEW:

1208 3.x. Other endpoint compromise

1210 In this attack, the other endpoints in the protocol become 1211 compromised. As a result, they can, for instance, misuse any 1212 information that the end-system implementing a protocol has sent 1213 to the compromised endpoint.

1215 System and architecture aspects also need more attention 1216 from Internet technology developers and standards organizations. 1217 Here is one possible addition:

1219 NEW:

1221 The design of any Internet technology should start from an 1222 understanding of the participants in a system, their roles, and 1223 the extent to which they should have access to information and the 1224 ability to control other participants.

1226 7. Potential Changes in BCP 188/RFC 7258

1228 Additional guidelines may also be necessary in BCP 188/RFC 1229 7258 [RFC7258], which specifies how IETF work should take 1230 pervasive monitoring into account.

1232 An initial, draft suggestion for a starting point for those changes 1233 could be to add the following paragraph after the 2nd paragraph in 1234 Section 2:

1236 NEW:

1238 PM attacks include those cases where information collected by a 1239 legitimate protocol participant is misused for PM purposes. The 1240 attacks also include those cases where a protocol or network 1241 architecture results in centralized data storage or control 1242 functions relating to many users, raising the risk of said misuse.

1244 8. Conclusions

1246 At this stage we do not think it appropriate to claim that any strong 1247 conclusion can be reached based on the above. We do, however, claim 1248 that this is a topic that is worth discussion and further work.

1250 To start with, Internet technology developers need to be more aware 1251 of the issues beyond communications security, and need to consider them in 1252 design. At the IETF it would be beneficial to include some of these 1253 considerations in the usual systematic security analysis of 1254 technologies under development.

1256 In particular, when the IETF develops infrastructure technology for 1257 the Internet (such as routing or naming systems), considering the 1258 impacts of data generated by those technologies is important. 1259 Minimising data collection from users, minimising the number of parties 1260 exposed to user data, and protecting data that is relayed or stored 1261 in systems should be a priority.

1263 A key focus area at the IETF has been the security of transport 1264 protocols, and how transport layer security can best be used to 1265 provide the right security for various applications. However, more 1266 work is needed on equally broadly deployed tools for minimising 1267 or obfuscating the information provided by users to other entities, and on 1268 the use of end-to-end security through entities that are involved in 1269 the protocol exchange but do not need to know everything that is 1270 being passed through them.

1272 Comments on the issues discussed in this memo are gladly taken either 1273 privately or on the model-t mailing list 1274 (https://www.ietf.org/mailman/listinfo/Model-t).

1276 Further work includes the items listed in Section 4 and Section 5, 1277 as well as compiling categories of vulnerabilities that need to be 1278 addressed and examples of specific attacks, and continuing the analysis 1279 of the situation and of possible new remedies.

1281 It is also necessary to find suitable use cases that the IETF can 1282 address by further work in this space.
A completely adversarial 1283 situation is not really workable, but there are situations where some 1284 parties are trustworthy, and wish to co-operate to show each other 1285 that this is really the case. In these situations data minimisation 1286 can be beneficial to both parties, attestation can provide additional trust, 1287 detection of incidents can alert the parties to action, and so on.

1289 9. Informative References

1291 [AbuseCases] 1292 McDermott, J. and C. Fox, "Using abuse case models for 1293 security requirements analysis", IEEE Annual Computer 1294 Security Applications Conference (ACSAC'99), 1295 https://www.acsac.org/1999/papers/wed-b-1030-john.pdf , 1296 1999.

1298 [Attitude] "User Perceptions of Sharing, Advertising, and Tracking", 1299 Symposium on Usable Privacy and Security (SOUPS), 1300 https://www.usenix.org/conference/soups2015/proceedings/ 1301 presentation/chanchary , 2015.

1303 [avleak] Cox, J., "Leaked Documents Expose the Secretive Market for 1304 Your Web Browsing Data", 1305 https://www.vice.com/en_us/article/qjdkq7/ 1306 avast-antivirus-sells-user-browsing-data-investigation , 1307 February 2020.

1309 [BgpHijack]Sermpezis, P., Kotronis, V., Dainotti, A., and X. 1310 Dimitropoulos, "A survey among network operators on BGP 1311 prefix hijacking", ACM SIGCOMM Computer Communication 1312 Review 48, no. 1 (2018): 64-69, 1313 https://arxiv.org/pdf/1801.02918.pdf , 2018.

1315 [Bloatware]Gamba, G., Rashed, M., Razaghpanah, A., Tapiado, J., and 1316 N. Vallina, "An Analysis of Pre-installed Android 1317 Software", arXiv preprint arXiv:1905.02713 (2019) , 2019.

1319 [Cambridge]Isaak, J. and M. Hanna, "User Data Privacy: Facebook, 1320 Cambridge Analytica, and Privacy Protection", Computer 1321 51.8 (2018): 56-59, https://ieeexplore.ieee.org/stamp/ 1322 stamp.jsp?arnumber=8436400 , 2018.

1324 [CommandAndControl] 1325 Botnet, ., "Creating botnet C&C server. What architecture 1326 should I use? IRC? HTTP?", Stackexchange.com question, 1327 https://security.stackexchange.com/questions/100577/ 1328 creating-botnet-cc-server-what-architecture-should-i-use- 1329 irc-http , 2014.

1331 [Curated] Hammad, M., Garcia, J., and S. Malek, "A large-scale 1332 empirical study on the effects of code obfuscations on 1333 Android apps and anti-malware products", ACM International 1334 Conference on Software Engineering 2018, 1335 https://www.ics.uci.edu/~seal/ 1336 publications/2018ICSE_Hammad.pdf , 2018.

1338 [DeepDive] Krebs on Security, ., "A Deep Dive on the Recent 1339 Widespread DNS Hijacking Attacks", krebsonsecurity.com 1340 blog, https://krebsonsecurity.com/2019/02/a-deep-dive-on- 1341 the-recent-widespread-dns-hijacking-attacks/ , 2019.

1343 [DynDDoS] York, K., "Dyn's Statement on the 10/21/2016 DNS DDoS 1344 Attack", Company statement: https://dyn.com/blog/ 1345 dyn-statement-on-10212016-ddos-attack/ , 2016.

1347 [GDPRAccess] 1348 EU, ., "Right of access by the data subject", Article 15, 1349 GDPR, https://gdpr-info.eu/art-15-gdpr/ , February 2020.

1351 [HijackDet]Schlamp, J., Holz, R., Gasser, O., Korste, A., Jacquemart, 1352 Q., Carle, G., and E. Biersack, "Investigating the nature 1353 of routing anomalies: Closing in on subprefix hijacking 1354 attacks", International Workshop on Traffic Monitoring and 1355 Analysis, pp. 173-187. Springer, Cham, 1356 https://www.net.in.tum.de/fileadmin/bibtex/publications/ 1357 papers/schlamp_TMA_1_2015.pdf , 2015.

1359 [Home] Nthala, N. and I.
Flechais, "Rethinking home network 1360 security", European Workshop on Usable Security 1361 (EuroUSEC), https://ora.ox.ac.uk/objects/ 1362 uuid:e2460f50-579b-451b-b14e-b7be2decc3e1/download_file?sa 1363 fe_filename=bare_conf_EuroUSEC2018.pdf&file_format=applica 1364 tion%2Fpdf&type_of_work=Conference+item , 2018. 1366 [I-D.arkko-arch-dedr-report] 1367 Arkko, J. and T. Hardie, "Report from the IAB workshop on 1368 Design Expectations vs. Deployment Reality in Protocol 1369 Development", draft-arkko-arch-dedr-report-00 (work in 1370 progress), 4 November 2019, 1371 . 1374 [I-D.arkko-arch-infrastructure-centralisation] 1375 Arkko, J., "Centralised Architectures in Internet 1376 Infrastructure", draft-arkko-arch-infrastructure- 1377 centralisation-00 (work in progress), 4 November 2019, 1378 . 1381 [I-D.arkko-arch-internet-threat-model] 1382 Arkko, J., "Changes in the Internet Threat Model", draft- 1383 arkko-arch-internet-threat-model-01 (work in progress), 8 1384 July 2019, 1385 . 1388 [I-D.farrell-etm] 1389 Farrell, S., "We're gonna need a bigger threat model", 1390 draft-farrell-etm-03 (work in progress), 6 July 2019, 1391 . 1394 [I-D.iab-protocol-maintenance] 1395 Thomson, M., "The Harmful Consequences of the Robustness 1396 Principle", draft-iab-protocol-maintenance-04 (work in 1397 progress), 3 November 2019, 1398 . 1401 [I-D.ietf-httpbis-expect-ct] 1402 estark@google.com, e., "Expect-CT Extension for HTTP", 1403 draft-ietf-httpbis-expect-ct-08 (work in progress), 9 1404 December 2018, 1405 . 1408 [I-D.ietf-mls-architecture] 1409 Omara, E., Beurdouche, B., Rescorla, E., Inguva, S., Kwon, 1410 A., and A. Duric, "The Messaging Layer Security (MLS) 1411 Architecture", draft-ietf-mls-architecture-04 (work in 1412 progress), 26 January 2020, 1413 . 1416 [I-D.ietf-quic-transport] 1417 Iyengar, J. and M. Thomson, "QUIC: A UDP-Based Multiplexed 1418 and Secure Transport", draft-ietf-quic-transport-25 (work 1419 in progress), 21 January 2020, 1420 . 1423 [I-D.ietf-rats-eat] 1424 Mandyam, G., Lundblade, L., Ballesteros, M., and J. 1425 O'Donoghue, "The Entity Attestation Token (EAT)", draft- 1426 ietf-rats-eat-02 (work in progress), 9 January 2020, 1427 . 1430 [I-D.ietf-teep-architecture] 1431 Pei, M., Tschofenig, H., Thaler, D., and D. Wheeler, 1432 "Trusted Execution Environment Provisioning (TEEP) 1433 Architecture", draft-ietf-teep-architecture-05 (work in 1434 progress), 12 December 2019, 1435 . 1438 [I-D.ietf-teep-protocol] 1439 Tschofenig, H., Pei, M., Wheeler, D., and D. Thaler, 1440 "Trusted Execution Environment Provisioning (TEEP) 1441 Protocol", draft-ietf-teep-protocol-00 (work in progress), 1442 12 December 2019, 1443 . 1446 [I-D.ietf-tls-esni] 1447 Rescorla, E., Oku, K., Sullivan, N., and C. Wood, 1448 "Encrypted Server Name Indication for TLS 1.3", draft- 1449 ietf-tls-esni-05 (work in progress), 4 November 2019, 1450 . 1453 [I-D.ietf-tls-grease] 1454 Benjamin, D., "Applying GREASE to TLS Extensibility", 1455 draft-ietf-tls-grease-04 (work in progress), 22 August 1456 2019, . 1459 [I-D.lazanski-smart-users-internet] 1460 Lazanski, D., "An Internet for Users Again", draft- 1461 lazanski-smart-users-internet-00 (work in progress), 8 1462 July 2019, 1463 . 1466 [I-D.mcfadden-smart-endpoint-taxonomy-for-cless] 1467 McFadden, M., "Endpoint Taxonomy for CLESS", draft- 1468 mcfadden-smart-endpoint-taxonomy-for-cless-01 (work in 1469 progress), 5 February 2020, 1470 . 
1473 [I-D.nottingham-for-the-users] 1474 Nottingham, M., "The Internet is for End Users", draft- 1475 nottingham-for-the-users-09 (work in progress), 22 July 1476 2019, 1477 . 1480 [I-D.taddei-smart-cless-introduction] 1481 Taddei, A., Wueest, C., Roundy, K., and D. Lazanski, 1482 "Capabilities and Limitations of an Endpoint-only Security 1483 Solution", draft-taddei-smart-cless-introduction-02 (work 1484 in progress), 9 January 2020, 1485 . 1488 [Kocher2019] 1489 Kocher, P., Horn, J., Fogh, A., Genkin, D., Gruss, D., 1490 Haas, W., Hamburg, M., Lipp, M., Mangard, S., Prescher, 1491 T., Schwarz, M., and Y. Yarom, "Spectre Attacks: 1492 Exploiting Speculative Execution", 40th IEEE Symposium on 1493 Security and Privacy (S&P'19) , 2019. 1495 [LeakyBuckets] 1496 Chickowski, E., "Leaky Buckets: 10 Worst Amazon S3 1497 Breaches", Bitdefender blog, 1498 https://businessinsights.bitdefender.com/ 1499 worst-amazon-breaches , 2018. 1501 [Lipp2018] Lipp, M., Schwarz, M., Gruss, D., Prescher, T., Haas, W., 1502 Fogh, A., Horn, J., Mangard, S., Kocher, P., Genkin, D., 1503 Yarom, Y., and M. Hamburg, "Meltdown: Reading Kernel 1504 Memory from User Space", 27th USENIX Security Symposium 1505 (USENIX Security 18) , 2018. 1507 [Mailbug] Englehardt, S., Han, J., and A. Narayanan, "I never signed 1508 up for this! Privacy implications of email tracking", 1509 Proceedings on Privacy Enhancing Technologies 2018.1 1510 (2018): 109-126, https://www.degruyter.com/downloadpdf/j/ 1511 popets.2018.2018.issue-1/popets-2018-0006/ 1512 popets-2018-0006.pdf , 2018. 1514 [MeltdownAndSpectre] 1515 CISA, ., "Meltdown and Spectre Side-Channel Vulnerability 1516 Guidance", Alert (TA18-004A), 1517 https://www.us-cert.gov/ncas/alerts/TA18-004A , 2018. 1519 [Passwords]com, haveibeenpwned., "Pwned Passwords", Website 1520 https://haveibeenpwned.com/Passwords , 2019. 1522 [RFC1958] Carpenter, B., Ed., "Architectural Principles of the 1523 Internet", RFC 1958, DOI 10.17487/RFC1958, June 1996, 1524 . 1526 [RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC 1527 Text on Security Considerations", BCP 72, RFC 3552, 1528 DOI 10.17487/RFC3552, July 2003, 1529 . 1531 [RFC3935] Alvestrand, H., "A Mission Statement for the IETF", 1532 BCP 95, RFC 3935, DOI 10.17487/RFC3935, October 2004, 1533 . 1535 [RFC4655] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path 1536 Computation Element (PCE)-Based Architecture", RFC 4655, 1537 DOI 10.17487/RFC4655, August 2006, 1538 . 1540 [RFC6454] Barth, A., "The Web Origin Concept", RFC 6454, 1541 DOI 10.17487/RFC6454, December 2011, 1542 . 1544 [RFC6480] Lepinski, M. and S. Kent, "An Infrastructure to Support 1545 Secure Internet Routing", RFC 6480, DOI 10.17487/RFC6480, 1546 February 2012, . 1548 [RFC6749] Hardt, D., Ed., "The OAuth 2.0 Authorization Framework", 1549 RFC 6749, DOI 10.17487/RFC6749, October 2012, 1550 . 1552 [RFC6797] Hodges, J., Jackson, C., and A. Barth, "HTTP Strict 1553 Transport Security (HSTS)", RFC 6797, 1554 DOI 10.17487/RFC6797, November 2012, 1555 . 1557 [RFC6819] Lodderstedt, T., Ed., McGloin, M., and P. Hunt, "OAuth 2.0 1558 Threat Model and Security Considerations", RFC 6819, 1559 DOI 10.17487/RFC6819, January 2013, 1560 . 1562 [RFC6962] Laurie, B., Langley, A., and E. Kasper, "Certificate 1563 Transparency", RFC 6962, DOI 10.17487/RFC6962, June 2013, 1564 . 1566 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 1567 Morris, J., Hansen, M., and R. 
Smith, "Privacy 1568 Considerations for Internet Protocols", RFC 6973, 1569 DOI 10.17487/RFC6973, July 2013, 1570 . 1572 [RFC7258] Farrell, S. and H. Tschofenig, "Pervasive Monitoring Is an 1573 Attack", BCP 188, RFC 7258, DOI 10.17487/RFC7258, May 1574 2014, . 1576 [RFC7469] Evans, C., Palmer, C., and R. Sleevi, "Public Key Pinning 1577 Extension for HTTP", RFC 7469, DOI 10.17487/RFC7469, April 1578 2015, . 1580 [RFC7540] Belshe, M., Peon, R., and M. Thomson, Ed., "Hypertext 1581 Transfer Protocol Version 2 (HTTP/2)", RFC 7540, 1582 DOI 10.17487/RFC7540, May 2015, 1583 . 1585 [RFC7817] Melnikov, A., "Updated Transport Layer Security (TLS) 1586 Server Identity Check Procedure for Email-Related 1587 Protocols", RFC 7817, DOI 10.17487/RFC7817, March 2016, 1588 . 1590 [RFC8240] Tschofenig, H. and S. Farrell, "Report from the Internet 1591 of Things Software Update (IoTSU) Workshop 2016", 1592 RFC 8240, DOI 10.17487/RFC8240, September 2017, 1593 . 1595 [RFC8446] Rescorla, E., "The Transport Layer Security (TLS) Protocol 1596 Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018, 1597 . 1599 [RFC8484] Hoffman, P. and P. McManus, "DNS Queries over HTTPS 1600 (DoH)", RFC 8484, DOI 10.17487/RFC8484, October 2018, 1601 . 1603 [RFC8546] Trammell, B. and M. Kuehlewind, "The Wire Image of a 1604 Network Protocol", RFC 8546, DOI 10.17487/RFC8546, April 1605 2019, . 1607 [RFC8555] Barnes, R., Hoffman-Andrews, J., McCarney, D., and J. 1608 Kasten, "Automatic Certificate Management Environment 1609 (ACME)", RFC 8555, DOI 10.17487/RFC8555, March 2019, 1610 . 1612 [Saltzer] Saltzer, J.H., Reed, D.P., and D.D. Clark, "End-To-End 1613 Arguments in System Design", ACM TOCS, Vol 2, Number 4, pp 1614 277-288 , November 1984. 1616 [Savage] Savage, S., "Modern Automotive Vulnerabilities: Causes, 1617 Disclosures, and Outcomes", USENIX , 2016. 1619 [SmartTV] Malkin, N., Bernd, J., Johnson, M., and S. Egelman, "What 1620 Can't Data Be Used For? Privacy Expectations about Smart 1621 TVs in the U.S.", European Workshop on Usable Security 1622 (Euro USEC), https://www.ndss-symposium.org/wp- 1623 content/uploads/2018/06/ 1624 eurousec2018_16_Malkin_paper.pdf" , 2018. 1626 [StackEvo] Trammell, B., Thomson, M., Howard, L., and T. Hardie, 1627 "What Is an Endpoint?", Unpublished work, 1628 https://github.com/stackevo/endpoint-draft/blob/master/ 1629 draft-trammell-whats-an-endpoint.md , 2017. 1631 [Sybil] Viswanath, B., Post, A., Gummadi, K., and A. Mislove, "An 1632 analysis of social network-based sybil defenses", ACM 1633 SIGCOMM Computer Communication Review 41(4), 363-374, 1634 https://conferences.sigcomm.org/sigcomm/2010/papers/ 1635 sigcomm/p363.pdf , 2011. 1637 [TargetAttack] 1638 Osborne, C., "How hackers stole millions of credit card 1639 records from Target", ZDNET, 1640 https://www.zdnet.com/article/how-hackers-stole-millions- 1641 of-credit-card-records-from-target/ , 2014. 1643 [Toys] Chu, G., Apthorpe, N., and N. Feamster, "Security and 1644 Privacy Analyses of Internet of Things Childrens' Toys", 1645 IEEE Internet of Things Journal 6.1 (2019): 978-985, 1646 https://arxiv.org/pdf/1805.02751.pdf , 2019. 1648 [Tracking] Ermakova, T., Fabian, B., Bender, B., and K. Klimek, "Web 1649 Tracking-A Literature Review on the State of Research", 1650 Proceedings of the 51st Hawaii International Conference on 1651 System Sciences, https://scholarspace.manoa.hawaii.edu/ 1652 bitstream/10125/50485/paper0598.pdf , 2018. 1654 [Troll] Stewart, L., Arif, A., and K. 
Starbird, "Examining trolls 1655 and polarization with a retweet network", ACM Workshop on 1656 Misinformation and Misbehavior Mining on the Web, 1657 https://faculty.washington.edu/kstarbi/ 1658 examining-trolls-polarization.pdf , 2018. 1660 [Unread] Obar, J. and A. Oeldorf, "The biggest lie on the 1661 internet{:} Ignoring the privacy policies and terms of 1662 service policies of social networking services", 1663 Information, Communication and Society (2018): 1-20 , 1664 2018. 1666 [Vpns] Khan, M., DeBlasio, J., Voelker, G., Snoeren, A., Kanich, 1667 C., and N. Vallina, "An empirical analysis of the 1668 commercial VPN ecosystem", ACM Internet Measurement 1669 Conference 2018 (pp. 443-456), 1670 https://eprints.networks.imdea.org/1886/1/ 1671 imc18-final198.pdf , 2018. 1673 Appendix A. Acknowledgements 1675 The authors would like to thank the IAB: 1677 Alissa Cooper, Wes Hardaker, Ted Hardie, Christian Huitema, Zhenbin 1678 Li, Erik Nordmark, Mark Nottingham, Melinda Shore, Jeff Tantsura, 1679 Martin Thomson, Brian Trammel, Mirja Kuhlewind, and Colin Perkins. 1681 The authors would also like to thank the participants of the IETF 1682 SAAG meeting where this topic was discussed: 1684 Harald Alvestrand, Roman Danyliw, Daniel Kahn Gilmore, Wes Hardaker, 1685 Bret Jordan, Ben Kaduk, Dominique Lazanski, Eliot Lear, Lawrence 1686 Lundblade, Kathleen Moriarty, Kirsty Paine, Eric Rescorla, Ali 1687 Rezaki, Mohit Sethi, Ben Schwartz, Dave Thaler, Paul Turner, David 1688 Waltemire, and Jeffrey Yaskin. 1690 The authors would also like to thank the participants of the IAB 2019 1691 DEDR workshop: 1693 Tuomas Aura, Vittorio Bertola, Carsten Bormann, Stephane Bortzmeyer, 1694 Alissa Cooper, Hannu Flinck, Carl Gahnberg, Phillip Hallam-Baker, Ted 1695 Hardie, Paul Hoffman, Christian Huitema, Geoff Huston, Konstantinos 1696 Komaitis, Mirja Kuhlewind, Dirk Kutscher, Zhenbin Li, Julien 1697 Maisonneuve, John Mattson, Moritz Muller, Joerg Ott, Lucas Pardue, 1698 Jim Reid, Jan-Frederik Rieckers, Mohit Sethi, Melinda Shore, Jonne 1699 Soininen, Andrew Sullivan, and Brian Trammell. 1701 The authors would also like to thank the participants of the November 1702 2016 meeting at the IETF: 1704 Carsten Bormann, Randy Bush, Tommy C, Roman Danyliw, Ted Hardie, 1705 Christian Huitema, Ben Kaduk, Dirk Kutscher, Dominique Lazanski, Eric 1706 Rescorla, Ali Rezaki, Mohit Sethi, Melinda Shore, Martin Thomson, and 1707 Robin Wilton ... (missing many people... did we have minutes other 1708 than the list of actions?) ... 1710 Finally, the authors would like to thank numerous other people for 1711 insightful comments and discussions in this space. 1713 Authors' Addresses 1715 Ericsson 1716 Jari Arkko 1717 FI- 1718 Finland 1720 Email: jari.arkko@piuha.net 1722 Stephen Farrell 1723 Trinity College Dublin 1724 Ireland 1726 Email: stephen.farrell@cs.tcd.ie