Network Working Group                                        S. Farrell
Internet-Draft                                 Trinity College Dublin
Intended status: Informational                            July 6, 2019
Expires: January 7, 2020

                 We're gonna need a bigger threat model
                         draft-farrell-etm-03

Abstract

We argue that an expanded threat model is needed for Internet protocol development as protocol endpoints can no longer be considered to be generally trustworthy for any general definition of "trustworthy."
Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 7, 2020.

Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   3
2.  Examples of deliberate adversarial behaviour in applications     4
2.1.  Malware in curated application stores  . . . . . . . . . .   4
2.2.  Virtual private networks (VPNs)  . . . . . . . . . . . . .   5
2.3.  Compromised (home) networks  . . . . . . . . . . . . . . .   5
2.4.  Web browsers  . . . . . . . . . . . . . . . . . . . . . . .   5
2.5.  Web site policy deception  . . . . . . . . . . . . . . . .   5
2.6.  Tracking bugs in mail  . . . . . . . . . . . . . . . . . .   6
2.7.  Troll farms in online social networks  . . . . . . . . . .   6
2.8.  Smart televisions  . . . . . . . . . . . . . . . . . . . .   6
2.9.  So-called Internet of things  . . . . . . . . . . . . . . .   7
2.10. Attacks leveraging compromised high-level DNS infrastructure 7
2.11. BGP hijacking  . . . . . . . . . . . . . . . . . . . . . .   8
3.  Inadvertent adversarial behaviours  . . . . . . . . . . . . .   8
4.  Possible directions for an expanded threat model  . . . . . .   9
4.1.  Develop a BCP for privacy considerations  . . . . . . . . .  10
4.2.  Consider the user perspective  . . . . . . . . . . . . . .  10
4.3.  Consider ABuse-cases as well as use-cases  . . . . . . . .  10
4.4.  Re-consider protocol design "lore"  . . . . . . . . . . . .  10
4.5.  Isolation  . . . . . . . . . . . . . . . . . . . . . . . .  10
4.6.  Transparency  . . . . . . . . . . . . . . . . . . . . . . .  11
4.7.  Minimise  . . . . . . . . . . . . . . . . . . . . . . . . .  11
4.8.  Same-Origin Policy  . . . . . . . . . . . . . . . . . . . .  11
4.9.  Greasing  . . . . . . . . . . . . . . . . . . . . . . . . .  11
4.10. Generalise OAuth Threat Model  . . . . . . . . . . . . . .  12
4.11. One (or more) endpoint may be compromised  . . . . . . . .  12
4.12. Look again at how well we're securing infrastructure  . . .  12
4.13. Consider recovery from attack as part of protocol design     13
4.14. Don't think in terms of hosts  . . . . . . . . . . . . . .  13
5.  Conclusions . . . . . . . . . . . . . . . . . . . . . . . . .  13
6.  Security Considerations . . . . . . . . . . . . . . . . . . .  14
7.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  14
8.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  14
9.  References  . . . . . . . . . . . . . . . . . . . . . . . . .  14
9.1.  Informative References  . . . . . . . . . . . . . . . . . .  14
9.2.  URIs  . . . . . . . . . . . . . . . . . . . . . . . . . . .  18
Appendix A.  Change Log . . . . . . . . . . . . . . . . . . . . .  19
A.1.  Changes from -02 to -03 . . . . . . . . . . . . . . . . . .  19
A.2.  Changes from -01 to -02 . . . . . . . . . . . . . . . . . .  19
A.3.  Changes from -00 to -01 . . . . . . . . . . . . . . . . . .  20
Author's Address  . . . . . . . . . . . . . . . . . . . . . . . .  20

1.  Introduction

[[There's a github repo for this -- issues and PRs are welcome there.]]

[RFC3552], Section 3 defines an "Internet Threat Model" which has been commonly used when developing Internet protocols.  That assumes that "the end-systems engaging in a protocol exchange have not themselves been compromised."  RFC 3552 is a formal part of the IETF's process as it is also BCP 72.

Since RFC 3552 was written, we have seen a greater emphasis on considering privacy, and [RFC6973] provides privacy guidance for protocol developers.  RFC 6973 is not a formal BCP, but appears to have been useful for protocol developers as it is referenced by 38 later RFCs at the time of writing [1].

BCP 188 [RFC7258] subsequently recognised pervasive monitoring as a particular kind of attack and has also been relatively widely referenced (39 RFCs at the time of writing [2]).  To date, perhaps most documents referencing BCP 188 have considered state-level or in-network adversaries.

In this document, we argue that we need to expand our threat model to acknowledge that today many applications are themselves rightly considered potential adversaries for at least some relevant actors.  However, those (good) actors cannot in general refuse to communicate and will with non-negligible probability encounter applications that are adversarial.

We also argue that not recognising this reality causes Internet protocol designs to sometimes fail to protect the systems and users who depend on them.

Discussion related to expanding our concept of threat model ought not to (but perhaps inevitably will) involve discussion of weakening how confidentiality is provided in Internet protocols.  Whilst it may superficially seem that encouraging in-network interception could help with detection of adversarial application behaviours, such a position is clearly mistaken once one notes that adding middleboxes that can themselves be adversarial cannot be a solution to the problem of possibly encountering adversarial code on the network.  It is also the case that the IETF has rough consensus to provide better, and not weaker, security and privacy, which includes confidentiality services.  The IETF has maintained that consensus over three decades, despite repeated (and repetitive;-) debates on the topic.  That consensus is represented in [RFC2804], BCP 200 [RFC1984] and, more latterly, the above-mentioned BCP 188, as well as in the numerous RFCs referencing those works.  The probability that discussion of expanding our threat model leads to a change in that rough consensus seems highly remote.
However, it is not clear if the IETF will reach rough consensus on a description of such an expanded threat model.  We argue that ignoring this aspect of deployed reality may not bode well for Internet protocol development.

Absent such an expanded threat model, we expect to see more of a mismatch between expectations and the deployment reality for some Internet protocols.

Version -02 of this Internet-Draft was a submission to the IAB's DEDR workshop [3].  We note that another author independently proposed changes to the Internet threat model for related, but different, reasons [I-D.arkko-arch-internet-threat-model], also as a submission to the DEDR workshop.

We are saddened by, and apologise for, the somewhat dystopian impression that this document may impart - hopefully, there's a bit of hope at the end;-)

2.  Examples of deliberate adversarial behaviour in applications

In this section we describe a few documented examples of deliberate adversarial behaviour by applications that could affect Internet protocol development.  The adversarial behaviours described below involve various kinds of attack, ranging from simple fraud to credential theft, surveillance and contributing to DDoS attacks.  This is not intended to be a comprehensive or complete survey, but to motivate us to consider deliberate adversarial behaviour by applications.

While we have these examples of deliberate adversarial behaviour, there are also many examples of application developers doing their best to protect the security and privacy of their users or customers.  That is just the same as the situation today where we need to consider in-network actors as potential adversaries despite the many examples of network operators who do act primarily in the best interests of their users.  So this section is not intended as a slur on application developers in general.

2.1.  Malware in curated application stores

Despite the best efforts of curators, so-called App-Stores frequently distribute malware of many kinds, and one recent study [curated] claims that simple obfuscation enables malware to avoid detection by even sophisticated operators.  Given the scale of these deployments, distribution of even a small percentage of malware-infected applications can affect a huge number of people.

2.2.  Virtual private networks (VPNs)

Virtual private networks (VPNs) are supposed to hide user traffic to various degrees depending on the particular technology chosen by the VPN provider.  However, not all VPNs do what they say, some for example misrepresenting the countries in which they provide vantage points [vpns].

2.3.  Compromised (home) networks

Devices that we might normally consider to be network devices, such as home routers, also run applications that can end up being adversarial, for example mounting DNS and DHCP attacks from home routers targeting other devices in the home.  One study [home] reports on a 2011 attack that affected 4.5 million DSL modems in Brazil.  The absence of software updates [RFC8240] has been a major cause of these issues, to the extent that it seems warranted to treat this as intentional behaviour by the device vendors who have chosen that path.

2.4.  Web browsers

Tracking of users in order to support advertising-based business models is ubiquitous on the Internet today.  HTTP header fields (such as cookies) are commonly used for such tracking, as are structures within the content of HTTP responses such as links to 1x1 pixel images and (ab)use of Javascript APIs offered by browsers [tracking].
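To make that mechanism concrete, the short sketch below (Python; the tracker domain and query parameters are invented purely for illustration) shows how a third party can embed a 1x1 image whose URL carries a stable per-user identifier - here a hash of an email address, as in the mail-tracking study cited in Section 2.6, though a random cookie value works just as well:

   # Illustrative sketch only: how a "tracking pixel" can tie a page view or
   # a mail rendering to a stable per-user identifier.  The tracker domain
   # and query parameters are invented for this example.
   import hashlib

   def tracking_pixel_html(user_email: str, context: str) -> str:
       # Any stable value will do; hashing the address gives the third party
       # a consistent identifier without displaying the address itself.
       uid = hashlib.sha256(user_email.strip().lower().encode()).hexdigest()
       return ('<img src="https://tracker.example/p.gif'
               f'?uid={uid}&ctx={context}" width="1" height="1" alt="">')

   print(tracking_pixel_html("alice@example.com", "newsletter-2019-07"))

Every fetch of such an image hands the third party the identifier, the context string and the client's IP address and request headers, which is all that is needed to correlate activity across sites or messages.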
While some people may be sanguine about this kind of tracking, others consider this behaviour unwelcome when or if they are informed that it happens [attitude], though the evidence here seems somewhat harder to interpret and many of the studies (that we have found to date) involve small numbers of users.  Historically, browsers have not made this kind of tracking visible and have enabled it by default, though some recent browser versions are starting to enable visibility and blocking of some kinds of tracking.  Browsers are also increasingly imposing more stringent requirements on plug-ins for varied security reasons.

2.5.  Web site policy deception

Many web sites today provide some form of privacy policy and terms of service that are known to be mostly unread [unread].  This implies that, legal fiction aside, users of those sites have not in reality agreed to the specific terms published, and so are highly exposed to being exploited by web sites; for example, [cambridge] is a recent well-publicised case where a service provider abused the data of 87 million users via a partnership.  While many web site operators claim that they care deeply about privacy, it seems prudent to assume that some (or most?) do not in fact care about user privacy, or at least not in ways with which many of their users would agree.  And of course, today's web sites are mostly fairly complex web applications rather than static sets of HTML files, so calling them "web sites" is perhaps a misnomer; considered as web applications that may, for example, link in advertising networks, it seems clear that many adversarial ones exist.

2.6.  Tracking bugs in mail

Some mail user agents (MUAs) render HTML content by default (with a subset not allowing that to be turned off, perhaps particularly on mobile devices) and thus enable the same kind of adversarial tracking seen on the web.  Attempts at such intentional tracking are also seen many times per day by email users - in one study [mailbug] the authors estimated that 62% of leakage to third parties was intentional, for example where leaked data included a hash of the recipient email address.

2.7.  Troll farms in online social networks

Online social network applications/platforms are well known to be vulnerable to troll farms, sometimes with tragic consequences [4], where organised/paid sets of users deliberately abuse the application platform for reasons invisible to a normal user.  For-profit companies building online social networks are well aware that subsets of their "normal" users are anything but.  In one US study [troll], sets of troll accounts were roughly equally distributed on both sides of a controversial discussion.  While Internet protocol designers do sometimes consider sybil attacks [sybil], arguably we have not provided mechanisms to handle such attacks sufficiently well, especially when they occur within walled gardens.
Equally, one can make the case that some online social networks, at some points in their evolution, appear to have prioritised counts of active users so highly that they have failed to invest sufficient effort in the detection of such troll farms.

2.8.  Smart televisions

There have been examples of so-called "smart" televisions spying on their owners without permission [5], and one survey of user attitudes [smarttv] found "broad agreement was that it is unacceptable for the data to be repurposed or shared", although the level of user understanding may be questionable.  What is clear, though, is that such devices generally have not provided controls that would allow their owners to make a meaningful decision as to whether or not they want to share such data.

2.9.  So-called Internet of things

Many so-called Internet of Things (IoT) devices ("so-called" as all devices were already things:-) have been found extremely deficient when their security and privacy aspects were analysed, for example children's toys [toys].  While in some cases this may be due to incompetence rather than deliberately adversarial behaviour, the levels of incompetence frequently seen imply that it is valid to consider such cases as not being accidental.

2.10.  Attacks leveraging compromised high-level DNS infrastructure

Recent attacks [6] against DNS infrastructure enable subsequent targeted attacks on specific application-layer sources or destinations.  The general method appears to be to attack DNS infrastructure, in these cases infrastructure that is towards the top of the DNS naming hierarchy and "far" from the presumed targets, in order to be able to fake DNS responses to a PKI, thereby acquiring TLS server certificates so as to subsequently attack TLS connections from clients to services (with clients directed to an attacker-owned server via additional fake DNS responses).

Attackers in these cases seem well resourced and patient, with "practice" runs over months and with attack durations being infrequent and short (e.g., one hour) before the attacker withdraws.

These are sophisticated multi-protocol attacks, where weaknesses related to deployment of one protocol (DNS) bootstrap attacks on another protocol (e.g., IMAP/TLS), via abuse of a third protocol (ACME), partly in order to capture user IMAP login credentials, so as to be able to harvest message store content from a real message store.

The fact that many mail clients regularly poll their message store means that a one-hour attack is quite likely to harvest many cleartext passwords or crackable password hashes.  The real IMAP server in such a case just sees fewer connections during the "live" attack, and some additional connections later.  Even heavy email users who might notice a slight gap in email arrivals would likely attribute it to some network or service outage.

In many of these cases the paucity of DNSSEC-signed zones (about 1% of existing zones) and the fact that many resolvers do not enforce DNSSEC validation (e.g., in some mobile operating systems) assisted the attackers.

It is also notable that some of the personnel dealing with these attacks against infrastructure entities are authors of RFCs and Internet-Drafts.  That we haven't provided protocol tools that better protect against these kinds of attack ought to hit "close to home" for the IETF.
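The resolver-side part of that validation gap is easy to probe.  The sketch below is a minimal check, assuming the third-party dnspython package and a validating recursive resolver (Google's 8.8.8.8 is used here only as an example): it sets the DNSSEC-OK bit on a query and reports whether the answer came back with the AD (authenticated data) flag, i.e. whether anyone on the path did any DNSSEC validation at all.

   # Minimal sketch, assuming the third-party "dnspython" package.  It asks a
   # recursive resolver for an A record with the DO bit set and reports
   # whether the resolver marked the answer as DNSSEC-authenticated (AD
   # flag).  A missing AD flag means either the zone is unsigned or the
   # resolver does not validate - exactly the gap described above.
   import dns.flags
   import dns.resolver

   def dnssec_authenticated(name: str, resolver_ip: str = "8.8.8.8") -> bool:
       res = dns.resolver.Resolver(configure=False)
       res.nameservers = [resolver_ip]
       res.use_edns(0, dns.flags.DO, 1232)   # request DNSSEC records
       answer = res.resolve(name, "A")
       return bool(answer.response.flags & dns.flags.AD)

   for zone in ("ietf.org", "example.com"):
       print(zone, dnssec_authenticated(zone))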
In terms of the overall argument being made here, the PKI and DNS interactions, and the last step in the "live" attack, all involve interaction with a deliberately adversarial application.  Later, use of acquired login credentials to harvest message store content involves an adversarial client application.  In all cases, a TLS implementation's PKI and TLS protocol code will see the fake endpoints as protocol-valid, even if, in the real world, they are clearly fake.  This appears to be a good argument that our current threat model is lacking in some respect(s), even as applied to our currently most important security protocol (TLS).

2.11.  BGP hijacking

There is a clear history of BGP hijacking [bgphijack] being used to ensure endpoints connect to adversarial applications.  As in the previous example, such hijacks can be used to trick a PKI into issuing a certificate for a fake entity.  Indeed, one study [hijackdet] used the emergence of new web server TLS key pairs during the event (detected via Internet-wide scans) as a distinguisher between one form of deliberate BGP hijacking and inadvertent route leaks.
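One ingredient of that detection approach is simple enough to sketch: record the certificate a name presents and flag any sudden change.  The fragment below uses only the Python standard library; the pin file, the probed host and the single vantage point are illustrative assumptions, and a real monitor would need multiple vantage points, CT cross-checks and some handling of legitimate certificate rotation.

   # Sketch: remember the SHA-256 fingerprint of the certificate a TLS server
   # presents and warn when it changes between runs.  Standard library only;
   # "pins.json" and the probed host are illustrative.  Note that legitimate
   # certificate or key rotation will also trigger the warning.
   import hashlib, json, socket, ssl
   from pathlib import Path

   PIN_FILE = Path("pins.json")

   def cert_fingerprint(host: str, port: int = 443) -> str:
       ctx = ssl.create_default_context()
       with socket.create_connection((host, port), timeout=10) as sock:
           with ctx.wrap_socket(sock, server_hostname=host) as tls:
               der = tls.getpeercert(binary_form=True)
       return hashlib.sha256(der).hexdigest()

   def check(host: str) -> None:
       pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
       seen = cert_fingerprint(host)
       if host in pins and pins[host] != seen:
           print(f"WARNING: {host} now presents a different certificate")
       pins[host] = seen
       PIN_FILE.write_text(json.dumps(pins, indent=2))

   check("www.ietf.org")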
3.  Inadvertent adversarial behaviours

Not all adversarial behaviour by applications is deliberate; some is likely due to various levels of carelessness (some quite understandable, others not) and/or due to erroneous assumptions about the environments in which those applications (now) run.  We very briefly list some such cases:

o  Application abuse for command and control, for example, use of IRC or apache logs for malware command and control [7]

o  Carelessly leaky buckets [8], for example, the many Amazon S3 leaks showing that careless admins can too easily cause application server data to become available to adversaries

o  Virtualisation exposing secrets, for example, Meltdown and Spectre [9] and similar side-channels

o  Compromised, badly-maintained web sites that, for example, have led to massive online databases of passwords [10]

o  Supply-chain attacks, for example, the Target attack [11] or malware within pre-installed applications on Android phones [bloatware]

o  Breaches of major service providers, that many of us might have assumed would be sufficiently capable to be the best large-scale "identity providers", for example:

   *  3 billion accounts: yahoo [12]

   *  "up to 600M" account passwords stored in clear: facebook [13]

   *  many millions at risk: telcos selling location data [14]

   *  50 million accounts: facebook [15]

   *  14 million accounts: verizon [16]

   *  "hundreds of thousands" of accounts: google [17]

   *  unknown numbers, some email content exposed: microsoft [18]

o  Breaches of smaller service providers: too many to enumerate, sadly

4.  Possible directions for an expanded threat model

As we believe useful conclusions in this space require community consensus, we won't offer definitive descriptions of an expanded threat model, but we will call out some potential directions that could be explored as one follow-up to the DEDR workshop and thereafter, if there is interest in this topic.

Before doing so, it is worth calling out one of the justifications for the RFC 3552 definition of the Internet threat model, which is that going beyond an assumption that protocol endpoints have not been compromised rapidly introduces complexity into the analysis.  We have plenty of experience that when security and privacy solutions add too much complexity and/or are seen to add risks without benefits, they tend not to be deployed.  One of the risks in expanding our threat model that we need to recognise is that the end result could be too complex, might not be applied during protocol design, or, worse, could lead to flawed risk analyses.  One of the constraints on work on an expanded threat model is therefore that the result has to remain usable by protocol designers who are not security or privacy experts.

4.1.  Develop a BCP for privacy considerations

It may be time for the IETF to develop a BCP for privacy considerations, possibly starting from [RFC6973].

4.2.  Consider the user perspective

[I-D.nottingham-for-the-users] argues that, in relevant cases where there are conflicting requirements, the "IETF considers end users as its highest priority concern."  Doing so seems consistent with the expanded threat model being argued for here, so may indicate that a BCP in that space could also be useful.

4.3.  Consider ABuse-cases as well as use-cases

Protocol developers and those implementing and deploying Internet technologies are typically most interested in a few specific use-cases for which they need solutions.  Expanding our threat model to include adversarial application behaviours [abusecases] seems likely to call for significant attention to be paid to potential abuses of whatever new or re-purposed technology is being considered.

4.4.  Re-consider protocol design "lore"

It could be that this discussion demonstrates that it is timely to reconsider some protocol design "lore", as is done for example in [I-D.iab-protocol-maintenance].  More specifically, protocol extensibility mechanisms may inadvertently create vectors for abuse-cases, given that designers cannot fully analyse their impact at the time a new protocol is defined or standardised.  One might conclude that a lack of extensibility could be a virtue for some new protocols, in contrast to earlier assumptions.  As pointed out by one commenter though, people can find ways to extend things regardless, if they feel the need.

4.5.  Isolation

Sophisticated users can sometimes deal with adversarial behaviours in applications by using different instances of those applications, for example, differently configured web browsers for use in different contexts.  Applications (including web browsers) and operating systems are also building in isolation via use of different processes or sandboxing.  Protocol artefacts that relate to uses of such isolation mechanisms might be worth considering.  To an extent, the IETF has in practice already recognised some of these issues as being in scope, e.g., when considering the linkability issues with mechanisms such as TLS session tickets or QUIC connection identifiers.

4.6.  Transparency

Certificate Transparency (CT) [RFC6962] has been an effective countermeasure for X.509 certificate mis-issuance, which used to be a known application-layer misbehaviour in the public web PKI.  CT can also help with post-facto detection of some infrastructure attacks where BGP or DNS weaknesses have been leveraged so that some certification authority is tricked into issuing a certificate for the wrong entity.

While the context in which CT operates is very constrained (essentially to the public CAs trusted by web browsers), similar approaches could perhaps be useful for other protocols or technologies.

In addition, legislative requirements such as those imposed by the GDPR for subject access to data [19] could lead to a desire to handle internal data structures and databases in ways that are reminiscent of CT, though clearly with significant authorisation being required and without the append-only nature of a CT log.
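To make the transparency direction concrete: the monitoring side of CT needs very little code, because RFC 6962 defines a small HTTP interface for logs.  The sketch below fetches a log's signed tree head (STH) via the get-sth endpoint defined in RFC 6962; the log base URL is a placeholder to be replaced with any operating log, and a real monitor would also verify the STH signature and fetch and inspect entries.

   # Sketch: fetch a Certificate Transparency log's signed tree head using
   # the /ct/v1/get-sth endpoint defined in RFC 6962.  Standard library only.
   # LOG_URL is a placeholder; substitute the base URL of any operating log.
   import json
   import urllib.request

   LOG_URL = "https://ct-log.example/ct/v1"   # placeholder, not a real log

   def get_sth(log_url: str = LOG_URL) -> dict:
       with urllib.request.urlopen(f"{log_url}/get-sth", timeout=10) as resp:
           return json.load(resp)

   sth = get_sth()
   # RFC 6962 defines these fields; a monitor would track tree_size over time
   # and verify tree_head_signature against the log's public key.
   print(sth["tree_size"], sth["timestamp"], sth["sha256_root_hash"])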
4.7.  Minimise

As recommended in [RFC6973], data minimisation and additional encryption are likely to be helpful - if applications don't ever see data, or a cleartext form of data, then they should have a harder time misbehaving.  Similarly, not adding new long-term identifiers, and not exposing existing ones, would seem helpful.

4.8.  Same-Origin Policy

The Same-Origin Policy (SOP) [RFC6454] perhaps already provides an example of how going beyond the RFC 3552 threat model can be useful.  Arguably, the existence of the SOP demonstrates that at least web browsers already consider the RFC 3552 model to be too limited.  (Clearly, differentiating between same and not-same origins implicitly assumes that some origins are not as trustworthy as others.)

4.9.  Greasing

The TLS protocol [RFC8446] now supports the use of GREASE [I-D.ietf-tls-grease] as a way to mitigate on-path ossification.  While this technique is not likely to prevent any deliberate misbehaviours, it may provide a proof-of-concept that network protocol mechanisms can have impact in this space, if we spend the time to try to analyse the incentives of the various parties.
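For concreteness, the reserved GREASE code points follow a simple pattern (0x0A0A, 0x1A1A, ..., 0xFAFA), and "greasing" simply means mixing one of them into whatever a client offers.  The sketch below only generates those values and injects one into a notional list of extension code points; it is an illustration of the idea, not a TLS implementation, and the sample extension list is an assumption made for the example.

   # Sketch: generate the reserved GREASE code points (0x0A0A, 0x1A1A, ...,
   # 0xFAFA) and inject one at a random position into a notional list of
   # offered TLS extension code points.  Peers that follow the rules ignore
   # unknown values; peers that choke on them exhibit the ossification that
   # GREASE is designed to flush out.  Illustrative only.
   import random

   GREASE_VALUES = [(n << 12) | 0x0A00 | (n << 4) | 0x0A for n in range(16)]

   def grease_extension_list(offered: list) -> list:
       ext = list(offered)
       ext.insert(random.randrange(len(ext) + 1),
                  random.choice(GREASE_VALUES))
       return ext

   real_extensions = [0x0000, 0x000A, 0x002B, 0x0033]   # example code points
   print([hex(v) for v in grease_extension_list(real_extensions)])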
4.10.  Generalise OAuth Threat Model

The OAuth threat model [RFC6819] provides an extensive list of threats and security considerations for those implementing and deploying OAuth version 2.0 [RFC6749].  That document is perhaps too detailed to serve as useful generic guidance but does go beyond the Internet threat model from RFC 3552; for example, it says:

   two of the three parties involved in the OAuth protocol may collude to mount an attack against the 3rd party.  For example, the client and authorization server may be under control of an attacker and collude to trick a user to gain access to resources.

It could be useful to attempt to derive a more abstract threat model from that RFC that considers threats in more generic multi-party contexts.

4.11.  One (or more) endpoint may be compromised

The quote from OAuth above also has another aspect - it considers the effect of compromised endpoints on those that are not compromised.  It may therefore be interesting to consider the consequences that would follow from this OLD/NEW change to RFC 3552:

OLD:

   In general, we assume that the end-systems engaging in a protocol exchange have not themselves been compromised.

NEW:

   In general, we assume that one of the protocol engines engaging in a protocol exchange has not been compromised at the run-time of the exchange.

4.12.  Look again at how well we're securing infrastructure

Some attacks (e.g., against DNS or routing infrastructure) appear to benefit from current infrastructure security mechanisms not being deployed, e.g., DNSSEC and RPKI.  In the case of DNSSEC, deployment is still minimal despite much time having elapsed.  This suggests a number of different possible avenues for investigation:

o  For any protocol dependent on infrastructure like DNS or BGP, we ought to analyse potential outcomes in the event that the relevant infrastructure has been compromised

o  Protocol designers perhaps ought to consider post-facto compromise detection mechanisms in the event that it is infeasible to mitigate attacks on infrastructure that is not under local control

o  Despite the sunk costs, it may be worth re-considering infrastructure security mechanisms that have not been deployed, and hence are ineffective.

4.13.  Consider recovery from attack as part of protocol design

Recent work on multiparty messaging security primitives [I-D.ietf-mls-architecture] considers "post-compromise security" as an inherent part of the design of that protocol.  Perhaps protocol designers ought generally to consider recovery from attack during protocol design - we do know that all widely used protocols will at some time be subject to successful attack, whether that is due to deployment or implementation error, or, as is less common, due to protocol design flaws.

4.14.  Don't think in terms of hosts

More and more, protocol endpoints are not being executed on what used to be understood as a host system.  The web and Javascript model clearly differs from traditional host models, but so do most server-side deployments these days, thanks to virtualisation.

As yet unpublished work on this topic within the IAB stackevo [20] programme appears to posit the same kind of thesis.  In the stackevo case, that work would presumably lead to some new definition of protocol endpoint, but (consensus on) such a definition may not be needed for an expanded threat model.  For this work, it may be sufficient to note that protocol endpoints can no longer be considered to be executing on a traditional host, to assume (at protocol design time) that all endpoints will be run in a virtualised environment where co-tenants and (sometimes) hypervisors are adversaries, and to then call for analysis of such scenarios.

5.  Conclusions

At this stage we don't think it appropriate to claim that any strong conclusion can be reached based on the above.  We do, however, claim that this is a topic that could be worth discussing as part of the follow-up to the DEDR workshop and more generally within the IETF.

6.  Security Considerations

This draft is all about security and privacy.

Encryption is one of the most effective tools in countering network-based attackers and will also have a role in protecting against adversarial applications.  However, today many existing tools for countering adversarial applications assume they can inspect network traffic to or from potentially adversarial applications.  These facts of course cause tensions (e.g., see [RFC8404]).  Expanding our threat model could possibly help reduce some of those tensions, if it leads to the development of protocols that make exploitation harder or more transparent for adversarial applications.

7.  IANA Considerations

There are no IANA considerations.

8.
Acknowledgements 639 With no implication that they agree with some or all of the above, 640 thanks to Jari Arkko, Carsten Bormann, Christian Huitema and Daniel 641 Kahn Gillmor for comments on an earlier version of the text. 643 Thanks to Jari Arkko, Ted Hardie and Brian Trammell for discussions 644 on this topic after they (but not the author) had attended the DEDR 645 workshop. 647 9. References 649 9.1. Informative References 651 [abusecases] 652 McDermott, J. and C. Fox, "Using abuse case models for 653 security requirements analysis", IEEE Annual Computer 654 Security Applications Conference (ACSAC'99) 1999, 1999, 655 . 657 [attitude] 658 Chanchary, F. and S. Chiasson, "User Perceptions of 659 Sharing, Advertising, and Tracking", Symposium on Usable 660 Privacy and Security (SOUPS) 2015, 2015, 661 . 664 [bgphijack] 665 Sermpezis, P., Kotronis, V., Dainotti, A., and X. 666 Dimitropoulos, "A survey among network operators on BGP 667 prefix hijacking", ACM SIGCOMM Computer Communication 668 Review 48, no. 1 (2018): 64-69., 2018, 669 . 671 [bloatware] 672 Gamba, G., Rashed, M., Razaghpanah, A., Tapiado, J., and 673 N. Vallina-Rodriguez, "An Analysis of Pre-installed 674 Android Software", arXiv preprint arXiv:1905.02713 675 (2019)., 2019, . 677 [cambridge] 678 Isaak, J. and M. Hanna, "User Data Privacy: Facebook, 679 Cambridge Analytica, and Privacy Protection", 680 Computer 51.8 (2018): 56-59, 2018, 681 . 684 [curated] Hammad, M., Garcia, J., and S. MaleK, "A large-scale 685 empirical study on the effects of code obfuscations on 686 Android apps and anti-malware products", ACM International 687 Conference on Software Engineering 2018, 2018, 688 . 691 [hijackdet] 692 Schlamp, J., Holz, R., Gasser, O., Korste, A., Jacquemart, 693 Q., Carle, G., and E. Biersack, "Investigating the nature 694 of routing anomalies: Closing in on subprefix hijacking 695 attacks", International Workshop on Traffic Monitoring 696 and Analysis, pp. 173-187. Springer, Cham, 2015., 2015, 697 . 700 [I-D.arkko-arch-internet-threat-model] 701 Arkko, J., "Changes in the Internet Threat Model", draft- 702 arkko-arch-internet-threat-model-00 (work in progress), 703 April 2019. 705 [I-D.iab-protocol-maintenance] 706 Thomson, M., "The Harmful Consequences of the Robustness 707 Principle", draft-iab-protocol-maintenance-03 (work in 708 progress), May 2019. 710 [I-D.ietf-mls-architecture] 711 Omara, E., Beurdouche, B., Rescorla, E., Inguva, S., Kwon, 712 A., and A. Duric, "The Messaging Layer Security (MLS) 713 Architecture", draft-ietf-mls-architecture-02 (work in 714 progress), March 2019. 716 [I-D.ietf-tls-grease] 717 Benjamin, D., "Applying GREASE to TLS Extensibility", 718 draft-ietf-tls-grease-02 (work in progress), January 2019. 720 [I-D.nottingham-for-the-users] 721 Nottingham, M., "The Internet is for End Users", draft- 722 nottingham-for-the-users-08 (work in progress), June 2019. 724 [mailbug] Englehardt, S., Han, J., and A. Narayanan, "I never signed 725 up for this! Privacy implications of email tracking", 726 Proceedings on Privacy Enhancing Technologies 2018.1 727 (2018): 109-126., 2018, 728 . 732 [RFC1984] IAB and IESG, "IAB and IESG Statement on Cryptographic 733 Technology and the Internet", BCP 200, RFC 1984, 734 DOI 10.17487/RFC1984, August 1996, 735 . 737 [RFC2804] IAB and IESG, "IETF Policy on Wiretapping", RFC 2804, 738 DOI 10.17487/RFC2804, May 2000, 739 . 741 [RFC3552] Rescorla, E. and B. 
Korver, "Guidelines for Writing RFC 742 Text on Security Considerations", BCP 72, RFC 3552, 743 DOI 10.17487/RFC3552, July 2003, 744 . 746 [RFC6454] Barth, A., "The Web Origin Concept", RFC 6454, 747 DOI 10.17487/RFC6454, December 2011, 748 . 750 [RFC6749] Hardt, D., Ed., "The OAuth 2.0 Authorization Framework", 751 RFC 6749, DOI 10.17487/RFC6749, October 2012, 752 . 754 [RFC6819] Lodderstedt, T., Ed., McGloin, M., and P. Hunt, "OAuth 2.0 755 Threat Model and Security Considerations", RFC 6819, 756 DOI 10.17487/RFC6819, January 2013, 757 . 759 [RFC6962] Laurie, B., Langley, A., and E. Kasper, "Certificate 760 Transparency", RFC 6962, DOI 10.17487/RFC6962, June 2013, 761 . 763 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 764 Morris, J., Hansen, M., and R. Smith, "Privacy 765 Considerations for Internet Protocols", RFC 6973, 766 DOI 10.17487/RFC6973, July 2013, 767 . 769 [RFC7258] Farrell, S. and H. Tschofenig, "Pervasive Monitoring Is an 770 Attack", BCP 188, RFC 7258, DOI 10.17487/RFC7258, May 771 2014, . 773 [RFC8240] Tschofenig, H. and S. Farrell, "Report from the Internet 774 of Things Software Update (IoTSU) Workshop 2016", 775 RFC 8240, DOI 10.17487/RFC8240, September 2017, 776 . 778 [RFC8404] Moriarty, K., Ed. and A. Morton, Ed., "Effects of 779 Pervasive Encryption on Operators", RFC 8404, 780 DOI 10.17487/RFC8404, July 2018, 781 . 783 [RFC8446] Rescorla, E., "The Transport Layer Security (TLS) Protocol 784 Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018, 785 . 787 [smarttv] Malkin, N., Bernd, J., Johnson, M., and S. Egelman, ""What 788 Can't Data Be Used For?" Privacy Expectations about Smart 789 TVs in the U.S.", European Workshop on Usable Security 790 (Euro USEC) 2018, 2018, . 794 [sybil] Viswanath, B., Post, A., Gummadi, K., and A. Mislove, "An 795 analysis of social network-based sybil defenses", ACM 796 SIGCOMM Computer Communication Review 41(4), 363-374. 797 2011, 2011, 798 . 801 [toys] Chu, G., Apthorpe, N., and N. Feamster, "Security and 802 Privacy Analyses of Internet of Things Childrens' Toys", 803 IEEE Internet of Things Journal 6.1 (2019): 978-985., 804 2019, . 806 [tracking] 807 Ermakova, T., Fabian, B., Bender, B., and K. Klimek, "Web 808 Tracking-A Literature Review on the State of Research", 809 Proceedings of the 51st Hawaii International Conference 810 on System Sciences, 2018, 811 . 814 [troll] Stewart, L., Arif, A., and K. Starbird, "Examining trolls 815 and polarization with a retweet network", ACM Workshop on 816 Misinformation and Misbehavior Mining on the Web 2018, 817 2018, . 820 [unread] Obar, J. and A. Oeldorf-Hirsch, "The biggest lie on the 821 internet: Ignoring the privacy policies and terms of 822 service policies of social networking services", 823 Information, Communication and Society (2018): 1-20, 2018, 824 . 826 [vpns] Khan, M., DeBlasio, J., Voelker, G., Snoeren, A., Kanich, 827 C., and N. Vallina-Rodrigue, "An empirical analysis of the 828 commercial VPN ecosystem", ACM Internet Measurement 829 Conference 2018 (pp. 443-456), 2018, 830 . 833 9.2. 
URIs 835 [1] https://datatracker.ietf.org/doc/rfc6973/referencedby/ 837 [2] https://datatracker.ietf.org/doc/rfc7258/referencedby/ 839 [3] https://www.iab.org/activities/workshops/dedr-workshop/ 841 [4] https://www.nytimes.com/2018/10/20/us/politics/saudi-image- 842 campaign-twitter.html 844 [5] https://www.welivesecurity.com/2013/11/22/lg-admits-that-its- 845 smart-tvs-have-been-watching-users-and-transmitting-data-without- 846 consent/ 848 [6] https://krebsonsecurity.com/2019/02/a-deep-dive-on-the-recent- 849 widespread-dns-hijacking-attacks/ 851 [7] https://security.stackexchange.com/questions/100577/creating- 852 botnet-cc-server-what-architecture-should-i-use-irc-http 854 [8] https://businessinsights.bitdefender.com/worst-amazon-breaches 856 [9] https://www.us-cert.gov/ncas/alerts/TA18-004A 858 [10] https://haveibeenpwned.com/Passwords 860 [11] https://www.zdnet.com/article/how-hackers-stole-millions-of- 861 credit-card-records-from-target/ 863 [12] https://www.wired.com/story/yahoo-breach-three-billion-accounts/ 865 [13] https://www.pcmag.com/news/367319/facebook-stored-up-to-600m- 866 user-passwords-in-plain-text 868 [14] https://www.zdnet.com/article/us-telcos-caught-selling-your- 869 location-data-again-senator-demands-new-laws/ 871 [15] https://www.cnet.com/news/facebook-breach-affected-50-million- 872 people/ 874 [16] https://www.zdnet.com/article/millions-verizon-customer-records- 875 israeli-data/ 877 [17] https://www.wsj.com/articles/google-exposed-user-data-feared- 878 repercussions-of-disclosing-to-public-1539017194 880 [18] https://motherboard.vice.com/en_us/article/ywyz3x/hackers-could- 881 read-your-hotmail-msn-outlook-microsoft-customer-support 883 [19] https://gdpr-info.eu/art-15-gdpr/ 885 [20] https://github.com/stackevo/endpoint-draft/blob/master/draft- 886 trammell-whats-an-endpoint.md 888 Appendix A. Change Log 890 This isn't gonna end up as an RFC, but may as well be tidy... 892 A.1. Changes from -02 to -03 894 o Integrated some changes based on discussion with Ted, Jari and 895 Brian. 897 A.2. Changes from -01 to -02 899 o Oops - got an RFC number wrong in reference 901 A.3. Changes from -00 to -01 903 o Made a bunch more edits and added more references 905 o I had lots of typos (as always:-) 907 o cabo: PR#1 fixed more typos and noted extensbility danger 909 Author's Address 911 Stephen Farrell 912 Trinity College Dublin 914 Email: stephen.farrell@cs.tcd.ie