idnits 2.17.1 draft-farrell-etm-02.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (May 1, 2019) is 1814 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- -- Looks like a reference, but probably isn't: '1' on line 622 -- Looks like a reference, but probably isn't: '2' on line 624 -- Looks like a reference, but probably isn't: '3' on line 626 -- Looks like a reference, but probably isn't: '4' on line 628 -- Looks like a reference, but probably isn't: '5' on line 631 -- Looks like a reference, but probably isn't: '6' on line 635 -- Looks like a reference, but probably isn't: '7' on line 638 -- Looks like a reference, but probably isn't: '8' on line 640 -- Looks like a reference, but probably isn't: '9' on line 642 -- Looks like a reference, but probably isn't: '10' on line 644 -- Looks like a reference, but probably isn't: '11' on line 647 -- Looks like a reference, but probably isn't: '12' on line 649 -- Looks like a reference, but probably isn't: '13' on line 652 -- Looks like a reference, but probably isn't: '14' on line 655 -- Looks like a reference, but probably isn't: '15' on line 658 -- Looks like a reference, but probably isn't: 
'16' on line 661 -- Looks like a reference, but probably isn't: '17' on line 664 -- Looks like a reference, but probably isn't: '18' on line 667 == Outdated reference: A later version (-01) exists of draft-arkko-arch-internet-threat-model-00 == Outdated reference: A later version (-12) exists of draft-iab-protocol-maintenance-02 == Outdated reference: A later version (-04) exists of draft-ietf-tls-grease-02 == Outdated reference: A later version (-09) exists of draft-nottingham-for-the-users-07 -- Obsolete informational reference (is this intentional?): RFC 6962 (Obsoleted by RFC 9162) Summary: 0 errors (**), 0 flaws (~~), 5 warnings (==), 20 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Network Working Group S. Farrell 3 Internet-Draft Trinity College Dublin 4 Intended status: Informational May 1, 2019 5 Expires: November 2, 2019 7 We're gonna need a bigger threat model 8 draft-farrell-etm-02 10 Abstract 12 We argue that an expanded threat model is needed for Internet 13 protocol development as protocol endpoints can no longer be 14 considered to be generally trustworthy for any general definition of 15 "trustworthy." 17 This draft will be a submission to the DEDR IAB workshop. 19 Status of This Memo 21 This Internet-Draft is submitted in full conformance with the 22 provisions of BCP 78 and BCP 79. 24 Internet-Drafts are working documents of the Internet Engineering 25 Task Force (IETF). Note that other groups may also distribute 26 working documents as Internet-Drafts. The list of current Internet- 27 Drafts is at https://datatracker.ietf.org/drafts/current/. 29 Internet-Drafts are draft documents valid for a maximum of six months 30 and may be updated, replaced, or obsoleted by other documents at any 31 time. 
It is inappropriate to use Internet-Drafts as reference 32 material or to cite them other than as "work in progress." 34 This Internet-Draft will expire on November 2, 2019. 36 Copyright Notice 38 Copyright (c) 2019 IETF Trust and the persons identified as the 39 document authors. All rights reserved. 41 This document is subject to BCP 78 and the IETF Trust's Legal 42 Provisions Relating to IETF Documents 43 (https://trustee.ietf.org/license-info) in effect on the date of 44 publication of this document. Please review these documents 45 carefully, as they describe your rights and restrictions with respect 46 to this document. Code Components extracted from this document must 47 include Simplified BSD License text as described in Section 4.e of 48 the Trust Legal Provisions and are provided without warranty as 49 described in the Simplified BSD License. 51 Table of Contents 53 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 54 2. Examples of deliberate adversarial behaviour in applications 4 55 2.1. Malware in curated application stores . . . . . . . . . . 4 56 2.2. Virtual private networks (VPNs) . . . . . . . . . . . . . 5 57 2.3. Compromised (home) networks . . . . . . . . . . . . . . . 5 58 2.4. Web browsers . . . . . . . . . . . . . . . . . . . . . . 5 59 2.5. Web site policy deception . . . . . . . . . . . . . . . . 5 60 2.6. Tracking bugs in mail . . . . . . . . . . . . . . . . . . 6 61 2.7. Troll farms in online social networks . . . . . . . . . . 6 62 2.8. Smart televisions . . . . . . . . . . . . . . . . . . . . 6 63 2.9. So-called Internet of things . . . . . . . . . . . . . . 7 64 3. Inadvertent adversarial behaviours . . . . . . . . . . . . . 7 65 4. Possible directions for an expanded threat model . . . . . . 8 66 4.1. Develop a BCP for privacy considerations . . . . . . . . 8 67 4.2. Consider the user perspective . . . . . . . . . . . . . . 8 68 4.3. Consider ABuse-cases as well as use-cases . . . . . . . . 8 69 4.4. 
Re-consider protocol design "lore" . . . . . . . . . . . 8 70 4.5. Isolation . . . . . . . . . . . . . . . . . . . . . . . . 9 71 4.6. Transparency . . . . . . . . . . . . . . . . . . . . . . 9 72 4.7. Minimise . . . . . . . . . . . . . . . . . . . . . . . . 9 73 4.8. Same-Origin Policy . . . . . . . . . . . . . . . . . . . 9 74 4.9. Greasing . . . . . . . . . . . . . . . . . . . . . . . . 10 75 5. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 10 76 6. Security Considerations . . . . . . . . . . . . . . . . . . . 10 77 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 10 78 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 10 79 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 10 80 9.1. Informative References . . . . . . . . . . . . . . . . . 10 81 9.2. URIs . . . . . . . . . . . . . . . . . . . . . . . . . . 14 82 Appendix A. Change Log . . . . . . . . . . . . . . . . . . . . . 15 83 A.1. Changes from -01 to -02 . . . . . . . . . . . . . . . . . 15 84 A.2. Changes from -00 to -01 . . . . . . . . . . . . . . . . . 15 85 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . 15 87 1. Introduction 89 [[There's a github repo for this -- issues and PRs are welcome there. ]] 92 [RFC3552], Section 3 defines an "Internet Threat Model" which has 93 been commonly used when developing Internet protocols. That assumes 94 that "the end-systems engaging in a protocol exchange have not 95 themselves been compromised." RFC 3552 is a formal part of the 96 IETF's process as it is also BCP 72. 98 Since RFC 3552 was written, we have seen a greater emphasis on 99 considering privacy and [RFC6973] provides privacy guidance for 100 protocol developers. RFC 6973 is not a formal BCP, but appears to 101 have been useful for protocol developers as it is referenced by 38 102 later RFCs at the time of writing [1].
104 BCP 188 [RFC7258] subsequently recognised pervasive monitoring as a 105 particular kind of attack and has also been relatively widely 106 referenced (39 RFCs at the time of writing [2]). To date, perhaps 107 most documents referencing BCP 188 have considered state-level or 108 in-network adversaries. 110 In this document, we argue that we need to expand our threat model to 111 acknowledge that today many applications are themselves rightly 112 considered potential adversaries for at least some relevant actors. 113 However, those (good) actors cannot in general refuse to communicate 114 and will with non-negligible probability encounter applications that 115 are adversarial. 117 We also argue that not recognising this reality causes Internet 118 protocol designs to sometimes fail to protect the systems and users 119 who depend on them. 121 Discussion related to expanding our concept of threat model ought not 122 (but perhaps inevitably will) involve discussion of weakening how 123 confidentiality is provided in Internet protocols. Whilst it may 124 superficially seem to be the case that encouraging in-network 125 interception could help with detection of adversarial application 126 behaviours, such a position is clearly mistaken once one notes that 127 adding middleboxes that can themselves be adversarial cannot be a 128 solution to the problem of possibly encountering adversarial code on 129 the network. It is also the case that the IETF has rough consensus 130 to provide better, and not weaker, security and privacy, which 131 includes confidentiality services. The IETF has maintained that 132 consensus over three decades, despite repeated (and repetitive;-) 133 debates on the topic. That consensus is represented in [RFC2804], 134 BCP 200 [RFC1984] and more latterly, the above-mentioned BCP 188 as 135 well as in the numerous RFCs referencing those works.
The 136 probability that discussion of expanding our threat model leads to a 137 change in that rough consensus seems highly remote. 139 However, it is not clear if the IETF will reach rough consensus on a 140 description of such an expanded threat model. We argue that ignoring 141 this aspect of deployed reality may not bode well for Internet 142 protocol development. 144 Absent such an expanded threat model, we expect to see more of a 145 mismatch between expectations and the deployment reality for some 146 Internet protocols. 148 This Internet-Draft is a submission to the IAB's DEDR workshop [3] 149 and is not intended to become an RFC. 151 We note that another author has independently proposed changes to the 152 Internet threat model for related, but different, reasons, 153 [I-D.arkko-arch-internet-threat-model] also as a submission to the 154 DEDR workshop. 156 We are saddened by, and apologise for, the somewhat dystopian 157 impression that this document may impart - hopefully, there's a bit 158 of hope at the end;-) 160 2. Examples of deliberate adversarial behaviour in applications 162 In this section we describe a few documented examples of deliberate 163 adversarial behaviour by applications that could affect Internet 164 protocol development. The adversarial behaviours described below 165 involve various kinds of attack, varying from simple fraud to 166 credential theft, surveillance and contributing to DDoS attacks. 167 This is not intended to be a comprehensive or complete survey, but 168 to motivate us to consider deliberate adversarial behaviour by 169 applications. 171 While we have these examples of deliberate adversarial behaviour, 172 there are also many examples of application developers doing their 173 best to protect the security and privacy of their users or customers.
174 That's just the same as the case today where we need to consider 175 in-network actors as potential adversaries despite the many examples of 176 network operators who do act primarily in the best interests of their 177 users. So this section is not intended as a slur on all or some 178 application developers. 180 2.1. Malware in curated application stores 182 Despite the best efforts of curators, so-called App-Stores frequently 183 distribute malware of many kinds and one recent study [curated] 184 claims that simple obfuscation enables malware to avoid detection by 185 even sophisticated operators. Given the scale of these deployments, 186 distribution of even a small percentage of malware-infected 187 applications can affect a huge number of people. 189 2.2. Virtual private networks (VPNs) 191 Virtual private networks (VPNs) are supposed to hide user traffic to 192 various degrees depending on the particular technology chosen by the 193 VPN provider. However, not all VPNs do what they say, some for 194 example misrepresenting the countries in which they provide vantage 195 points. [vpns] 197 2.3. Compromised (home) networks 199 What we normally might consider network devices such as home routers 200 do also run applications that can end up being adversarial, for 201 example running DNS and DHCP attacks from home routers targeting 202 other devices in the home. One study [home] reports on a 2011 attack 203 that affected 4.5 million DSL modems in Brazil. The absence of 204 software updates [RFC8240] has been a major cause of these issues and 205 rises to a level where it seems warranted to consider it intentional 206 behaviour by the device vendors who have chosen this path. 208 2.4. Web browsers 210 Tracking of users in order to support advertising-based business 211 models is ubiquitous on the Internet today.
HTTP header fields (such 212 as cookies) are commonly used for such tracking, as are structures 213 within the content of HTTP responses such as links to 1x1 pixel 214 images and (ab)use of JavaScript APIs offered by browsers. [tracking] 216 While some people may be sanguine about this kind of tracking, others 217 consider this behaviour unwelcome, when or if they are informed that 218 it happens, [attitude] though the evidence here seems somewhat harder 219 to interpret and many studies (that we have found to date) involve 220 small numbers of users. Historically, browsers have not made this 221 kind of tracking visible and have enabled it by default, though some 222 recent browser versions are starting to enable visibility and 223 blocking of some kinds of tracking. Browsers are also increasingly 224 imposing more stringent requirements on plug-ins for varied security 225 reasons. 227 2.5. Web site policy deception 229 Many web sites today provide some form of privacy policy and terms of 230 service that are known to be mostly unread. [unread] This implies 231 that, legal fiction aside, users of those sites have not in reality 232 agreed to the specific terms published and so users are therefore 233 highly exposed to being exploited by web sites; for example, 234 [cambridge] describes a recent well-publicised case where a service 235 provider abused the data of 87 million users via a partnership. While many 236 web site operators claim that they care deeply about privacy, it 237 seems prudent to assume that some (or most?) do not in fact care 238 about user privacy, or at least not in ways with which many of their 239 users would agree.
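To make concrete how little machinery the pixel-based tracking described in Section 2.4 requires, and how the hashed-recipient variant that [mailbug] reports for email works, here is a hypothetical sketch; the domain (tracker.example) and parameter name are invented for illustration and do not correspond to any real service:

```python
import hashlib

def tracking_pixel(recipient_email: str) -> str:
    """Build an HTML snippet for an invisible 1x1 tracking image.

    The address itself is not placed in the URL; a hash of it is,
    as [mailbug] reports some senders do.  That still lets the
    tracker link each fetch to a known address it has hashed.
    """
    # Normalise so the same mailbox always yields the same tag.
    tag = hashlib.sha256(recipient_email.strip().lower().encode()).hexdigest()
    url = "https://tracker.example/pixel.gif?u=" + tag
    # 1x1 pixel with no alt text: renders as nothing visible.
    return '<img src="%s" width="1" height="1" alt="">' % url

html = tracking_pixel("alice@example.com")
```

When the MUA or browser fetches the image, the tracker learns the fetch time, the client IP address, and (via the hash) which recipient opened the message, with nothing visible to the user.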
And of course, today's web sites are actually 240 mostly fairly complex web applications and are no longer static sets 241 of HTML files, so calling these "web sites" is perhaps a misnomer; 242 considered as web applications that may, for example, link in 243 advertising networks, it seems clear that many exist that are 244 adversarial. 246 2.6. Tracking bugs in mail 248 Some mail user agents (MUAs) render HTML content by default (with a 249 subset not allowing that to be turned off, perhaps particularly on 250 mobile devices) and thus enable the same kind of adversarial tracking 251 seen on the web. Attempts at such intentional tracking are also seen 252 many times per day by email users - in one study [mailbug] the 253 authors estimated that 62% of leakage to third parties was 254 intentional, for example if leaked data included a hash of the 255 recipient email address. 257 2.7. Troll farms in online social networks 259 Online social network applications/platforms are well-known to be 260 vulnerable to troll farms, sometimes with tragic consequences [4], 261 where organised/paid sets of users deliberately abuse the application 262 platform for reasons invisible to a normal user. For-profit 263 companies building online social networks are well aware that subsets 264 of their "normal" users are anything but. In one US study, [troll] 265 sets of troll accounts were roughly equally distributed on both sides 266 of a controversial discussion. While Internet protocol designers do 267 sometimes consider Sybil attacks [sybil], arguably we have not 268 provided mechanisms to handle such attacks sufficiently well, 269 especially when they occur within walled-gardens. Equally, one can 270 make the case that some online social networks, at some points in 271 their evolution, appear to have prioritised counts of active users so 272 highly that they have failed to invest sufficient effort for 273 detection of such troll farms. 275 2.8.
Smart televisions 277 There have been examples of so-called "smart" televisions spying on 278 their owners without permission [5] and one survey of user attitudes 279 [smarttv] found "broad agreement was that it is unacceptable for the 280 data to be repurposed or shared" although the level of user 281 understanding may be questionable. What is clear, though, is that such 282 devices generally have not provided controls for their owners that 283 would allow them to meaningfully make a decision as to whether or not 284 they want to share such data. 286 2.9. So-called Internet of things 288 Many so-called Internet of Things (IoT) devices ("so-called" as all 289 devices were already things:-) have been found to be extremely deficient 290 when their security and privacy aspects were analysed, for example 291 children's toys. [toys] While in some cases this may be due to 292 incompetence rather than being deliberately adversarial behaviour, 293 the levels of incompetence frequently seen imply that it is valid to 294 consider such cases as not being accidental. 296 3. Inadvertent adversarial behaviours 298 Not all adversarial behaviour by applications is deliberate; some is 299 likely due to various levels of carelessness (some quite 300 understandable, others not) and/or due to erroneous assumptions about 301 the environments in which those applications (now) run.
302 We very briefly list some such cases: 304 o Application abuse for command and control, for example, use of IRC 305 or Apache logs for malware command and control [6] 307 o Carelessly leaky buckets [7], for example, lots of Amazon S3 leaks 308 showing that careless admins can too easily cause application 309 server data to become available to adversaries 311 o Virtualisation exposing secrets, for example, Meltdown and Spectre 312 [8] and similar side-channels 314 o Compromised badly-maintained web sites that have, for example, led 315 to massive online databases of passwords [9] 317 o Supply-chain attacks, for example, the Target attack [10] 319 o Breaches of major service providers that many of us might have 320 assumed would be sufficiently capable to be the best large-scale 321 "Identity providers", for example: 323 * 3 billion accounts: Yahoo [11] 325 * "up to 600M" account passwords stored in clear: Facebook [12] 327 * many millions at risk: telcos selling location data [13] 329 * 50 million accounts: Facebook [14] 331 * 14 million accounts: Verizon [15] 333 * "hundreds of thousands" of accounts: Google [16] 334 * unknown numbers, some email content exposed: Microsoft [17] 336 o Breaches of smaller service providers: too many to enumerate, 337 sadly 339 4. Possible directions for an expanded threat model 341 As we believe useful conclusions in this space require community 342 consensus, we won't offer definitive descriptions of an expanded 343 threat model but we will call out some potential directions that 344 could be explored at the DEDR workshop and thereafter, if there is 345 interest in this topic. 347 4.1. Develop a BCP for privacy considerations 349 It may be time for the IETF to develop a BCP for privacy 350 considerations, possibly starting from [RFC6973]. 352 4.2.
Consider the user perspective 354 [I-D.nottingham-for-the-users] argues that, in relevant cases where 355 there are conflicting requirements, the "IETF considers end users as 356 its highest priority concern." Doing so seems consistent with the 357 expanded threat model being argued for here, so may indicate that a 358 BCP in that space could also be useful. 360 4.3. Consider ABuse-cases as well as use-cases 362 Protocol developers and those implementing and deploying Internet 363 technologies are typically most interested in a few specific use- 364 cases for which they need solutions. Expanding our threat model to 365 include adversarial application behaviours [abusecases] seems likely 366 to call for significant attention to be paid to potential abuses of 367 whatever new or re-purposed technology is being considered. 369 4.4. Re-consider protocol design "lore" 371 It could be that this discussion demonstrates that it is timely to 372 reconsider some protocol design "lore" as for example is done in 373 [I-D.iab-protocol-maintenance]. More specifically, protocol 374 extensibility mechanisms may inadvertently create vectors for abuse- 375 cases, given that designers cannot fully analyse their impact at the 376 time a new protocol is defined or standardised. One might conclude 377 that a lack of extensibility could be a virtue for some new 378 protocols, in contrast to earlier assumptions. As pointed out by one 379 commenter though, people can find ways to extend things regardless, 380 if they feel the need. 382 4.5. Isolation 384 Sophisticated users can sometimes deal with adversarial behaviours in 385 applications by using different instances of those applications, for 386 example, differently configured web browsers for use in different 387 contexts. Applications (including web browsers) and operating 388 systems are also building in isolation via use of different processes 389 or sandboxing. 
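Isolation of this kind ultimately rests on origin comparisons of the sort the Same-Origin Policy [RFC6454] (discussed in Section 4.8) formalises. As a minimal, hypothetical sketch of the scheme/host/port comparison, assuming default ports for http and https only (RFC 6454's full algorithm covers more cases, such as globally unique origins for non-hierarchical URIs):

```python
from urllib.parse import urlsplit

# Assumed default ports for the only two schemes this sketch handles.
DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url: str):
    """Return the (scheme, host, port) triple used for origin comparison."""
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    # An explicit port wins; otherwise fall back to the scheme default.
    port = parts.port if parts.port is not None else DEFAULT_PORTS.get(scheme)
    return (scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    """Two URLs are same-origin only if all three components match."""
    return origin(a) == origin(b)

same_origin("https://example.com/a", "https://example.com:443/b")  # True
same_origin("https://example.com/a", "http://example.com/a")       # False
```

The point of the triple, for this document's argument, is that the browser treats two origins as mutually untrustworthy by default, which is already a step beyond the RFC 3552 model of trustworthy endpoints.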
Protocol artefacts that relate to uses of such 390 isolation mechanisms might be worth considering. To an extent, the 391 IETF has in practice already recognised some of these issues as being 392 in-scope, e.g. when considering the linkability issues with 393 mechanisms such as TLS session tickets or QUIC connection 394 identifiers. 396 4.6. Transparency 398 Certificate transparency (CT) [RFC6962] has been an effective 399 countermeasure for X.509 certificate mis-issuance, which used to be a 400 known application layer misbehaviour in the public web PKI. While 401 the context in which CT operates is very constrained (essentially to 402 the public CAs trusted by web browsers), similar approaches could be 403 useful for other protocols or technologies. 405 In addition, legislative requirements such as those imposed by the 406 GDPR for subject access to data [18] could lead to a desire to handle 407 internal data structures and databases in ways that are reminiscent 408 of CT, though clearly with significant authorisation being required 409 and without the append-only nature of a CT log. 411 4.7. Minimise 413 As recommended in [RFC6973], data minimisation and additional 414 encryption are likely to be helpful - if applications don't ever see 415 data, or a cleartext form of data, then they should have a harder 416 time misbehaving. Similarly, not adding new long-term identifiers, 417 and not exposing existing ones, would seem helpful. 419 4.8. Same-Origin Policy 421 The Same-Origin Policy (SOP) [RFC6454] perhaps already provides an 422 example of how going beyond the RFC 3552 threat model can be useful. 423 Arguably, the existence of the SOP demonstrates that at least web 424 browsers already consider the 3552 model as being too limited. 425 (Clearly, differentiating between same and not-same origins 426 implicitly assumes that some origins are not as trustworthy as 427 others.) 429 4.9.
Greasing 431 The TLS protocol [RFC8446] now supports the use of GREASE 432 [I-D.ietf-tls-grease] as a way to mitigate on-path ossification. 433 While this technique is not likely to prevent any deliberate 434 misbehaviours, it may provide a proof-of-concept that network 435 protocol mechanisms can have impact in this space, if we spend the 436 time to try to analyse the incentives of the various parties. 438 5. Conclusions 440 At this stage we don't think it appropriate to claim that any strong 441 conclusion can be reached based on the above. We do, however, claim 442 that this is a topic that could be worth discussion at the DEDR 443 workshop and elsewhere. 445 6. Security Considerations 447 This draft is all about security and privacy. 449 Encryption is one of the most effective tools in countering network- 450 based attackers and will also have a role in protecting against 451 adversarial applications. However, today many existing tools for 452 countering adversarial applications assume they can inspect network 453 traffic to or from potentially adversarial applications. These facts 454 of course cause tensions (e.g. see [RFC8404]). Expanding our threat 455 model could possibly help reduce some of those tensions, if it leads 456 to the development of protocols that make exploitation harder or more 457 transparent for adversarial applications. 459 7. IANA Considerations 461 There are no IANA considerations. 463 8. Acknowledgements 465 We'll happily ack anyone who's interested enough to read and comment 466 on this. With no implication that they agree with some or all of the 467 above, thanks to Jari Arkko, Carsten Bormann, Christian Huitema and 468 Daniel Kahn Gillmor for comments on an earlier version of the text. 470 9. References 472 9.1. Informative References 474 [abusecases] 475 McDermott, J. and C. Fox, "Using abuse case models for 476 security requirements analysis", IEEE Annual Computer 477 Security Applications Conference (ACSAC'99) 1999, 1999, 478 .
480 [attitude] 481 Chanchary, F. and S. Chiasson, "User Perceptions of 482 Sharing, Advertising, and Tracking", Symposium on Usable 483 Privacy and Security (SOUPS) 2015, 2015, 484 . 487 [cambridge] 488 Isaak, J. and M. Hanna, "User Data Privacy: Facebook, 489 Cambridge Analytica, and Privacy Protection", 490 Computer 51.8 (2018): 56-59, 2018, 491 . 494 [curated] Hammad, M., Garcia, J., and S. MaleK, "A large-scale 495 empirical study on the effects of code obfuscations on 496 Android apps and anti-malware products", ACM International 497 Conference on Software Engineering 2018, 2018, 498 . 501 [I-D.arkko-arch-internet-threat-model] 502 Arkko, J., "Changes in the Internet Threat Model", draft- 503 arkko-arch-internet-threat-model-00 (work in progress), 504 April 2019. 506 [I-D.iab-protocol-maintenance] 507 Thomson, M., "The Harmful Consequences of the Robustness 508 Principle", draft-iab-protocol-maintenance-02 (work in 509 progress), March 2019. 511 [I-D.ietf-tls-grease] 512 Benjamin, D., "Applying GREASE to TLS Extensibility", 513 draft-ietf-tls-grease-02 (work in progress), January 2019. 515 [I-D.nottingham-for-the-users] 516 Nottingham, M., "The Internet is for End Users", draft- 517 nottingham-for-the-users-07 (work in progress), March 518 2019. 520 [mailbug] Englehardt, S., Han, J., and A. Narayanan, "I never signed 521 up for this! Privacy implications of email tracking", 522 Proceedings on Privacy Enhancing Technologies 2018.1 523 (2018): 109-126., 2018, 524 . 528 [RFC1984] IAB and IESG, "IAB and IESG Statement on Cryptographic 529 Technology and the Internet", BCP 200, RFC 1984, 530 DOI 10.17487/RFC1984, August 1996, 531 . 533 [RFC2804] IAB and IESG, "IETF Policy on Wiretapping", RFC 2804, 534 DOI 10.17487/RFC2804, May 2000, 535 . 537 [RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC 538 Text on Security Considerations", BCP 72, RFC 3552, 539 DOI 10.17487/RFC3552, July 2003, 540 . 
542 [RFC6454] Barth, A., "The Web Origin Concept", RFC 6454, 543 DOI 10.17487/RFC6454, December 2011, 544 . 546 [RFC6962] Laurie, B., Langley, A., and E. Kasper, "Certificate 547 Transparency", RFC 6962, DOI 10.17487/RFC6962, June 2013, 548 . 550 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 551 Morris, J., Hansen, M., and R. Smith, "Privacy 552 Considerations for Internet Protocols", RFC 6973, 553 DOI 10.17487/RFC6973, July 2013, 554 . 556 [RFC7258] Farrell, S. and H. Tschofenig, "Pervasive Monitoring Is an 557 Attack", BCP 188, RFC 7258, DOI 10.17487/RFC7258, May 558 2014, . 560 [RFC8240] Tschofenig, H. and S. Farrell, "Report from the Internet 561 of Things Software Update (IoTSU) Workshop 2016", 562 RFC 8240, DOI 10.17487/RFC8240, September 2017, 563 . 565 [RFC8404] Moriarty, K., Ed. and A. Morton, Ed., "Effects of 566 Pervasive Encryption on Operators", RFC 8404, 567 DOI 10.17487/RFC8404, July 2018, 568 . 570 [RFC8446] Rescorla, E., "The Transport Layer Security (TLS) Protocol 571 Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018, 572 . 574 [smarttv] Malkin, N., Bernd, J., Johnson, M., and S. Egelman, ""What 575 Can't Data Be Used For?" Privacy Expectations about Smart 576 TVs in the U.S.", European Workshop on Usable Security 577 (Euro USEC) 2018, 2018, . 581 [sybil] Viswanath, B., Post, A., Gummadi, K., and A. Mislove, "An 582 analysis of social network-based sybil defenses", ACM 583 SIGCOMM Computer Communication Review 41(4), 363-374. 584 2011, 2011, 585 . 588 [toys] Chu, G., Apthorpe, N., and N. Feamster, "Security and 589 Privacy Analyses of Internet of Things Childrens' Toys", 590 IEEE Internet of Things Journal 6.1 (2019): 978-985., 591 2019, . 593 [tracking] 594 Ermakova, T., Fabian, B., Bender, B., and K. Klimek, "Web 595 Tracking-A Literature Review on the State of Research", 596 Proceedings of the 51st Hawaii International Conference 597 on System Sciences, 2018, 598 . 601 [troll] Stewart, L., Arif, A., and K. 
Starbird, "Examining trolls 602 and polarization with a retweet network", ACM Workshop on 603 Misinformation and Misbehavior Mining on the Web 2018, 604 2018, . 607 [unread] Obar, J. and A. Oeldorf-Hirsch, "The biggest lie on the 608 internet: Ignoring the privacy policies and terms of 609 service policies of social networking services", 610 Information, Communication and Society (2018): 1-20, 2018, 611 . 613 [vpns] Khan, M., DeBlasio, J., Voelker, G., Snoeren, A., Kanich, 614 C., and N. Vallina-Rodrigue, "An empirical analysis of the 615 commercial VPN ecosystem", ACM Internet Measurement 616 Conference 2018 (pp. 443-456), 2018, 617 . 620 9.2. URIs 622 [1] https://datatracker.ietf.org/doc/rfc6973/referencedby/ 624 [2] https://datatracker.ietf.org/doc/rfc7258/referencedby/ 626 [3] https://www.iab.org/activities/workshops/dedr-workshop/ 628 [4] https://www.nytimes.com/2018/10/20/us/politics/saudi-image- 629 campaign-twitter.html 631 [5] https://www.welivesecurity.com/2013/11/22/lg-admits-that-its- 632 smart-tvs-have-been-watching-users-and-transmitting-data-without- 633 consent/ 635 [6] https://security.stackexchange.com/questions/100577/creating- 636 botnet-cc-server-what-architecture-should-i-use-irc-http 638 [7] https://businessinsights.bitdefender.com/worst-amazon-breaches 640 [8] https://www.us-cert.gov/ncas/alerts/TA18-004A 642 [9] https://haveibeenpwned.com/Passwords 644 [10] https://www.zdnet.com/article/how-hackers-stole-millions-of- 645 credit-card-records-from-target/ 647 [11] https://www.wired.com/story/yahoo-breach-three-billion-accounts/ 649 [12] https://www.pcmag.com/news/367319/facebook-stored-up-to-600m- 650 user-passwords-in-plain-text 652 [13] https://www.zdnet.com/article/us-telcos-caught-selling-your- 653 location-data-again-senator-demands-new-laws/ 655 [14] https://www.cnet.com/news/facebook-breach-affected-50-million- 656 people/ 658 [15] https://www.zdnet.com/article/millions-verizon-customer-records- 659 israeli-data/ 661 [16] 
https://www.wsj.com/articles/google-exposed-user-data-feared- 662 repercussions-of-disclosing-to-public-1539017194 664 [17] https://motherboard.vice.com/en_us/article/ywyz3x/hackers-could- 665 read-your-hotmail-msn-outlook-microsoft-customer-support 667 [18] https://gdpr-info.eu/art-15-gdpr/ 669 Appendix A. Change Log 671 This isn't gonna end up as an RFC, but may as well be tidy... 673 A.1. Changes from -01 to -02 675 o Oops - got an RFC number wrong in reference 677 A.2. Changes from -00 to -01 679 o Made a bunch more edits and added more references 681 o I had lots of typos (as always:-) 683 o cabo: PR#1 fixed more typos and noted extensibility danger 685 Author's Address 687 Stephen Farrell 688 Trinity College Dublin 690 Email: stephen.farrell@cs.tcd.ie