Network Working Group                                          R. Barnes
Internet-Draft                                               B. Schneier
Intended status: Informational                               C. Jennings
Expires: January 2, 2015                                       T. Hardie
                                                            July 01, 2014

        Pervasive Attack: A Threat Model and Problem Statement
                   draft-barnes-pervasive-problem-01

Abstract

Documents published in 2013 have revealed several classes of "pervasive" attack on Internet communications. In this document, we review the main attacks that have been published, and develop a threat model that describes these pervasive attacks. Based on this threat model, we discuss the techniques that can be employed in Internet protocol design to increase the protocols' robustness to pervasive attacks.

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 2, 2015.

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

1. Introduction

Starting in June 2013, documents released to the press by Edward Snowden have revealed several operations undertaken by intelligence agencies to exploit Internet communications for intelligence purposes. These attacks were largely based on protocol vulnerabilities that were already known to exist. The attacks were nonetheless striking in their pervasive nature, both in terms of the amount of Internet communications targeted, and in terms of the diversity of attack techniques employed.

To ensure that the Internet can be trusted by users, it is necessary for the Internet technical community to address the vulnerabilities exploited in these attacks [I-D.farrell-perpass-attack]. The goal of this document is to describe more precisely the threats posed by these pervasive attacks, and based on those threats, lay out the problems that need to be solved in order to secure the Internet in the face of those threats.

The remainder of this document is structured as follows. In Section 3, we provide a brief summary of the attacks that have been disclosed. Section 4 describes a threat model based on these attacks, focusing on classes of attack that have not been a focus of Internet engineering to date. Section 5 provides some high-level guidance on how Internet protocols can defend against the threats described here.

2. Terminology

This document makes extensive use of standard security terminology; see, for example, [RFC4949]. In addition, we use a few terms that are specific to the attacks discussed here:

Pervasive Attack: An attack on Internet protocols that makes use of access at a large number of points in the network, or otherwise provides the attacker with access to a large amount of Internet traffic.

Collaborator: An entity that is a legitimate participant in a protocol, but who provides information about that interaction (keys or data) to an attacker.

Key Exfiltration: The transmission of keying material for an encrypted communication from a collaborator to an attacker.

Content Exfiltration: The transmission of the content of a communication from a collaborator to an attacker.

Unwitting Collaborator: A collaborator that provides information to the attacker not deliberately, but because the attacker has exploited some technology used by the collaborator.

3. Reported Instances of Large-Scale Attacks

Through recent revelations of sensitive documents in several media outlets, the Internet community has been made aware of several intelligence activities conducted by US and UK national intelligence agencies, particularly the US National Security Agency (NSA) and the UK Government Communications Headquarters (GCHQ). These documents have revealed the methods that these agencies use to attack Internet applications and obtain sensitive user information.
These documents suggest the following types of attacks have occurred:

o Large-scale passive collection of Internet traffic [pass1][pass2][pass3][pass4]. For example:

   * The NSA XKEYSCORE system accesses data from multiple access points and searches for "selectors" such as email addresses, at the scale of tens of terabytes of data per day.

   * The GCHQ Tempora system appears to have access to around 1,500 major cables passing through the UK.

   * The NSA MUSCULAR program tapped cables between data centers belonging to major service providers.

   * Several programs appear to perform wide-scale collection of cookies in web traffic and location data from location-aware portable devices such as smartphones.

o Decryption of TLS-protected Internet sessions [dec1][dec2][dec3]. For example, the NSA BULLRUN project appears to have had a budget of around $250M per year to undermine encryption through multiple approaches.

o Insertion of NSA devices as a man in the middle of Internet transactions [TOR1][TOR2]. For example, the NSA QUANTUM system appears to use several different techniques to hijack HTTP connections, ranging from DNS response injection to HTTP 302 redirects.

o Direct acquisition of bulk data and metadata from service providers [dir1][dir2][dir3]. For example, the NSA PRISM program provides the agency with access to many types of user data (e.g., email, chat, VoIP).

o Use of implants (covert modifications or malware) to undermine security and anonymity features [dec2][TOR1][TOR2]. For example:

   * NSA appears to use the QUANTUM man-in-the-middle system to direct users to a FOXACID server, which delivers an implant that makes the TOR anonymity service less effective.

   * The BULLRUN program mentioned above includes the addition of covert modifications to software as one means to undermine encryption.

   * There is also some suspicion that NSA modifications to the DUAL_EC_DRBG random number generator were made to ensure that keys generated using that generator could be predicted by NSA. These suspicions have been reinforced by reports that RSA Security was paid roughly $10M to make DUAL_EC_DRBG the default in their products.

We use the term "pervasive attack" to collectively describe these operations. The term "pervasive" is used because the attacks are designed to gather as much data as possible and to apply selective analysis on targets after the fact. This means that all, or nearly all, Internet communications are targets for these attacks. To achieve this scale, the attacks are physically pervasive; they affect a large number of Internet communications. They are pervasive in content, consuming and exploiting any information revealed by the protocol. And they are pervasive in technology, exploiting many different vulnerabilities in many different protocols.

It is important to note that although the attacks mentioned above were executed by NSA and GCHQ, there are many other organizations that can mount pervasive attacks. Because of the resources required to achieve pervasive scale, pervasive attacks are most commonly undertaken by nation-state actors.
For example, the Chinese Internet filtering system known as the "Great Firewall of China" uses several techniques that are similar to the QUANTUM program, and which have a high degree of pervasiveness with regard to the Internet in China.

4. Threat Model

Pervasive surveillance aims to collect information across a large number of Internet communications, analyzing the collected communications to identify information of interest within individual communications or implied by correlated communications. This analysis sometimes benefits from decryption of encrypted communications and deanonymization of anonymized communications. As a result, these attackers desire both access to the bulk of Internet traffic and access to the keying material required to decrypt any traffic which has been encrypted (though the presence of a communication and the fact that it is encrypted may both be inputs to an analysis, even if the attacker cannot decrypt the communication).

The attacks listed above highlight new avenues both for access to traffic and for access to relevant encryption keys. They further indicate that the scale of surveillance is sufficient to provide a general capability to cross-correlate communications, a threat not previously thought to be relevant at the scale of all Internet communications.

4.1. Attacker Capabilities

+--------------------------+-------------------------------------+
| Attack Class             | Capability                          |
+--------------------------+-------------------------------------+
| Passive                  | Capture data in transit             |
|                          |                                     |
| Active                   | Manipulate / inject data in transit |
|                          |                                     |
| Static key exfiltration  | Obtain key material once / rarely   |
|                          |                                     |
| Dynamic key exfiltration | Obtain per-session key material     |
|                          |                                     |
| Content exfiltration     | Access data at rest                 |
+--------------------------+-------------------------------------+

Security analyses of Internet protocols commonly consider two classes of attacker: "passive attackers", who can simply listen in on communications as they transit the network, and "active attackers", who can modify or delete packets in addition to simply collecting them.

In the context of pervasive attack, these attack classes take on an even greater significance. In the past, these attackers were often assumed to operate near the edge of the network, where attacks can be simpler. For example, in some LANs, it is simple for any node to engage in passive listening to other nodes' traffic or inject packets to accomplish active attacks. In the pervasive attack case, however, both passive and active attacks are undertaken closer to the core of the network, greatly expanding the scope and capability of the attacker.

A passive attacker with access to a large portion of the Internet can analyze collected traffic to create a much more detailed view of user behavior than an attacker that collects at a single point. Even the usual claim that encryption defeats passive attackers is weakened, since a pervasive passive attacker can examine correlations over large numbers of sessions, e.g., pairing encrypted sessions with unencrypted sessions from the same host. The reports on the NSA XKEYSCORE system suggest that it is an example of such an attacker.

A pervasive active attacker likewise has capabilities beyond those of a localized active attacker.
Active attacks are often limited by network topology, for example by a requirement that the attacker be able to see a targeted session as well as inject packets into it. A pervasive active attacker with multiple accesses at core points of the Internet is able to overcome these topological limitations and apply attacks over a much broader scope. Being positioned in the core of the network rather than the edge can also enable a pervasive active attacker to reroute targeted traffic. Pervasive active attackers can also benefit from pervasive passive collection to identify vulnerable hosts.

While not directly related to pervasiveness, attackers that are in a position to mount a pervasive active attack are also often in a position to subvert authentication, the traditional response to active attack. Authentication in the Internet is often achieved via trusted third-party authorities such as the Certificate Authorities (CAs) that provide web sites with authentication credentials. An attacker with sufficient resources for pervasive attack may also be able to induce an authority to grant credentials for an identity of the attacker's choosing. If the parties to a communication will trust multiple authorities to certify a specific identity, this attack may be mounted by suborning any one of the authorities (the proverbial "weakest link"). Subversion of authorities in this way can allow an active attack to succeed in spite of an authentication check.

Beyond these two classes (active and passive), reports on the BULLRUN effort to defeat encryption and the PRISM effort to obtain data from service providers suggest three more classes of attack:

o Static key exfiltration

o Dynamic key exfiltration

o Content exfiltration

These attacks all rely on a "collaborator" endpoint providing the attacker with some information, either keys or data. These attacks have not traditionally been considered in security analyses of protocols, since they happen outside of the protocol.

The term "key exfiltration" refers to the transfer of keying material for an encrypted communication from the collaborator to the attacker. By "static", we mean that the transfer of keys happens once, or rarely, typically involving a long-lived key. For example, this case would cover a web site operator that provides the private key corresponding to its HTTPS certificate to an intelligence agency.

"Dynamic" key exfiltration, by contrast, refers to attacks in which the collaborator delivers keying material to the attacker frequently, e.g., on a per-session basis. This does not necessarily imply frequent communications with the attacker; the transfer of keying material may be virtual. For example, if an endpoint were modified in such a way that the attacker could predict the state of its pseudorandom number generator, then the attacker would be able to derive per-session keys even without per-session communications.

Finally, content exfiltration is the attack in which the collaborator simply provides the attacker with the desired data or metadata. Unlike the key exfiltration cases, this attack does not require the attacker to capture the desired data as it flows through the network. The risk is to data at rest as opposed to data in transit.
This increases the scope of data that the attacker can obtain, since the attacker can access historical data - the attacker does not have to be listening at the time the communication happens.

Exfiltration attacks can be accomplished via attacks against one of the parties to a communication, i.e., by the attacker stealing the keys or content rather than the party providing them willingly. In these cases, the party may not be aware that they are collaborating, at least at a human level. Rather, the subverted technical assets are "collaborating" with the attacker (by providing keys/content) without their owner's knowledge or consent.

Any party that has access to encryption keys or unencrypted data can be a collaborator. While collaborators are typically the endpoints of a communication (with encryption securing the links), intermediaries in an unencrypted communication can also facilitate content exfiltration attacks as collaborators by providing the attacker access to those communications. For example, documents describing the NSA PRISM program claim that NSA is able to access user data directly from servers, where it was stored unencrypted. In these cases, the operator of the server would be a collaborator (wittingly or unwittingly). By contrast, in the NSA MUSCULAR program, a set of collaborators enabled attackers to access the cables connecting data centers used by service providers such as Google and Yahoo. Because communications among these data centers were not encrypted, the collaboration by an intermediate entity allowed NSA to collect unencrypted user data.

4.2. Attacker Costs

+--------------------------+-----------------------------------+
| Attack Class             | Cost / Risk to Attacker           |
+--------------------------+-----------------------------------+
| Passive                  | Passive data access               |
|                          |                                   |
| Active                   | Active data access + processing   |
|                          |                                   |
| Static key exfiltration  | One-time interaction              |
|                          |                                   |
| Dynamic key exfiltration | Ongoing interaction / code change |
|                          |                                   |
| Content exfiltration     | Ongoing, bulk interaction         |
+--------------------------+-----------------------------------+

In order to realize an attack of each of the types discussed above, the attacker has to incur certain costs and undertake certain risks. These costs differ by attack, and can be helpful in guiding response to pervasive attack.

Depending on the attack, the attacker may be exposed to several types of risk, ranging from simply losing access to arrest or prosecution. In order for any of these negative consequences to happen, however, the attacker must first be discovered and identified. So the primary risk we focus on here is the risk of discovery and attribution.

A passive attack is the simplest attack to mount in some ways. The base requirement is that the attacker obtain physical access to a communications medium and extract communications from it. For example, the attacker might tap a fiber-optic cable, acquire a mirror port on a switch, or listen to a wireless signal. The need for these taps to have physical access to a link exposes the attacker to the risk that the taps will be discovered. For example, a fiber tap or mirror port might be discovered by network operators noticing increased attenuation in the fiber or a change in switch configuration.
Of course, passive attacks may be accomplished with the cooperation of the network operator, in which case there is a risk that the attacker's interactions with the network operator will be exposed.

In many ways, the costs and risks for an active attack are similar to those for a passive attack, with a few additions. An active attacker requires more robust network access than a passive attacker, since for example they will often need to transmit data as well as receive it. In the wireless example above, the attacker would need to act as a transmitter as well as a receiver, greatly increasing the probability that the attacker will be discovered (e.g., using direction-finding technology). Active attacks are also much more observable at higher layers of the network. For example, an active attacker that attempts to use a mis-issued certificate could be detected via Certificate Transparency [RFC6962].

In terms of raw implementation complexity, passive attacks require only enough processing to extract information from the network and store it. Active attacks, by contrast, often depend on winning race conditions to inject packets into active connections. So active attacks in the core of the network require processing hardware that can operate at line speed (roughly 100 Gbps to 1 Tbps in the core) to identify opportunities for attack and insert attack traffic into a high volume of traffic.

Key exfiltration attacks rely on passive attack for access to encrypted data, with the collaborator providing keys to decrypt the data. So the attacker undertakes the cost and risk of a passive attack, as well as the additional risk of discovery via the interactions that the attacker has with the collaborator.

In this sense, static exfiltration has a lower risk profile than dynamic. In the static case, the attacker need only interact with the collaborator a small number of times, possibly only once, say to exchange a private key. In the dynamic case, the attacker must have continuing interactions with the collaborator. As noted above, these interactions may be real, such as in-person meetings, or virtual, such as software modifications that render keys available to the attacker. Both of these types of interactions introduce a risk that they will be discovered, e.g., by employees of the collaborator organization noticing suspicious meetings or suspicious code changes.

Content exfiltration has a similar risk profile to dynamic key exfiltration. In a content exfiltration attack, the attacker saves the cost and risk of conducting a passive attack. The risk of discovery through interactions with the collaborator, however, is still present, and may be higher. The content of a communication is obviously larger than the key used to encrypt it, often by several orders of magnitude. So in the content exfiltration case, the interactions between the collaborator and the attacker need to be much higher-bandwidth than in the key exfiltration cases, with a corresponding increase in the risk that this high-bandwidth channel will be discovered.

It should also be noted that in these latter three exfiltration cases, the collaborator also undertakes a risk that its collaboration with the attacker will be discovered. Thus the attacker may have to incur additional cost in order to convince the collaborator to participate in the attack.
Likewise, the scope of these attacks is limited to cases where the attacker can convince a collaborator to participate. If the attacker is a national government, for example, it may be able to compel participation within its borders, but have a much more difficult time recruiting foreign collaborators.

As noted above, the "collaborator" in an exfiltration attack can be unwitting; the attacker can steal keys or data to enable the attack. In some ways, the risks of this approach are similar to the case of an active collaborator. In the static case, the attacker needs to steal information from the collaborator once; in the dynamic case, the attacker needs a continued presence inside the collaborator's systems. The main difference is that the risk in this case is of automated discovery (e.g., by intrusion detection systems) rather than discovery by humans.

5. Responding to Pervasive Attack

Given this threat model, how should the Internet technical community respond to pervasive attack?

The cost and risk considerations discussed above can provide a guide to response. Namely, responses to passive attack should close off avenues for attack that are safe, scalable, and cheap, forcing the attacker to mount attacks that expose it to higher cost and risk.

In this section, we discuss a collection of high-level approaches to mitigating pervasive attacks. These approaches are not meant to be exhaustive, but rather to provide general guidance to protocol designers in creating protocols that are resistant to pervasive attack.

+--------------------------+----------------------------------------+
| Attack Class             | High-level mitigations                 |
+--------------------------+----------------------------------------+
| Passive                  | Encryption, anonymization              |
|                          |                                        |
| Active                   | Authentication, monitoring             |
|                          |                                        |
| Static key exfiltration  | Encryption with per-session state      |
|                          | (PFS)                                  |
|                          |                                        |
| Dynamic key exfiltration | Transparency, validation of end        |
|                          | systems                                |
|                          |                                        |
| Content exfiltration     | Object encryption, distributed systems |
+--------------------------+----------------------------------------+

The traditional mitigation to passive attack is to render content unintelligible to the attacker by applying encryption, for example, by using TLS or IPsec [RFC5246][RFC4301]. Even without authentication, encryption will prevent a passive attacker from being able to read the encrypted content. Exploiting unauthenticated encryption requires an active attack (man in the middle); with authentication, a key exfiltration attack is required.

The additional capabilities of a pervasive passive attacker, however, require some changes in how protocol designers evaluate what information is encrypted. In addition to directly collecting unencrypted data, a pervasive passive attacker can also make inferences about the content of encrypted messages based on what is observable. For example, if a user typically visits a particular set of web sites, then a pervasive passive attacker observing all of the user's behavior can track the user based on the hosts the user communicates with, even if the user changes IP addresses, and even if all of the connections are encrypted.
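As a concrete illustration of this kind of inference, the following sketch (written in Python) shows how an attacker with bulk access to flow metadata could link activity seen from two different source IP addresses back to the same user simply by comparing the sets of servers contacted. The sketch is purely illustrative: the flow records, names, and similarity threshold are invented for the example, and no decryption of any session is required.

   # Hypothetical illustration: correlating encrypted sessions by the set
   # of servers contacted, as a pervasive passive attacker might.  The
   # flow records below are invented for the example.

   from collections import defaultdict

   # (client IP, server name) pairs observed in two collection windows;
   # the client has changed IP address between the windows.
   window_1 = [("192.0.2.10", "news.example"), ("192.0.2.10", "mail.example"),
               ("192.0.2.10", "social.example"), ("192.0.2.99", "cdn.example")]
   window_2 = [("198.51.100.7", "news.example"), ("198.51.100.7", "mail.example"),
               ("198.51.100.7", "social.example"), ("198.51.100.8", "video.example")]

   def profiles(flows):
       """Group flows into a per-client set of contacted servers."""
       prof = defaultdict(set)
       for client, server in flows:
           prof[client].add(server)
       return prof

   def jaccard(a, b):
       """Similarity of two sets of contacted servers."""
       return len(a & b) / len(a | b)

   # Link clients across windows whose destination sets are similar,
   # even though every individual session may have been encrypted.
   THRESHOLD = 0.5  # hypothetical tuning parameter
   for ip1, servers1 in profiles(window_1).items():
       for ip2, servers2 in profiles(window_2).items():
           score = jaccard(servers1, servers2)
           if score >= THRESHOLD:
               print(f"{ip1} and {ip2} likely the same user (similarity {score:.2f})")

Note that the only inputs to this sketch are identifiers that commonly remain visible to a passive observer (client addresses and contacted server names); this is what motivates the guidance below on minimizing and anonymizing such metadata.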
Thus, in designing protocols to be resistant to pervasive passive attacks, protocol designers should consider what information is left unencrypted in the protocol, and how that information might be correlated with other traffic. Information that cannot be encrypted should be anonymized, i.e., it should be randomized so that it cannot be correlated with other information. For example, the TOR overlay routing network anonymizes IP addresses by using multi-hop onion routing [TOR].

As with traditional, limited active attacks, the basic mitigation to pervasive active attack is to enable the endpoints of a communication to authenticate each other. However, as noted above, attackers that can mount pervasive active attacks can often subvert the authorities on which authentication systems rely. Thus, in order to make authentication systems more resilient to pervasive attack, it is beneficial to monitor these authorities to detect misbehavior that could enable active attack. For example, DANE and Certificate Transparency both provide mechanisms for detecting when a CA has issued a certificate for a domain name without the authorization of the holder of that domain name [RFC6962][RFC6698].

While encryption and authentication protect the security of individual sessions, these sessions may still leak information, such as IP addresses or server names, that a pervasive attacker can use to correlate sessions and derive additional information about the target. Thus, pervasive attack highlights the need for anonymization technologies, which make correlation more difficult. Typical approaches to anonymization include:

o Aggregation: Routing sessions for many endpoints through a common mid-point (e.g., an HTTP proxy). Since the midpoint appears as the end of the communication, individual endpoints cannot be distinguished.

o Onion routing: Routing a session through several mid-points, rather than directly end-to-end, with encryption that guarantees that each node can only see the previous and next hops [TOR]. This ensures that the source and destination of a communication are never revealed simultaneously.

o Multi-path: Routing different sessions via different paths (even if they originate from the same endpoint). This reduces the probability that the same attacker will be able to collect many sessions.

An encrypted, authenticated session is safe from attacks in which neither end collaborates with the attacker, but can still be subverted by the endpoints. The most common ciphersuites used for HTTPS today, for example, are based on using RSA encryption in such a way that if an attacker has the private key, the attacker can derive the session keys from passive observation of a session. These ciphersuites are thus vulnerable to a static key exfiltration attack - if the attacker obtains the server's private key once, then they can decrypt all past and future sessions for that server.

Static key exfiltration attacks are prevented by including ephemeral, per-session secret information in the keys used for a session. Most IETF security protocols include modes of operation that have this property.
These modes are known in the literature under the heading "perfect forward secrecy" (PFS) because even if an adversary has all of the secrets for one session, the next session will use new, different secrets and the attacker will not be able to decrypt it. The Internet Key Exchange (IKE) protocol used by IPsec supports PFS by default [RFC4306], and TLS supports PFS via the use of specific ciphersuites [RFC5246].

Dynamic key exfiltration cannot be prevented by protocol means. By definition, any secrets that are used in the protocol will be transmitted to the attacker and used to decrypt what the protocol encrypts. Likewise, no technical means will stop a willing collaborator from sharing keys with an attacker. However, this attack model also covers "unwitting collaborators", whose technical resources are collaborating with the attacker without their owners' knowledge. This could happen, for example, if flaws are built into products or if malware is injected later on.

The best defense against becoming an unwitting collaborator is thus to ensure that end systems are well-vetted and secure. Transparency is a major tool in this process [secure]. Open source software is easier to evaluate for potential flaws than proprietary software. Products that conform to standards for cryptography and security protocols are limited in the ways they can misbehave. And standards processes that are open and transparent help ensure that the standards themselves do not provide avenues for attack.

Standards can also define protocols that provide greater or lesser opportunity for dynamic key exfiltration. Collaborators engaging in key exfiltration through a standard protocol will need to use covert channels in the protocol to leak information that can be used by the attacker to recover the key. Such use of covert channels has been demonstrated for SSL, TLS, and SSH [key-recovery]. Any protocol bits that can be freely set by the collaborator can be used as a covert channel, including, for example, TCP options or unencrypted traffic sent before a STARTTLS message in SMTP or XMPP. Protocol designers should consider what covert channels their protocols expose, and how those channels can be exploited to exfiltrate key information.

Content exfiltration has some similarity to the dynamic exfiltration case, in that nothing can prevent a collaborator from revealing what they know, and the mitigations against becoming an unwitting collaborator apply. In this case, however, applications can limit what the collaborator is able to reveal. For example, the S/MIME and PGP systems for secure email both deny intermediate servers access to certain parts of the message [RFC5750][RFC2015]. Even if a server were to provide an attacker with full access, the attacker would still not be able to read the protected parts of the message.

Mechanisms like S/MIME and PGP are often referred to as "end-to-end" security mechanisms, as opposed to "hop-by-hop" or "end-to-middle" mechanisms like the use of SMTP over TLS. These two different mechanisms address different types of attackers: Hop-by-hop mechanisms protect against attackers on the wire (passive or active), while end-to-end mechanisms protect against attackers within intermediate nodes. Thus, neither of these mechanisms provides complete protection by itself.
For example:

o Two users messaging via Facebook over HTTPS are protected against passive and active attackers in the network between the users and Facebook. However, if Facebook is a collaborator in an exfiltration attack, their communications can still be monitored. They would need to encrypt their messages end-to-end in order to protect themselves against this risk.

o Two users exchanging PGP-protected email have protected the content of their exchange from network attackers and intermediate servers, but the header information (e.g., To and From addresses) is unnecessarily exposed to passive and active attackers that can see communications among the mail agents handling the email messages. These mail agents need to use hop-by-hop encryption to address this risk.

Mechanisms such as S/MIME and PGP are also known as "object-based" security mechanisms (as opposed to "communications security" mechanisms), since they operate at the level of objects, rather than communications sessions. Such secure objects can be safely handled by intermediaries in order to realize, for example, store-and-forward messaging. In the examples above, the encrypted instant messages or email messages would be the secure objects.

The mitigations to the content exfiltration case are thus to regard participants in the protocol as potential passive attackers themselves, and apply the mitigations discussed above with regard to passive attack. Information that is not necessary for these participants to fulfill their role in the protocol can be encrypted, and other information can be anonymized.

In summary, many of the basic tools for mitigating pervasive attack already exist. As Edward Snowden put it, "properly implemented strong crypto systems are one of the few things you can rely on" [snowden]. The task for the Internet community is to ensure that applications are able to use the strong crypto systems we have defined - for example, TLS with PFS ciphersuites - and that these are properly implemented. (And, one might add, turned on!) Some of this work will require architectural changes to applications, e.g., in order to limit the information that is exposed to servers. In many other cases, however, the need is simply to make the best use we can of the cryptographic tools we have.

6. Acknowledgements

o Trammell for ideas around pervasive passive attack and mitigation

o Thaler for list of attacks and taxonomy

o Security ADs for starting and managing the perpass discussion

7. TODO

o More thorough review of problem statement documents to ensure all bases are covered

o Look at better alignment with draft-farrell-perpass-attack

o Better coverage of traffic analysis and mitigations

8. Informative References

[pass1] The Guardian, "How the NSA is still harvesting your online data", 2013, .

[pass2] The Guardian, "NSA's Prism surveillance program: how it works and what it can do", 2013, .

[pass3] The Guardian, "XKeyscore: NSA tool collects 'nearly everything a user does on the internet'", 2013, .

[pass4] The Guardian, "How does GCHQ's internet surveillance work?", n.d., .

[dec1] The New York Times, "N.S.A. Able to Foil Basic Safeguards of Privacy on Web", 2013, .

[dec2] The Guardian, "Project Bullrun - classification guide to the NSA's decryption program", 2013, .
[dec3] The Guardian, "Revealed: how US and UK spy agencies defeat internet privacy and security", 2013, .

[TOR] The Tor Project, "TOR", 2013, .

[TOR1] Schneier, B., "How the NSA Attacks Tor/Firefox Users With QUANTUM and FOXACID", 2013, .

[TOR2] The Guardian, "'Tor Stinks' presentation - read the full document", 2013, .

[dir1] The Guardian, "NSA collecting phone records of millions of Verizon customers daily", 2013, .

[dir2] The Guardian, "NSA Prism program taps in to user data of Apple, Google and others", 2013, .

[dir3] The Guardian, "Sigint - how the NSA collaborates with technology companies", 2013, .

[secure] Schneier, B., "NSA surveillance: A guide to staying secure", 2013, .

[snowden] Technology Review, "NSA Leak Leaves Crypto-Math Intact but Highlights Known Workarounds", 2013, .

[key-recovery] Golle, P., "The Design and Implementation of Protocol-Based Hidden Key Recovery", 2003, .

[RFC4949] Shirey, R., "Internet Security Glossary, Version 2", RFC 4949, August 2007.

[RFC6962] Laurie, B., Langley, A., and E. Kasper, "Certificate Transparency", RFC 6962, June 2013.

[RFC6698] Hoffman, P. and J. Schlyter, "The DNS-Based Authentication of Named Entities (DANE) Transport Layer Security (TLS) Protocol: TLSA", RFC 6698, August 2012.

[RFC5246] Dierks, T. and E. Rescorla, "The Transport Layer Security (TLS) Protocol Version 1.2", RFC 5246, August 2008.

[RFC4301] Kent, S. and K. Seo, "Security Architecture for the Internet Protocol", RFC 4301, December 2005.

[RFC4306] Kaufman, C., "Internet Key Exchange (IKEv2) Protocol", RFC 4306, December 2005.

[RFC5750] Ramsdell, B. and S. Turner, "Secure/Multipurpose Internet Mail Extensions (S/MIME) Version 3.2 Certificate Handling", RFC 5750, January 2010.

[RFC2015] Elkins, M., "MIME Security with Pretty Good Privacy (PGP)", RFC 2015, October 1996.

[I-D.farrell-perpass-attack] Farrell, S. and H. Tschofenig, "Pervasive Monitoring is an Attack", draft-farrell-perpass-attack-06 (work in progress), February 2014.

Authors' Addresses

Richard Barnes

Email: rlb@ipv.sx

Bruce Schneier

Email: schneier@schneier.com

Cullen Jennings

Email: fluffy@cisco.com

Ted Hardie

Email: ted.ietf@gmail.com