idnits 2.17.1

draft-iab-privsec-confidentiality-threat-00.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust
  (see https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

  ** The document is more than 15 pages and seems to lack a Table of
     Contents.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

  ** The document seems to lack a Security Considerations section.

  ** The document seems to lack an IANA Considerations section.  (See
     Section 2.2 of https://www.ietf.org/id-info/checklist for how to handle
     the case when there are no actions for IANA.)

  -- The document has examples using IPv4 documentation addresses according
     to RFC6890, but does not use any IPv6 documentation addresses.  Maybe
     there should be IPv6 examples, too?

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line does
     not match the current year

  -- The document date (September 11, 2014) is 3515 days in the past.  Is
     this intentional?
  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

  == Unused Reference: 'RFC2821' is defined on line 1066, but no explicit
     reference was found in the text

  == Unused Reference: 'RFC3851' is defined on line 1081, but no explicit
     reference was found in the text

  == Unused Reference: 'RFC5655' is defined on line 1107, but no explicit
     reference was found in the text

  == Unused Reference: 'RFC6120' is defined on line 1115, but no explicit
     reference was found in the text

  -- Obsolete informational reference (is this intentional?): RFC 2821
     (Obsoleted by RFC 5321)

  -- Obsolete informational reference (is this intentional?): RFC 3501
     (Obsoleted by RFC 9051)

  -- Obsolete informational reference (is this intentional?): RFC 3851
     (Obsoleted by RFC 5751)

  -- Obsolete informational reference (is this intentional?): RFC 4306
     (Obsoleted by RFC 5996)

  -- Obsolete informational reference (is this intentional?): RFC 5246
     (Obsoleted by RFC 8446)

  -- Obsolete informational reference (is this intentional?): RFC 5750
     (Obsoleted by RFC 8550)

  -- Obsolete informational reference (is this intentional?): RFC 6962
     (Obsoleted by RFC 9162)

     Summary: 3 errors (**), 0 flaws (~~), 5 warnings (==), 9 comments (--).

     Run idnits with the --verbose option for more detailed information about
     the items above.

--------------------------------------------------------------------------------

Network Working Group                                          R. Barnes
Internet-Draft
Intended status: Informational                               B. Schneier
Expires: March 15, 2015                                      C. Jennings
                                                               T. Hardie
                                                             B. Trammell
                                                              C. Huitema
                                                             D. Borkmann
                                                      September 11, 2014

  Confidentiality in the Face of Pervasive Surveillance: A Threat Model
                         and Problem Statement
              draft-iab-privsec-confidentiality-threat-00

Abstract

Documents published in 2013 have revealed several classes of "pervasive" attack on Internet communications.
In this document we develop a threat model that describes these pervasive attacks.  We start by assuming a completely passive adversary with an interest in indiscriminate eavesdropping that can observe network traffic, then expand the threat model with a set of verified attacks that have been published.  Based on this threat model, we discuss the techniques that can be employed in Internet protocol design to increase the protocols' robustness to pervasive attacks.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on March 15, 2015.

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

1.
Introduction

Starting in June 2013, documents released to the press by Edward Snowden have revealed several operations undertaken by intelligence agencies to exploit Internet communications for intelligence purposes.  These attacks were largely based on protocol vulnerabilities that were already known to exist.  The attacks were nonetheless striking in their pervasive nature, both in terms of the amount of Internet communications targeted, and in terms of the diversity of attack techniques employed.

To ensure that the Internet can be trusted by users, it is necessary for the Internet technical community to address the vulnerabilities exploited in these attacks [RFC7258].  The goal of this document is to describe more precisely the threats posed by these pervasive attacks, and based on those threats, lay out the problems that need to be solved in order to secure the Internet in the face of those threats.

The remainder of this document is structured as follows.  In Section 3, we describe an idealized passive adversary, one which could completely undetectably compromise communications at Internet scale.  In Section 4, we provide a brief summary of some attacks that have been disclosed, and use these to expand the assumed capabilities of our idealized adversary.  Section 5 describes a threat model based on these attacks, focusing on classes of attack that have not been a focus of Internet engineering to date.  Section 6 provides some high-level guidance on how Internet protocols can defend against the threats described here.

2.  Terminology

This document makes extensive use of standard security and privacy terminology; see [RFC4949] and [RFC6973].
In addition, we use a few terms that are specific to the attacks discussed here:

Pervasive Attack:  An attack on Internet protocols that makes use of access at a large number of points in the network, or otherwise provides the attacker with access to a large amount of Internet traffic.

Observation:  Information collected directly from communications by an eavesdropper or observer.  For example, the knowledge that one email address sent a message to another via SMTP, taken from the headers of an observed SMTP message, would be an observation.

Inference:  Information extracted from analysis of information collected directly from communications by an eavesdropper or observer.  For example, the knowledge that a given web page was accessed by a given IP address, obtained by comparing the size in octets of measured network flow records to fingerprints derived from known sizes of linked resources on the web servers involved, would be an inference.

Collaborator:  An entity that is a legitimate participant in a protocol, but who provides information about that interaction (keys or data) to an attacker.

Key Exfiltration:  The transmission of keying material for an encrypted communication from a collaborator to an attacker.

Content Exfiltration:  The transmission of the content of a communication from a collaborator to an attacker.

Unwitting Collaborator:  A collaborator that provides information to the attacker not deliberately, but because the attacker has exploited some technology used by the collaborator.

3.
An Idealized Pervasive Passive Adversary

We assume a pervasive passive adversary (PPA): an indiscriminate eavesdropper on an Internet-attached computer network that

o  can observe every packet of all communications at any or every hop in any network path between an initiator and a recipient; and

o  can observe data at rest in intermediate systems between the endpoints controlled by the initiator and recipient; but

o  takes no other action with respect to these communications (i.e., blocking, modification, injection, etc.).

This adversary is less capable than those which press reports indicate have compromised the Internet (elaborated in Section 4), but it represents the threat posed to communications privacy by a single entity interested in remaining undetectable.

The techniques available to our ideal adversary are direct observation and inference.  Direct observation involves taking information directly from eavesdropped communications - e.g., URLs identifying content or email addresses identifying individuals, taken from application-layer headers.  Inference, on the other hand, involves analyzing eavesdropped information to derive new information from it; e.g., searching for application or behavioral fingerprints in observed traffic to derive information about the observed individual, in the absence of directly observed sources of the same information.  The use of encryption to protect confidentiality is generally enough to prevent direct observation, assuming uncompromised encryption implementations and key material, but provides less complete protection against inference, especially inference based only on unprotected portions of communications (e.g., IP and TCP headers for TLS).

3.1.  Information subject to direct observation

Protocols which do not encrypt their payload make the entire content of the communication available to a PPA along their path.
Following the advice in [RFC3365], most such protocols have a secure variant which encrypts the payload for confidentiality, and these secure variants are seeing ever-wider deployment.  A noteworthy exception is DNS [RFC1035], as DNSSEC [RFC4033] does not have confidentiality as a requirement.  This implies that all DNS queries and answers generated by the activities of any protocol are available to the adversary.

Protocols which imply the storage of some data at rest in intermediaries leave this data subject to observation by an adversary that has compromised these intermediaries, unless the data is encrypted end-to-end by the application-layer protocol, or the implementation uses an encrypted store for this data.

3.2.  Information useful for inference

Inference is information extracted from later analysis of an observed communication, and/or correlation of observed information with information available from other sources.  Indeed, most useful inference performed by our ideal adversary falls under the rubric of correlation.  The simplest example of this is the observation of DNS queries and answers from and to a source and correlating those with IP addresses with which that source communicates.  This can give access to information otherwise not available from encrypted application payloads (e.g., the Host: HTTP/1.1 request header when HTTP is used with TLS).

Protocols which encrypt their payload using an application- or transport-layer encryption scheme (e.g., TLS [RFC5246]) still expose all the information in their network- and transport-layer headers to a PPA, including source and destination addresses and ports.  IPsec ESP [RFC4303] further encrypts the transport-layer headers, but still leaves IP address information unencrypted; in tunnel mode, these addresses correspond to the tunnel endpoints.  Features of the cryptographic protocols themselves, e.g.,
the TLS session identifier, may leak information that can be used for correlation and inference.  While this information is much less semantically rich than the application payload, it can still be useful for inferring an individual's activities.

Inference can also leverage information obtained from sources other than direct traffic observation.  Geolocation databases, for example, have been developed to map IP addresses to a location, in order to provide location-aware services such as targeted advertising.  This location information is often of sufficient resolution that it can be used to draw further inferences toward identifying or profiling an individual.

Social media provide another source of more or less publicly accessible information.  This information can be extremely semantically rich, including information about an individual's location, associations with other individuals and groups, and activities.  Further, this information is generally contributed and curated voluntarily by the individuals themselves: it represents information which the individuals are not necessarily interested in protecting for privacy reasons.  However, correlation of this social networking data with information available from direct observation of network traffic allows the creation of a much richer picture of an individual's activities than either alone.  We note with some alarm that there is little that can be done from the protocol design side to limit such correlation by a PPA, and that the existence of such data sources in many cases greatly complicates the problem of protecting privacy by hardening protocols alone.

3.3.
An illustration of an ideal passive attack

To illustrate how capable even this limited adversary is, we examine in detail some inference techniques for associating a set of addresses with an individual, showing the non-anonymity of even encrypted IP traffic and the difficulty of defending communications against a PPA.  The basic problem is that information radiated even by protocols with no obvious connection to personal data can be correlated with other information to paint a very rich behavioral picture, one that takes only a single unprotected link in the chain to associate with an identity.

3.3.1.  Analysis of IP headers

Internet traffic can be monitored by tapping Internet links, or by installing monitoring tools in Internet routers.  Of course, a single link or a single router only provides access to a fraction of the global Internet traffic.  However, monitoring a number of high-capacity links or a set of routers placed at strategic locations provides access to a good sampling of Internet traffic.

Tools like IPFIX [RFC7011] allow administrators to acquire statistics about sequences of packets with some common properties that pass through a network device.  The most common set of properties used in flow measurement is the "five-tuple" of source and destination addresses, protocol type, and source and destination ports.  These statistics are commonly used for network engineering, but could certainly be used for other purposes.

Let's assume for a moment that IP addresses can be correlated to specific services or specific users.  Analysis of the sequences of packets will quickly reveal which users use what services, and also which users engage in peer-to-peer connections with other users.
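To make the flow-measurement step concrete, the following sketch aggregates individual packet records into five-tuple flow statistics in the style described above.  It is a hypothetical illustration, not an IPFIX implementation; the packet-record format and function name are assumptions made for the example:

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packet records into flows keyed by the classic five-tuple
    (src addr, dst addr, protocol, src port, dst port), accumulating
    per-flow packet and byte counts, as a flow meter would."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, proto, sport, dport, size in packets:
        key = (src, dst, proto, sport, dport)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += size
    return dict(flows)

# Packets seen at a single vantage point (addresses from RFC 5737 ranges):
packets = [
    ("192.0.2.33", "198.51.100.7", "tcp", 52100, 443, 1500),
    ("192.0.2.33", "198.51.100.7", "tcp", 52100, 443, 900),
    ("192.0.2.33", "203.0.113.9", "udp", 40000, 53, 80),
]
flows = aggregate_flows(packets)
# Two flows result: a repeated HTTPS five-tuple and a single DNS query.
```

Even this trivial aggregation already reveals which services an address contacts; correlating the same five-tuples across many vantage points is what gives a pervasive observer its reach.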
Analysis of traffic variations over time can be used to detect increased activity by particular users or, in the case of peer-to-peer connections, increased activity within groups of users.

3.3.2.  Correlation of IP addresses to user identities

The correlation of IP addresses with specific users can be done in various ways.  For example, tools like reverse DNS lookup can be used to retrieve the DNS names of servers.  Since the addresses of servers tend to be quite stable and since servers are relatively less numerous than users, a PPA could easily maintain its own copy of the DNS for well-known or popular servers, to accelerate such lookups.

On the other hand, the reverse lookup of IP addresses of users is generally less informative.  For example, a lookup of the address currently used by one author's home network returns a name of the form "c-192-000-002-033.hsd1.wa.comcast.net".  This particular type of reverse DNS lookup generally reveals only coarse-grained location or provider information.

In many jurisdictions, Internet Service Providers (ISPs) are required to provide identification on a case-by-case basis of the "owner" of a specific IP address for law enforcement purposes.  This is a reasonably expedient process for targeted investigations, but pervasive surveillance requires something more efficient.  This provides an incentive for the adversary to secure the cooperation of the ISP in order to automate this correlation.

3.3.3.  Monitoring messaging clients for IP address correlation

Even if the ISP does not cooperate, user identity can often be obtained via inference.  POP3 [RFC1939] and IMAP [RFC3501] are used to retrieve mail from mail servers, while a variant of SMTP [RFC5321] is used to submit messages through mail servers.
IMAP connections originate from the client, and typically start with an authentication exchange in which the client proves its identity by answering a password challenge.  The same holds for the SIP protocol [RFC3261] and many instant messaging services operating over the Internet using proprietary protocols.

The username is directly observable if any of these protocols operate in cleartext; the username can then be directly associated with the source address.

3.3.4.  Retrieving IP addresses from mail headers

SMTP [RFC5321] requires that each successive SMTP relay adds a "Received" header to the mail headers.  The purpose of these headers is to enable audit of mail transmission, and perhaps to distinguish between regular mail and spam.  Here is an extract from the headers of a message recently received from the "perpass" mailing list:

   "Received: from 192-000-002-044.zone13.example.org (HELO
   ?192.168.1.100?) (xxx.xxx.xxx.xxx) by lvps192-000-002-219.example.net
   with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 27 Oct
   2013 21:47:14 +0100
   Message-ID: <526D7BD2.7070908@example.org>
   Date: Sun, 27 Oct 2013 20:47:14 +0000
   From: Some One"

This is the first "Received" header attached to the message by the first SMTP relay; for privacy reasons, the field values have been anonymized.  We learn here that the message was submitted by "Some One" on October 27, from a host behind a NAT (192.168.1.100) [RFC1918] that used the IP address 192.0.2.44.  The information remained in the message, and is accessible by all recipients of the "perpass" mailing list, or indeed by any PPA that sees at least one copy of the message.

An idealized adversary that can observe sufficient email traffic can regularly update the mapping between public IP addresses and individual email identities.
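The header mining described above can be sketched as follows.  This is a hypothetical illustration: the helper name, the regular expressions, and the simplified (already anonymized) header text are assumptions made for the example, not a general "Received"-header parser:

```python
import re

def extract_submission(headers):
    """From an email's headers, recover the submitting host's public IP
    (encoded in the first 'Received' hostname) and the sender identity
    (from the 'From' header), yielding one (IP, identity) association."""
    ip = None
    m = re.search(r"Received: from (\S+) \(HELO", headers)
    if m:
        # Hostnames like 192-000-002-044.zone13.example.org embed the IP.
        digits = re.match(r"(\d{3})-(\d{3})-(\d{3})-(\d{3})", m.group(1))
        if digits:
            ip = ".".join(str(int(g)) for g in digits.groups())
    f = re.search(r"From: (.+)", headers)
    sender = f.group(1).strip() if f else None
    return ip, sender

headers = (
    "Received: from 192-000-002-044.zone13.example.org (HELO "
    "?192.168.1.100?) by lvps192-000-002-219.example.net with ESMTPSA\n"
    "From: Some One"
)
print(extract_submission(headers))  # → ('192.0.2.44', 'Some One')
```

Run over every message an observer can obtain (for instance, by subscribing to public mailing lists), each hit adds one more (address, identity) pair to the adversary's mapping.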
Even if the SMTP traffic was encrypted on submission and relaying, the adversary can still receive a copy of public mailing lists like "perpass".

3.3.5.  Tracking address usage with web cookies

Many web sites only encrypt a small fraction of their transactions.  A popular pattern has been to use HTTPS for the login information, and then use a "cookie" to associate subsequent clear-text transactions with the user's identity.  Cookies are also used by various advertisement services to quickly identify users and serve them with "personalized" advertisements.  Such cookies are particularly useful if the advertisement services want to keep tracking the user across multiple sessions that may use different IP addresses.

As cookies are sent in clear text, a PPA can build a database that associates cookies to IP addresses for non-HTTPS traffic.  If the IP address is already identified, the cookie can be linked to the user identity.  After that, if the same cookie appears on a new IP address, the new IP address can be immediately associated with the pre-determined identity.

3.3.6.  Graph-based approaches to address correlation

An adversary can track traffic from an IP address not yet associated with an individual to various public services (e.g., websites, mail servers, game servers), and exploit patterns in the observed traffic to correlate this address with other addresses that show similar patterns.  For example, any two addresses that show connections to the same IMAP or webmail services, the same set of favorite websites, and game servers at similar times of day may be associated with the same individual.  Correlated addresses can then be tied to an individual through one of the techniques above, walking the "network graph" to expand the set of attributable traffic.

4.
Reported Instances of Large-Scale Attacks

The situation in reality is more bleak than that suggested by an analysis of our idealized adversary.  Through revelations of sensitive documents in several media outlets, the Internet community has been made aware of several intelligence activities conducted by US and UK national intelligence agencies, particularly the US National Security Agency (NSA) and the UK Government Communications Headquarters (GCHQ).  These documents have revealed methods that these agencies use to attack Internet applications and obtain sensitive user information.

First, they have confirmed that these agencies have capabilities in line with those of our idealized adversary, through the large-scale passive collection of Internet traffic [pass1][pass2][pass3][pass4].  For example:

*  The NSA XKEYSCORE system accesses data from multiple access points and searches for "selectors" such as email addresses, at the scale of tens of terabytes of data per day.

*  The GCHQ Tempora system appears to have access to around 1,500 major cables passing through the UK.

*  The NSA MUSCULAR program tapped cables between data centers belonging to major service providers.

*  Several programs appear to perform wide-scale collection of cookies in web traffic and location data from location-aware portable devices such as smartphones.

However, the capabilities described go beyond those available to our idealized adversary, including:

o  Decryption of TLS-protected Internet sessions [dec1][dec2][dec3].  For example, the NSA BULLRUN project appears to have had a budget of around $250M per year to undermine encryption through multiple approaches.

o  Insertion of NSA devices as a man-in-the-middle of Internet transactions [TOR1][TOR2].
   For example, the NSA QUANTUM system appears to use several different techniques to hijack HTTP connections, ranging from DNS response injection to HTTP 302 redirects.

o  Direct acquisition of bulk data and metadata from service providers [dir1][dir2][dir3].  For example, the NSA PRISM program provides the agency with access to many types of user data (e.g., email, chat, VoIP).

o  Use of implants (covert modifications or malware) to undermine security and anonymity features [dec2][TOR1][TOR2].  For example:

   *  NSA appears to use the QUANTUM man-in-the-middle system to direct users to a FOXACID server, which delivers an implant to compromise the browser of a user of the Tor anonymous communications network.

   *  The BULLRUN program mentioned above includes the addition of covert modifications to software as one means to undermine encryption.

   *  There is also some suspicion that NSA modifications to the DUAL_EC_DRBG random number generator were made to ensure that keys generated using that generator could be predicted by NSA.  These suspicions have been reinforced by reports that RSA Security was paid roughly $10M to make DUAL_EC_DRBG the default in their products.

We use the term "pervasive attack" to collectively describe these operations.  The term "pervasive" is used because the attacks are designed to indiscriminately gather as much data as possible and to apply selective analysis on targets after the fact.  This means that all, or nearly all, Internet communications are targets for these attacks.  To achieve this scale, the attacks are physically pervasive; they affect a large number of Internet communications.  They are pervasive in content, consuming and exploiting any information revealed by the protocol.  And they are pervasive in technology, exploiting many different vulnerabilities in many different protocols.
It's important to note that although the attacks mentioned above were executed by NSA and GCHQ, there are many other organizations that can mount pervasive attacks.  Because of the resources required to achieve pervasive scale, pervasive attacks are most commonly undertaken by nation-state actors.  For example, the Chinese Internet filtering system known as the "Great Firewall of China" uses several techniques that are similar to the QUANTUM program, and which have a high degree of pervasiveness with regard to the Internet in China.

5.  Threat Model

Given these disclosures, we must consider a broader threat model.

Pervasive surveillance aims to collect information across a large number of Internet communications, observing the collected communications to identify information of interest within individual communications, or inferring information from correlated communications.  This analysis sometimes benefits from decryption of encrypted communications and deanonymization of anonymized communications.  As a result, these attackers desire both access to the bulk of Internet traffic and to the keying material required to decrypt any traffic that has been encrypted (though the presence of a communication and the fact that it is encrypted may both be inputs to an analysis, even if the attacker cannot decrypt the communication).

The attacks listed above highlight new avenues both for access to traffic and for access to relevant encryption keys.  They further indicate that the scale of surveillance is sufficient to provide a general capability to cross-correlate communications, a threat not previously thought to be relevant at the scale of all Internet communications.

5.1.
Attacker Capabilities

   +--------------------------+-------------------------------------+
   | Attack Class             | Capability                          |
   +--------------------------+-------------------------------------+
   | Passive observation      | Directly capture data in transit    |
   |                          |                                     |
   | Passive inference        | Infer from reduced/encrypted data   |
   |                          |                                     |
   | Active                   | Manipulate / inject data in transit |
   |                          |                                     |
   | Static key exfiltration  | Obtain key material once / rarely   |
   |                          |                                     |
   | Dynamic key exfiltration | Obtain per-session key material     |
   |                          |                                     |
   | Content exfiltration     | Access data at rest                 |
   +--------------------------+-------------------------------------+

Security analyses of Internet protocols commonly consider two classes of attacker: "passive attackers", who can simply listen in on communications as they transit the network, and "active attackers", who can modify or delete packets in addition to simply collecting them.

In the context of pervasive attack, these attacks take on an even greater significance.  In the past, these attackers were often assumed to operate near the edge of the network, where attacks can be simpler.  For example, in some LANs, it is simple for any node to engage in passive listening to other nodes' traffic or inject packets to accomplish active attacks.  In the pervasive attack case, however, both passive and active attacks are undertaken closer to the core of the network, greatly expanding the scope and capability of the attacker.

A passive attacker with access to a large portion of the Internet can analyze collected traffic to create a much more detailed view of user behavior than an attacker that collects at a single point.
Even the usual claim that encryption defeats passive attackers is weakened, since a pervasive passive attacker can infer relationships from correlations over large numbers of sessions, e.g., pairing encrypted sessions with unencrypted sessions from the same host, or performing traffic fingerprinting between known and unknown encrypted sessions.  The reports on the NSA XKEYSCORE system would make it an example of such an attacker.

A pervasive active attacker likewise has capabilities beyond those of a localized active attacker.  Active attacks are often limited by network topology, for example by a requirement that the attacker be able to see a targeted session as well as inject packets into it.  A pervasive active attacker with multiple accesses at core points of the Internet is able to overcome these topological limitations and apply attacks over a much broader scope.  Being positioned in the core of the network rather than the edge can also enable a pervasive active attacker to reroute targeted traffic.  Pervasive active attackers can also benefit from pervasive passive collection to identify vulnerable hosts.

While not directly related to pervasiveness, attackers that are in a position to mount a pervasive active attack are also often in a position to subvert authentication, the traditional response to active attack.  Authentication in the Internet is often achieved via trusted third party authorities such as the Certificate Authorities (CAs) that provide web sites with authentication credentials.  An attacker with sufficient resources for pervasive attack may also be able to induce an authority to grant credentials for an identity of the attacker's choosing.
If the parties to a communication will trust multiple authorities to certify a specific identity, this attack may be mounted by suborning any one of the authorities (the proverbial "weakest link").  Subversion of authorities in this way can allow an active attack to succeed in spite of an authentication check.

Beyond these three classes (observation, inference, and active), reports on the BULLRUN effort to defeat encryption and the PRISM effort to obtain data from service providers suggest three more classes of attack:

o  Static key exfiltration

o  Dynamic key exfiltration

o  Content exfiltration

These attacks all rely on a "collaborator" endpoint providing the attacker with some information, either keys or data.  These attacks have not traditionally been considered in security analyses of protocols, since they happen outside of the protocol.

The term "key exfiltration" refers to the transfer of keying material for an encrypted communication from the collaborator to the attacker.  By "static", we mean that the transfer of keys happens once, or rarely, typically of a long-lived key.  For example, this case would cover a web site operator that provides the private key corresponding to its HTTPS certificate to an intelligence agency.

"Dynamic" key exfiltration, by contrast, refers to attacks in which the collaborator delivers keying material to the attacker frequently, e.g., on a per-session basis.  This does not necessarily imply frequent communications with the attacker; the transfer of keying material may be virtual.  For example, if an endpoint were modified in such a way that the attacker could predict the state of its pseudorandom number generator, then the attacker would be able to derive per-session keys even without per-session communications.
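The "virtual" transfer of keying material can be illustrated with a toy sketch.  The key-derivation scheme below is entirely hypothetical (it is not any real protocol's key schedule); it only shows that once a generator's internal state is predictable, every key derived from it is too:

```python
import hashlib
import hmac

def session_key(state: bytes, counter: int) -> bytes:
    """Toy deterministic generator: derive the n-th 'session key' from
    fixed internal state.  Anyone who knows the state knows every key."""
    return hmac.new(state, counter.to_bytes(8, "big"), hashlib.sha256).digest()

# Hypothetical compromised endpoint whose generator state is predictable:
state = b"implant-known-generator-state"

endpoint_keys = [session_key(state, n) for n in range(3)]

# An attacker who learned the state once (a single, static disclosure)
# derives the same per-session keys with no further communication:
attacker_keys = [session_key(state, n) for n in range(3)]
assert attacker_keys == endpoint_keys
```

This is why dynamic key exfiltration "does not necessarily imply frequent communications with the attacker": the per-session transfer happens entirely inside the attacker's own computation.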
584 Finally, content exfiltration is the attack in which the collaborator 585 simply provides the attacker with the desired data or metadata. 586 Unlike the key exfiltration cases, this attack does not require the 587 attacker to capture the desired data as it flows through the network. 588 The risk is to data at rest as opposed to data in transit. This 589 increases the scope of data that the attacker can obtain, since the 590 attacker can access historical data - the attacker does not have to 591 be listening at the time the communication happens. 593 Exfiltration attacks can be accomplished via attacks against one of 594 the parties to a communication, i.e., by the attacker stealing the 595 keys or content rather than the party providing them willingly. In 596 these cases, the party may not be aware that they are collaborating, 597 at least at a human level. Rather, the subverted technical assets 598 are "collaborating" with the attacker (by providing keys/content) 599 without their owner's knowledge or consent. 601 Any party that has access to encryption keys or unencrypted data can 602 be a collaborator. While collaborators are typically the endpoints 603 of a communication (with encryption securing the links), 604 intermediaries in an unencrypted communication can also facilitate 605 content exfiltration attacks as collaborators by providing the 606 attacker access to those communications. For example, documents 607 describing the NSA PRISM program claim that NSA is able to access 608 user data directly from servers, where it was stored unencrypted. In 609 these cases, the operator of the server would be a collaborator 610 (wittingly or unwittingly). By contrast, in the NSA MUSCULAR 611 program, a set of collaborators enabled attackers to access the 612 cables connecting data centers used by service providers such as 613 Google and Yahoo. 
Because communications among these data centers 614 were not encrypted, the collaboration by an intermediate entity 615 allowed NSA to collect unencrypted user data. 617 5.2. Attacker Costs 619 +--------------------------+-----------------------------------+ 620 | Attack Class | Cost / Risk to Attacker | 621 +--------------------------+-----------------------------------+ 622 | Passive observation | Passive data access | 623 | | | 624 | Passive inference | Passive data access + processing | 625 | | | 626 | Active | Active data access + processing | 627 | | | 628 | Static key exfiltration | One-time interaction | 629 | | | 630 | Dynamic key exfiltration | Ongoing interaction / code change | 631 | | | 632 | Content exfiltration | Ongoing, bulk interaction | 633 +--------------------------+-----------------------------------+ 635 In order to realize an attack of each of the types discussed above, 636 the attacker has to incur certain costs and undertake certain risks. 637 These costs differ by attack, and can be helpful in guiding response 638 to pervasive attack. 640 Depending on the attack, the attacker may be exposed to several types 641 of risk, ranging from simply losing access to arrest or prosecution. 642 In order for any of these negative consequences to happen, however, 643 the attacker must first be discovered and identified. So the primary 644 risk we focus on here is the risk of discovery and attribution. 646 A passive attack is the simplest attack to mount in some ways. The 647 base requirement is that the attacker obtain physical access to a 648 communications medium and extract communications from it. For 649 example, the attacker might tap a fiber-optic cable, acquire a mirror 650 port on a switch, or listen to a wireless signal. The need for these 651 taps to have physical access or proximity to a link exposes the 652 attacker to the risk that the taps will be discovered. 
For example, 653 a fiber tap or mirror port might be discovered by network operators 654 noticing increased attenuation in the fiber or a change in switch 655 configuration. Of course, passive attacks may be accomplished with 656 the cooperation of the network operator, in which case there is a 657 risk that the attacker's interactions with the network operator will 658 be exposed. 660 In many ways, the costs and risks for an active attack are similar to 661 those for a passive attack, with a few additions. An active attacker 662 requires more robust network access than a passive attacker, since 663 for example they will often need to transmit data as well as 664 receive it. In the wireless example above, the attacker would need 665 to act as a transmitter as well as a receiver, greatly increasing the 666 probability the attacker will be discovered (e.g., using direction- 667 finding technology). Active attacks are also much more observable at 668 higher layers of the network. For example, an active attacker that 669 attempts to use a mis-issued certificate could be detected via 670 Certificate Transparency [RFC6962]. 672 In terms of raw implementation complexity, passive attacks require 673 only enough processing to extract information from the network and 674 store it. Active attacks, by contrast, often depend on winning race 675 conditions to inject packets into active connections. So active 676 attacks in the core of the network require processing hardware 677 that can operate at line speed (roughly 100Gbps to 1Tbps in the core) 678 to identify opportunities for attack and insert attack traffic into a 679 high-volume traffic stream. 681 Key exfiltration attacks rely on passive attack for access to 682 encrypted data, with the collaborator providing keys to decrypt the 683 data. So the attacker undertakes the cost and risk of a passive 684 attack, as well as additional risk of discovery via the interactions 685 that the attacker has with the collaborator.
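To make this division of labor concrete, the toy sketch below shows why a single key handover can be enough: passive capture supplies the ciphertext, and the exfiltrated long-lived key later decrypts the recorded sessions. The keystream construction is illustrative only, not a real cipher or ciphersuite, and the key material is invented.

```python
# Toy sketch of static key exfiltration: traffic is recorded
# passively today and decrypted later with a long-lived key that a
# collaborator hands over once.  Illustration only, not a real cipher.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Expand a key into a keystream via SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

LONG_LIVED_KEY = b"hypothetical server key"   # exfiltrated one time

# Step 1: the passive attacker records ciphertext off the wire.
plaintext = b"user@example.com logged in"
captured = xor_bytes(plaintext, keystream(LONG_LIVED_KEY, len(plaintext)))

# Step 2: long after capture, the handed-over key unlocks the archive.
recovered = xor_bytes(captured, keystream(LONG_LIVED_KEY, len(captured)))
assert recovered == plaintext
```

The sketch compresses reality (real sessions derive per-session keys from the long-lived key), but it captures why non-PFS ciphersuites make recorded traffic retroactively readable.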
687 In this sense, static exfiltration has a lower risk profile than 688 dynamic. In the static case, the attacker need only interact with 689 the collaborator a small number of times, possibly only once, say to 690 exchange a private key. In the dynamic case, the attacker must have 691 continuing interactions with the collaborator. As noted above, these 692 interactions may be real, such as in-person meetings, or virtual, such 693 as software modifications that render keys available to the attacker. 694 Both of these types of interactions introduce a risk that they will 695 be discovered, e.g., by employees of the collaborator organization 696 noticing suspicious meetings or suspicious code changes. 698 Content exfiltration has a similar risk profile to dynamic key 699 exfiltration. In a content exfiltration attack, the attacker saves 700 the cost and risk of conducting a passive attack. The risk of 701 discovery through interactions with the collaborator, however, is 702 still present, and may be higher. The content of a communication is 703 obviously larger than the key used to encrypt it, often by several 704 orders of magnitude. So in the content exfiltration case, the 705 interactions between the collaborator and the attacker need to be 706 much higher-bandwidth than in the key exfiltration cases, with a 707 corresponding increase in the risk that this high-bandwidth channel 708 will be discovered. 710 It should also be noted that in these latter three exfiltration 711 cases, the collaborator also undertakes a risk that its collaboration 712 with the attacker will be discovered. Thus the attacker may have to 713 incur additional cost in order to convince the collaborator to 714 participate in the attack. Likewise, the scope of these attacks is 715 limited to cases where the attacker can convince a collaborator to 716 participate.
If the attacker is a national government, for example, 717 it may be able to compel participation within its borders, but have a 718 much more difficult time recruiting foreign collaborators. 720 As noted above, the "collaborator" in an exfiltration attack can be 721 unwitting; the attacker can steal keys or data to enable the attack. 722 In some ways, the risks of this approach are similar to the case of 723 an active collaborator. In the static case, the attacker needs to 724 steal information from the collaborator once; in the dynamic case, 725 the attacker needs a continued presence inside the collaborator's 726 systems. The main difference is that the risk in this case is of 727 automated discovery (e.g., by intrusion detection systems) rather 728 than discovery by humans. 730 6. Responding to Pervasive Attack 732 Given this threat model, how should the Internet technical community 733 respond to pervasive attack? 735 The cost and risk considerations discussed above can provide a guide 736 to response. Namely, responses to passive attack should close off 737 avenues for attack that are safe, scalable, and cheap, forcing the 738 attacker to mount attacks that expose it to higher cost and risk. 740 In this section, we discuss a collection of high-level approaches to 741 mitigating pervasive attacks. These approaches are not meant to be 742 exhaustive, but rather to provide general guidance to protocol 743 designers in creating protocols that are resistant to pervasive 744 attack. 746 +--------------------------+----------------------------------------+ 747 | Attack Class | High-level mitigations | 748 +--------------------------+----------------------------------------+ 749 | Passive observation | Encryption for confidentiality | 750 | | | 751 | Passive inference | ???
| 752 | | | 753 | Active | Authentication, monitoring | 754 | | | 755 | Static key exfiltration | Encryption with per-session state | 756 | | (PFS) | 757 | | | 758 | Dynamic key exfiltration | Transparency, validation of end | 759 | | systems | 760 | | | 761 | Content exfiltration | Object encryption, distributed systems | 762 +--------------------------+----------------------------------------+ 764 The traditional mitigation to passive attack is to render content 765 unintelligible to the attacker by applying encryption, for example, 766 by using TLS or IPsec [RFC5246][RFC4301]. Even without 767 authentication, encryption will prevent a passive attacker from being 768 able to read the encrypted content. Exploiting unauthenticated 769 encryption requires an active attack (man in the middle); with 770 authentication, a key exfiltration attack is required. 772 The additional capabilities of a pervasive passive attacker, however, 773 require some changes in how protocol designers evaluate what 774 information is encrypted. In addition to directly collecting 775 unencrypted data, a pervasive passive attacker can also make 776 inferences about the content of encrypted messages based on what is 777 observable. For example, if a user typically visits a particular set 778 of web sites, then a pervasive passive attacker observing all of the 779 user's behavior can track the user based on the hosts the user 780 communicates with, even if the user changes IP addresses, and even if 781 all of the connections are encrypted. 783 Thus, in designing protocols to be resistant to pervasive passive 784 attacks, protocol designers should consider what information is left 785 unencrypted in the protocol, and how that information might be 786 correlated with other traffic. Information that cannot be encrypted 787 should be anonymized, i.e., it should be dissociated from other 788 information. 
For example, the Tor overlay routing network anonymizes 789 IP addresses by using multi-hop onion routing [TOR]. 791 As with traditional, limited active attacks, the basic mitigation to 792 pervasive active attack is to enable the endpoints of a communication 793 to authenticate each other. However, as noted above, attackers that 794 can mount pervasive active attacks can often subvert the authorities 795 on which authentication systems rely. Thus, in order to make 796 authentication systems more resilient to pervasive attack, it is 797 beneficial to monitor these authorities to detect misbehavior that 798 could enable active attack. For example, DANE and Certificate 799 Transparency both provide mechanisms for detecting when a CA has 800 issued a certificate for a domain name without the authorization of 801 the holder of that domain name [RFC6962][RFC6698]. 803 While encryption and authentication protect the security of 804 individual sessions, these sessions may still leak information, such 805 as IP addresses or server names, that a pervasive attacker can use to 806 correlate sessions and derive additional information about the 807 target. Thus, pervasive attack highlights the need for anonymization 808 technologies, which make correlation more difficult. Typical 809 approaches to anonymization against traffic analysis include: 811 o Aggregation: Routing sessions for many endpoints through a common 812 mid-point (e.g., an HTTP proxy). Since the midpoint appears as 813 the end of the communication, individual endpoints cannot be 814 distinguished. 816 o Onion routing: Routing a session through several mid-points, 817 rather than directly end-to-end, with encryption that guarantees 818 that each node can only see the previous and next hops [TOR]. 819 This ensures that the source and destination of a communication 820 are never revealed simultaneously. 
822 o Multi-path: Routing different sessions via different paths (even 823 if they originate from the same endpoint). This reduces the 824 probability that the same attacker will be able to collect many 825 sessions. 827 An encrypted, authenticated session is safe from content-monitoring 828 attacks in which neither end collaborates with the attacker, but can 829 still be subverted by the endpoints. The most common ciphersuites 830 used for HTTPS today, for example, are based on using RSA encryption 831 in such a way that if an attacker has the private key, the attacker 832 can derive the session keys from passive observation of a session. 833 These ciphersuites are thus vulnerable to a static key exfiltration 834 attack - if the attacker obtains the server's private key once, then 835 they can decrypt all past and future sessions for that server. 837 Static key exfiltration attacks are prevented by including ephemeral, 838 per-session secret information in the keys used for a session. Most 839 IETF security protocols include modes of operation that have this 840 property. These modes are known in the literature under the heading 841 "perfect forward secrecy" (PFS) because even if an adversary has all 842 of the secrets for one session, the next session will use new, 843 different secrets and the attacker will not be able to decrypt it. 844 The Internet Key Exchange (IKE) protocol used by IPsec supports PFS 845 by default [RFC4306], and TLS supports PFS via the use of specific 846 ciphersuites [RFC5246]. 848 Dynamic key exfiltration cannot be prevented by protocol means. By 849 definition, any secrets that are used in the protocol will be 850 transmitted to the attacker and used to decrypt what the protocol 851 encrypts. Likewise, no technical means will stop a willing 852 collaborator from sharing keys with an attacker.
However, this 853 attack model also covers "unwitting collaborators", whose technical 854 resources are collaborating with the attacker without their owners' 855 knowledge. This could happen, for example, if flaws are built into 856 products or if malware is injected later on. 858 The best defense against becoming an unwitting collaborator is thus 859 to assure that end systems are well-vetted and secure. Transparency 860 is a major tool in this process [secure]. Open source software is 861 easier to evaluate for potential flaws than proprietary software, by 862 a wider array of independent analysts. Products that conform to 863 standards for cryptography and security protocols are limited in the 864 ways they can misbehave. And standards processes that are open and 865 transparent help ensure that the standards themselves do not provide 866 avenues for attack. 868 Standards can also define protocols that provide greater or lesser 869 opportunity for dynamic key exfiltration. Collaborators engaging in 870 key exfiltration through a standard protocol will need to use covert 871 channels in the protocol to leak information that can be used by the 872 attacker to recover the key. Such use of covert channels has been 873 demonstrated for SSL, TLS, and SSH [key-recovery]. Any protocol bits 874 that can be freely set by the collaborator can be used as a covert 875 channel, including, for example, TCP options or unencrypted traffic 876 sent before a STARTTLS message in SMTP or XMPP. Protocol designers 877 should consider what covert channels their protocols expose, and how 878 those channels can be exploited to exfiltrate key information. 880 Content exfiltration has some similarity to the dynamic exfiltration 881 case, in that nothing can prevent a collaborator from revealing what 882 they know, and the mitigations against becoming an unwitting 883 collaborator apply. In this case, however, applications can limit 884 what the collaborator is able to reveal. 
For example, the S/MIME and 885 PGP systems for secure email both deny intermediate servers access to 886 certain parts of the message [RFC5750][RFC2015]. Even if a server 887 were to provide an attacker with full access, the attacker would 888 still not be able to read the protected parts of the message. 890 Mechanisms like S/MIME and PGP are often referred to as "end-to-end" 891 security mechanisms, as opposed to "hop-by-hop" or "end-to-middle" 892 mechanisms like the use of SMTP over TLS. These two different 893 mechanisms address different types of attackers: Hop-by-hop 894 mechanisms protect against attackers on the wire (passive or active), 895 while end-to-end mechanisms protect against attackers within 896 intermediate nodes. Thus, neither of these mechanisms provides 897 complete protection by itself. For example: 899 o Two users messaging via Facebook over HTTPS are protected against 900 passive and active attackers in the network between the users and 901 Facebook. However, if Facebook is a collaborator in an 902 exfiltration attack, their communications can still be monitored. 903 They would need to encrypt their messages end-to-end in order to 904 protect themselves against this risk. 906 o Two users exchanging PGP-protected email have protected the 907 content of their exchange from network attackers and intermediate 908 servers, but the header information (e.g., To and From addresses) 909 is unnecessarily exposed to passive and active attackers that can 910 see communications among the mail agents handling the email 911 messages. These mail agents need to use hop-by-hop encryption and 912 traffic analysis mitigation to address this risk. 914 Mechanisms such as S/MIME and PGP are also known as "object-based" 915 security mechanisms (as opposed to "communications security" 916 mechanisms), since they operate at the level of objects, rather than 917 communications sessions.
Such secure objects can be safely handled by 918 intermediaries in order to realize, for example, store-and-forward 919 messaging. In the examples above, the encrypted instant messages or 920 email messages would be the secure objects. 922 The mitigations to the content exfiltration case are thus to regard 923 participants in the protocol as potential passive attackers 924 themselves, and apply the mitigations discussed above with regard to 925 passive attack. Information that is not necessary for these 926 participants to fulfill their role in the protocol can be encrypted, 927 and other information can be anonymized. 929 In summary, many of the basic tools for mitigating pervasive attack 930 already exist. As Edward Snowden put it, "properly implemented 931 strong crypto systems are one of the few things you can rely on" 932 [snowden]. The task for the Internet community is to ensure that 933 applications are able to use the strong crypto systems we have 934 defined - for example, TLS with PFS ciphersuites - and that these are 935 properly implemented. (And, one might add, turned on!) Some of this 936 work will require architectural changes to applications, e.g., in 937 order to limit the information that is exposed to servers. In many 938 other cases, however, the need is simply to make the best use we can 939 of the cryptographic tools we have. 941 7. Acknowledgements 943 o Thaler for list of attacks and taxonomy 945 o Security ADs for starting and managing the perpass discussion 947 o See PPA acks as well 949 8. TODO 951 o Ensure all bases are covered WRT threats to confidentiality 953 o Consider moving mitigations to a separate document per program 954 description 956 o Look at better alignment with draft-farrell-perpass-attack 958 o Better coverage of traffic analysis - PPA helped somewhat here but 959 the problem is hard 961 o Terminology alignment (after the program agrees the structure is 962 good) 964 9. References 966 9.1.
Normative References 968 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 969 Morris, J., Hansen, M., and R. Smith, "Privacy 970 Considerations for Internet Protocols", RFC 6973, July 971 2013. 973 9.2. Informative References 975 [pass1] The Guardian, "How the NSA is still harvesting your online 976 data", 2013, 977 . 980 [pass2] The Guardian, "NSA's Prism surveillance program: how it 981 works and what it can do", 2013, 982 . 985 [pass3] The Guardian, "XKeyscore: NSA tool collects 'nearly 986 everything a user does on the internet'", 2013, 987 . 990 [pass4] The Guardian, "How does GCHQ's internet surveillance 991 work?", n.d., . 994 [dec1] The New York Times, "N.S.A. Able to Foil Basic Safeguards 995 of Privacy on Web", 2013, 996 . 999 [dec2] The Guardian, "Project Bullrun - classification guide to 1000 the NSA's decryption program", 2013, 1001 . 1004 [dec3] The Guardian, "Revealed: how US and UK spy agencies defeat 1005 internet privacy and security", 2013, 1006 . 1009 [TOR] The Tor Project, "Tor", 2013, 1010 . 1012 [TOR1] Schneier, B., "How the NSA Attacks Tor/Firefox Users With 1013 QUANTUM and FOXACID", 2013, 1014 . 1017 [TOR2] The Guardian, "'Tor Stinks' presentation - read the full 1018 document", 2013, 1019 . 1022 [dir1] The Guardian, "NSA collecting phone records of millions of 1023 Verizon customers daily", 2013, 1024 . 1027 [dir2] The Guardian, "NSA Prism program taps in to user data of 1028 Apple, Google and others", 2013, 1029 . 1032 [dir3] The Guardian, "Sigint - how the NSA collaborates with 1033 technology companies", 2013, 1034 . 1037 [secure] Schneier, B., "NSA surveillance: A guide to staying 1038 secure", 2013, 1039 . 1042 [snowden] Technology Review, "NSA Leak Leaves Crypto-Math Intact but 1043 Highlights Known Workarounds", 2013, 1044 . 1048 [key-recovery] 1049 Golle, P., "The Design and Implementation of Protocol- 1050 Based Hidden Key Recovery", 2003, 1051 . 
1053 [RFC1035] Mockapetris, P., "Domain names - implementation and 1054 specification", STD 13, RFC 1035, November 1987. 1056 [RFC1918] Rekhter, Y., Moskowitz, R., Karrenberg, D., Groot, G., and 1057 E. Lear, "Address Allocation for Private Internets", BCP 1058 5, RFC 1918, February 1996. 1060 [RFC1939] Myers, J. and M. Rose, "Post Office Protocol - Version 3", 1061 STD 53, RFC 1939, May 1996. 1063 [RFC2015] Elkins, M., "MIME Security with Pretty Good Privacy 1064 (PGP)", RFC 2015, October 1996. 1066 [RFC2821] Klensin, J., "Simple Mail Transfer Protocol", RFC 2821, 1067 April 2001. 1069 [RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, 1070 A., Peterson, J., Sparks, R., Handley, M., and E. 1071 Schooler, "SIP: Session Initiation Protocol", RFC 3261, 1072 June 2002. 1074 [RFC3365] Schiller, J., "Strong Security Requirements for Internet 1075 Engineering Task Force Standard Protocols", BCP 61, RFC 1076 3365, August 2002. 1078 [RFC3501] Crispin, M., "INTERNET MESSAGE ACCESS PROTOCOL - VERSION 1079 4rev1", RFC 3501, March 2003. 1081 [RFC3851] Ramsdell, B., "Secure/Multipurpose Internet Mail 1082 Extensions (S/MIME) Version 3.1 Message Specification", 1083 RFC 3851, July 2004. 1085 [RFC4033] Arends, R., Austein, R., Larson, M., Massey, D., and S. 1086 Rose, "DNS Security Introduction and Requirements", RFC 1087 4033, March 2005. 1089 [RFC4301] Kent, S. and K. Seo, "Security Architecture for the 1090 Internet Protocol", RFC 4301, December 2005. 1092 [RFC4303] Kent, S., "IP Encapsulating Security Payload (ESP)", RFC 1093 4303, December 2005. 1095 [RFC4306] Kaufman, C., "Internet Key Exchange (IKEv2) Protocol", RFC 1096 4306, December 2005. 1098 [RFC4949] Shirey, R., "Internet Security Glossary, Version 2", RFC 1099 4949, August 2007. 1101 [RFC5246] Dierks, T. and E. Rescorla, "The Transport Layer Security 1102 (TLS) Protocol Version 1.2", RFC 5246, August 2008. 1104 [RFC5321] Klensin, J., "Simple Mail Transfer Protocol", RFC 5321, 1105 October 2008. 
1107 [RFC5655] Trammell, B., Boschi, E., Mark, L., Zseby, T., and A. 1108 Wagner, "Specification of the IP Flow Information Export 1109 (IPFIX) File Format", RFC 5655, October 2009. 1111 [RFC5750] Ramsdell, B. and S. Turner, "Secure/Multipurpose Internet 1112 Mail Extensions (S/MIME) Version 3.2 Certificate 1113 Handling", RFC 5750, January 2010. 1115 [RFC6120] Saint-Andre, P., "Extensible Messaging and Presence 1116 Protocol (XMPP): Core", RFC 6120, March 2011. 1118 [RFC6962] Laurie, B., Langley, A., and E. Kasper, "Certificate 1119 Transparency", RFC 6962, June 2013. 1121 [RFC6698] Hoffman, P. and J. Schlyter, "The DNS-Based Authentication 1122 of Named Entities (DANE) Transport Layer Security (TLS) 1123 Protocol: TLSA", RFC 6698, August 2012. 1125 [RFC7011] Claise, B., Trammell, B., and P. Aitken, "Specification of 1126 the IP Flow Information Export (IPFIX) Protocol for the 1127 Exchange of Flow Information", STD 77, RFC 7011, September 1128 2013. 1130 [RFC7258] Farrell, S. and H. Tschofenig, "Pervasive Monitoring Is an 1131 Attack", BCP 188, RFC 7258, May 2014. 1133 Authors' Addresses 1135 Richard Barnes 1137 Email: rlb@ipv.sx 1139 Bruce Schneier 1141 Email: schneier@schneier.com 1143 Cullen Jennings 1145 Email: fluffy@cisco.com 1147 Ted Hardie 1149 Email: ted.ietf@gmail.com 1151 Brian Trammell 1153 Email: ietf@trammell.ch 1155 Christian Huitema 1157 Email: huitema@huitema.net 1159 Daniel Borkmann 1161 Email: dborkman@redhat.com