2 Network Working Group I. Symeonidis 3 Internet-Draft University of Luxembourg 4 Intended status: Standards Track B.
Hoeneisen 5 Expires: May 3, 2020 Ucom.ch 6 October 31, 2019 8 Privacy and Security Threat Analysis for Private Messaging 9 draft-symeonidis-pearg-private-messaging-threats-00 11 Abstract 13 Modern email and instant messaging applications offer private 14 communications between users. As IM and Email network designs become 15 more similar, both share common concerns about security and privacy 16 of the information exchanged. However, the solutions available to 17 mitigate these threats and to comply with the requirements may 18 differ. The two communication methods are, in fact, built on 19 differing assumptions and technologies. Assuming a scenario of 20 untrusted servers, we analyze threats against message delivery and 21 storage, the requirements that these systems need, and the solutions 22 that exist in order to help implement secure and private messaging. 23 From the discussed technological challenges and requirements, we aim 24 to derive an open standard for private messaging. 26 Status of This Memo 28 This Internet-Draft is submitted in full conformance with the 29 provisions of BCP 78 and BCP 79. 31 Internet-Drafts are working documents of the Internet Engineering 32 Task Force (IETF). Note that other groups may also distribute 33 working documents as Internet-Drafts. The list of current Internet- 34 Drafts is at https://datatracker.ietf.org/drafts/current/. 36 Internet-Drafts are draft documents valid for a maximum of six months 37 and may be updated, replaced, or obsoleted by other documents at any 38 time. It is inappropriate to use Internet-Drafts as reference 39 material or to cite them other than as "work in progress." 41 This Internet-Draft will expire on May 3, 2020. 43 Copyright Notice 45 Copyright (c) 2019 IETF Trust and the persons identified as the 46 document authors. All rights reserved. 
48 This document is subject to BCP 78 and the IETF Trust's Legal 49 Provisions Relating to IETF Documents 50 (https://trustee.ietf.org/license-info) in effect on the date of 51 publication of this document. Please review these documents 52 carefully, as they describe your rights and restrictions with respect 53 to this document. Code Components extracted from this document must 54 include Simplified BSD License text as described in Section 4.e of 55 the Trust Legal Provisions and are provided without warranty as 56 described in the Simplified BSD License. 58 Table of Contents 60 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 61 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 4 62 1.2. Terms . . . . . . . . . . . . . . . . . . . . . . . . . . 4 63 2. System Model . . . . . . . . . . . . . . . . . . . . . . . . 4 64 2.1. Entities . . . . . . . . . . . . . . . . . . . . . . . . 4 65 2.2. Assets and Functional Requirements . . . . . . . . . . . 5 66 3. Threat Analyses and Requirements . . . . . . . . . . . . . . 5 67 3.1. Adversarial Model . . . . . . . . . . . . . . . . . . . . 5 68 3.2. Assumptions . . . . . . . . . . . . . . . . . . . . . . . 6 69 3.3. Security Threats and Requirements . . . . . . . . . . . . 6 70 3.3.1. Spoofing and Entity Authentication . . . . . . . . . 6 71 3.3.2. Information Disclosure and Confidentiality . . . . . 7 72 3.3.3. Tampering With Data and Data Authentication . . . . . 7 73 3.3.4. Repudiation and Accountability (Non-Repudiation) . . 7 74 3.3.5. Elevation of Privilege and Authorization . . . . . . 8 75 3.4. Privacy Threats and Requirements . . . . . . . . . . . . 8 76 3.4.1. Identifiability - Anonymity . . . . . . . . . . . . . 8 77 3.4.2. Linkability - Unlinkability . . . . . . . . . . . . . 8 78 3.4.3. Detectability and Observability - Undetectability . . 9 79 3.5. Information Disclosure - Confidentiality . . . . . . . . 9 80 3.6. Non-repudiation and Deniability . . . . . . . . . . . . . 9 81 3.6.1. 
Policy Non-compliance and Policy compliance . . . . . 10 82 4. Security Considerations . . . . . . . . . . . . . . . . . . . 10 83 5. Privacy Considerations . . . . . . . . . . . . . . . . . . . 10 84 6. Future Key Challenges . . . . . . . . . . . . . . . . . . . . 10 85 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 10 86 8. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 10 87 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 10 88 9.1. Normative References . . . . . . . . . . . . . . . . . . 10 89 9.2. Informative References . . . . . . . . . . . . . . . . . 11 90 Appendix A. Document Changelog . . . . . . . . . . . . . . . . . 12 91 Appendix B. Open Issues . . . . . . . . . . . . . . . . . . . . 12 92 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 12 94 1. Introduction 96 Private messaging should ensure that, in an exchange of messages 97 between (two) peers, no one but the sender and the receiver of the 98 communication will be capable of reading the messages exchanged at 99 any (past, current, or future) time. Essentially, no one but the 100 communicating peers should ever have access to the messages, whether in 101 transit (e.g., via telecom or Internet providers, or other intermediary parties) 102 or in storage (e.g., on messaging servers). By private messaging, we are 103 referring to Instant Messaging (IM) [RFC2779] applications, such as WhatsApp and 104 Signal, and email applications, such as the centralized Protonmail 105 and the fully decentralized pEp [I-D.birk-pep]. 107 The aim of this document is to provide an open standard for private 108 messaging requirements, as well as a unified evaluation framework. 109 The framework catalogues security and privacy threats and the 110 requirements that correspond to those threats. IM and Email applications 111 have common feature design characteristics and support a common set 112 of information assets for transmission during communication between 113 peers.
For example, applications for both systems should support 113 message exchange of text and files (e.g., attachments) in a private 114 manner. 117 Despite having common characteristics, IM and Email have network 118 design divergences in areas such as responsiveness and synchronicity. 119 For example, low latency and synchronous delivery were traditionally characteristic of 120 instant messaging, whereas email was high-latency and asynchronous. As IM 121 and Email network designs become more similar, approaches to security 122 and privacy should be able to address both types of communication. 123 Current IM applications tend to be asynchronous, allowing delivery of 124 messages when the communicating parties are not online at the same 125 time. 127 Solutions available to implement private messaging in the two types 128 of applications may call for different mitigation mechanisms and 129 design choices. For instance, confidentiality can be preserved in 130 multiple ways and with various cryptographic primitives. The design 131 choice depends on the expected level of protection and the 132 background of the user. For instance, for users whose lives may be 133 at stake, such as journalists, whistleblowers, or political 134 dissidents, the design choices for requirements and mitigation 135 mechanisms can be (and often are) much more advanced than those for 136 organizations and general end-users. Despite this distinction, 137 privacy and security on the Internet are human rights, and easily-enabled 138 means to protect these rights need to exist. But in cases 139 where stronger protections are required, usability may come second to 140 more robust protection. 142 The objective of this document is to create an open standard for 143 secure messaging requirements. The open standard for private 144 messaging aims to serve as a unified evaluation framework, including 145 an adversarial model, threats, and requirements.
With this document, 146 we catalogue the threats and requirements for implementing secure and 147 private messaging systems. In this current version, we discuss two 148 key design features of IM and Email: message delivery and storage/ 149 archival. This draft is an ongoing work in progress, and the list of 150 requirements discussed here is not exhaustive. However, our work 151 already shows an emerging and rich set of security and privacy 152 challenges. 154 IM can additionally support voice/video calls, which is a 155 further feature/asset for which threats and 156 requirements can be assessed. 158 1.1. Requirements Language 160 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 161 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 162 document are to be interpreted as described in [RFC2119]. 164 1.2. Terms 166 The following terms are defined for the scope of this document: 168 o Man-in-the-middle (MITM) attack: cf. [RFC4949], which states: "A 169 form of active wiretapping attack in which the attacker intercepts 170 and selectively modifies communicated data to masquerade as one or 171 more of the entities involved in a communication association." 173 2. System Model 175 2.1. Entities 177 o Users: The communicating parties who exchange messages, typically 178 referred to as senders and receivers. 180 o Messaging operators and network nodes: The service 181 providers and network nodes that are responsible for message 182 delivery and synchronization. 184 o Third parties: Any other entity that interacts with the messaging 185 system. 187 2.2. Assets and Functional Requirements 189 This section outlines a private messaging system. It describes the 190 functionalities that such a system needs to support and the information that 191 the system can collect from users as assets.
We follow the 192 requirements extracted from real-world systems and applications as 193 well as from the academic literature on email and instant messaging 194 [Unger] [Ermoshina] [Clark]. 196 Assets: 198 o Content: text, files (e.g., attachments), voice/video 200 o Identities: sender/receiver identity, contact list 202 o Metadata: sender/receiver, timing, frequency, packet size 204 Functionalities: 206 o [Email/IM] Messages: send and receive text + attachments 208 * Peer or group: two participants (peer) or more than two (group) communicating 210 o [IM] Voice / video call 212 o [Email/IM] Archive and search: of messages and attachments 214 o [Email/IM] Contacts: synchronisation and matching 216 o [Email/IM] Multi-device support: synchronisation across multiple 217 devices 219 3. Threat Analyses and Requirements 221 This section describes a set of possible threats. Note that 222 typically not all threats can be addressed in a single system, due to 223 conflicting requirements. 225 3.1. Adversarial Model 227 An adversary is any entity who poses threats to the 228 communication system and whose goal is to gain improper access to 229 message content and users' information. An adversary can be anyone who is 230 involved in the communication, such as users of the system, message 231 operators, network nodes, or even third parties. 233 o Internal - external: An internal adversary can seize control of entities 234 within the system, for example extracting information from a specific 235 entity or preventing a message from being sent. An external 236 adversary can only compromise the communication channels 237 themselves, eavesdropping on and tampering with messages, such as by 238 performing Man-in-the-Middle (MitM) attacks. 240 o Local - global: A local adversary can control one entity that is 241 part of a system, while a global adversary can seize control of 242 several entities in a system.
A global adversary can also monitor 243 and control several parts of the network, granting them the 244 ability to correlate network traffic, which is crucial for 245 performing timing attacks. 247 o Passive - active: A passive attacker can only eavesdrop and 248 extract information, while an active attacker can tamper with the 249 messages themselves, such as adding, removing, or even modifying 250 them. 252 Attackers can combine these adversarial properties in a number of 253 ways, increasing the effectiveness - and probable success - of their 254 attacks. For instance, an external global passive attacker can 255 monitor multiple channels of a system, while an internal local active 256 adversary can tamper with the messages of a targeted messaging 257 provider [Diaz]. 259 3.2. Assumptions 261 In this current work, we assume that endpoints, such as the users' 262 mobile devices, are secure. Moreover, we assume that an 263 adversary cannot break any of the underlying cryptographic primitives. 265 3.3. Security Threats and Requirements 267 3.3.1. Spoofing and Entity Authentication 269 Spoofing occurs when an adversary gains improper access to the system 270 upon successfully impersonating the profile of a valid user. The 271 adversary may also attempt to send or receive messages on behalf of 272 that user. The threat posed by an adversary's spoofing capabilities 273 is typically based on local control of one entity or a set of 274 entities, with each compromised account typically used to 275 communicate with different end-users. In order to mitigate spoofing 276 threats, it is essential to have entity authentication mechanisms in 277 place that verify that a user is the legitimate owner of a 278 messaging service account. Entity authentication mechanisms 279 typically rely on information or physical traits that only the 280 valid user should know or possess, such as passwords, valid public keys, 281 or biometric data like fingerprints.
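As an illustration of the entity authentication mechanisms described above, the following sketch shows a challenge-response exchange in which a server verifies that a client holds an account secret without the secret itself crossing the wire. This is a minimal, illustrative example built only from generic primitives (PBKDF2 and HMAC from the Python standard library); the function names and parameters are our own and are not part of any protocol cited in this document.

```python
import hashlib
import hmac
import secrets

def derive_key(password: str, salt: bytes) -> bytes:
    # Derive a per-account key from the user's password (PBKDF2-HMAC-SHA256).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def make_challenge() -> bytes:
    # A fresh random nonce per login prevents replay of earlier responses.
    return secrets.token_bytes(32)

def respond(key: bytes, challenge: bytes) -> bytes:
    # The client proves possession of the key by MACing the server's challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    # The server recomputes the expected MAC; constant-time comparison
    # avoids leaking information through timing side channels.
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

In use, the server would store only the salt and derived key, issue a fresh challenge per authentication attempt, and accept the session only if verify() succeeds; real deployments would add rate limiting and bind the exchange to a confidential channel.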
283 3.3.2. Information Disclosure and Confidentiality 285 An adversary aims to eavesdrop on and disclose information about the 286 content of a message. They can attempt to perform a 287 man-in-the-middle (MitM) attack. For example, an adversary can attempt to 288 position themselves between two communicating parties, such as by 289 gaining access to the messaging server, and remain undetected while 290 collecting information transmitted between the intended users. The 291 threat can stem from gaining local control of one 292 point of a communication channel, such as an entity or a communication 293 link within the network. The adversarial threat can also be broader 294 in scope, such as seizing global control of several entities and 295 communication links within the channel. That grants the adversary 296 the ability to correlate and control traffic in order to execute 297 timing attacks, even in end-to-end communication systems [Tor]. 298 Therefore, confidentiality of messages exchanged within a system 299 should be guaranteed with the use of encryption schemes. 301 3.3.3. Tampering With Data and Data Authentication 303 An adversary can also modify the information stored and exchanged 304 between the communicating entities in the system. For instance, an 305 adversary may attempt to alter an email or an instant message by 306 changing its content. Such an adversary can be anyone but the 307 communicating users, such as the message operators, the 308 network nodes, or third parties. The threat posed by an adversary lies 309 in gaining local control of an entity that can alter messages, 310 usually resulting in a MitM attack on an encrypted channel. 311 Therefore, no honest party should accept a message that was modified 312 in transit. Data authentication of the messages exchanged needs to be 313 guaranteed, such as with the use of Message Authentication Codes (MACs) 314 and digital signatures. 316 3.3.4.
Repudiation and Accountability (Non-Repudiation) 318 Adversaries can repudiate, i.e., deny, the status of a message towards 319 users of the system. For instance, an adversary may attempt to 320 provide inaccurate information about an action performed, such as 321 about sending or receiving an email. An adversary can be anyone who 322 is involved in the communication, such as the users of the system, the 323 message operators, and the network nodes. To mitigate repudiation 324 threats, accountability and non-repudiation of actions performed 325 must be guaranteed. Non-repudiation of actions can include proof of 326 origin, submission, delivery, and receipt between the intended users. 327 Non-repudiation can be achieved with the use of cryptographic schemes, 328 such as digital signatures, and audit trails, such as timestamps. 330 3.3.5. Elevation of Privilege and Authorization 332 An adversary may attempt to elevate privileges, aiming to gain access 333 to the assets of other users or the resources of the system. For 334 instance, an adversary may attempt to become an administrator of a 335 message group or a superuser of the system, aiming at retrieving 336 users' messages or executing operations with superuser rights. Therefore, 337 authorization mechanisms such as access control lists that comply 338 with the principle of least privilege for user accounts and processes 339 should be applied. 341 3.4. Privacy Threats and Requirements 343 3.4.1. Identifiability - Anonymity 345 Identifiability is defined as the extent to which a specific user can 346 be identified from a set of users, the identifiability set. 347 Identification is the process of linking information to allow the 348 inference of a particular user's identity [RFC6973]. An adversary 349 can identify a specific user associated with Items of Interest (IOI), 350 which include items such as the ID of a subject, a sent message, or 351 an action performed.
For instance, an adversary may identify the 352 sender of a message by examining the headers of a message exchanged 353 within a system. To mitigate identifiability threats, the anonymity 354 of users must be guaranteed. Anonymity is defined from the attacker's 355 perspective as the condition that the "attacker cannot sufficiently identify the subject 356 within a set of subjects, the anonymity set" [Pfitzmann]. 357 Essentially, in order to make anonymity possible, there always needs 358 to be a set of possible users such that, for an adversary, the 359 communicating user is equally likely to be any user in the 360 set [Diaz]. Thus, an adversary cannot identify the sender of 361 a message. Anonymity can be achieved with the use of pseudonyms and 362 cryptographic schemes such as anonymous remailers (i.e., mixnets), 363 anonymous communication channels (e.g., Tor), and secret sharing. 365 3.4.2. Linkability - Unlinkability 367 Linkability occurs when an adversary can sufficiently distinguish, 368 within a given system, that two or more IOIs such as subjects (i.e., 369 users), objects (i.e., messages), or actions are related to each 370 other [Pfitzmann]. For instance, an adversary may be able to relate 371 pseudonyms by analyzing exchanged messages and deduce that the 372 pseudonyms belong to one user (though the user may not necessarily be 373 identified in this process). Therefore, unlinkability of IOIs should 374 be guaranteed through the use of pseudonyms as well as cryptographic 375 schemes such as anonymous credentials. 377 3.4.3. Detectability and Observability - Undetectability 379 Detectability occurs when an adversary is able to sufficiently 380 distinguish an IOI, such as messages exchanged within the system, 381 from random noise [Pfitzmann]. Observability occurs when such 382 detectability is combined with a loss of anonymity for the entities 383 within that same system.
An adversary can exploit these states in 384 order to infer linkability and possibly identification of users 385 within a system. Therefore, undetectability of IOIs should be 386 guaranteed, which also ensures unobservability. Undetectability of 387 an IOI is defined as the condition that "the attacker cannot sufficiently 388 distinguish whether it exists or not" [Pfitzmann]. Undetectability 389 can be achieved through the use of cryptographic schemes such as mixnets 390 and obfuscation mechanisms such as the insertion of dummy 391 traffic within a system. 393 3.5. Information Disclosure - Confidentiality 395 Information disclosure - or loss of confidentiality - about users, 396 message content, metadata, or other information is not only a security 397 threat but also a privacy threat that a communication system can face. For 398 example, a successful MitM attack can yield metadata that can be used 399 to determine with whom a specific user communicates, and how 400 frequently. To guarantee the confidentiality of messages and prevent 401 information disclosure, security measures need to be in place, such as 402 cryptographic schemes like symmetric, asymmetric, or 403 homomorphic encryption and secret sharing. 405 3.6. Non-repudiation and Deniability 407 In contrast to its role as a security property, non-repudiation can be a 408 threat to a user's privacy in private messaging systems. As discussed in 409 Section 3.3.4, non-repudiation should be guaranteed for users. However, non-repudiation 410 carries a potential threat in itself when it is 411 used against a user in certain instances. For example, whistleblowers 412 may find non-repudiation used against them by adversaries, 413 particularly in countries with strict censorship policies and in 414 cases where human lives are at stake.
Adversaries in these 415 situations may seek to use pieces of evidence collected within a 416 communication system to prove to others that a whistle-blowing user 417 was the originator of a specific message. Therefore, plausible 418 deniability is essential for these users, to ensure that an adversary 419 can neither confirm nor contradict that a specific user sent a 420 particular message. Deniability can be guaranteed through the use of 421 cryptographic protocols such as off-the-record messaging. 423 3.6.1. Policy Non-compliance and Policy compliance 425 Policy non-compliance can be a threat to the privacy of users in a 426 private messaging system. An adversary can attempt to process 427 information about users unlawfully and in violation of regulations. 428 For example, it may attempt to collect and process users' information exchanged 429 in emails without the users' notification and explicit consent. That 430 can result in unauthorized processing of users' information under the 431 General Data Protection Regulation [GDPR], enabling, e.g., profiling, 432 advertising, and censorship. Therefore, data protection policy 433 compliance must be guaranteed. It can be achieved with auditing, such 434 as a Data Protection Impact Assessment [GDPR]. 436 4. Security Considerations 438 Relevant security considerations are outlined in Section 3.3. 440 5. Privacy Considerations 442 Relevant privacy considerations are outlined in Section 3.4. 444 6. Future Key Challenges 446 Key challenges ahead include reducing metadata leakage and furthering 447 standardization (i.e., preventing further fragmentation). 449 7. IANA Considerations 451 This document requests no action from IANA. 453 [[ RFC Editor: This section may be removed before publication. ]] 455 8.
Acknowledgments 457 The authors would like to thank the following people who have 458 provided feedback or significant contributions to the development of 459 this document: Athena Schumacher, Claudio Luck, Hernani Marques, 460 Kelly Bristol, Krista Bennett, and Nana Karlstetter. 462 9. References 464 9.1. Normative References 466 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 467 Requirement Levels", BCP 14, RFC 2119, 468 DOI 10.17487/RFC2119, March 1997, 469 . 471 [RFC4949] Shirey, R., "Internet Security Glossary, Version 2", 472 FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007, 473 . 475 9.2. Informative References 477 [Clark] Clark, J., van Oorschot, P., Ruoti, S., Seamons, K., and 478 D. Zappala, "Securing Email", CoRR abs/1804.07706, 2018. 480 [Diaz] Diaz, C., Seys, St., Claessens, J., and B. Preneel, 481 "Towards Measuring Anonymity", PET Privacy Enhancing 482 Technologies, Second International Workshop, San 483 Francisco, CA, USA, April 14-15, 2002, Revised Papers, pp. 484 54-68, 2002. 486 [Ermoshina] 487 Ermoshina, K., Musiani, F., and H. Halpin, "End-to-End 488 Encrypted Messaging Protocols: An Overview", INSCI 2016: 489 pp. 244-254, 2016. 491 [GDPR] "General Data Protection Regulation 2016/680 of the 492 European Parliament and of the Council (GDPR).", Official 493 Journal of the European Union, L 119/89, 4.5.2016 , April 494 2016, . 496 [I-D.birk-pep] 497 Marques, H., Luck, C., and B. Hoeneisen, "pretty Easy 498 privacy (pEp): Privacy by Default", draft-birk-pep-04 499 (work in progress), July 2019. 501 [Pfitzmann] 502 Pfitzmann, A. and M. Hansen, "A terminology for talking 503 about privacy by data minimization: Anonymity, 504 unlinkability, undetectability, unobservability, 505 pseudonymity, and identity management", 2010, 506 . 509 [RFC2779] Day, M., Aggarwal, S., Mohr, G., and J. Vincent, "Instant 510 Messaging / Presence Protocol Requirements", RFC 2779, 511 DOI 10.17487/RFC2779, February 2000, 512 . 
514 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 515 Morris, J., Hansen, M., and R. Smith, "Privacy 516 Considerations for Internet Protocols", RFC 6973, 517 DOI 10.17487/RFC6973, July 2013, 518 . 520 [Tor] Project, T., "One cell is enough to break Tor's 521 anonymity", June 2019, . 524 [Unger] Unger, N., Dechand, S., Bonneau, J., Fahl, S., Perl, H., 525 Goldberg, I., and M. Smith, "SoK: Secure Messaging", 526 IEEE Proceedings - 2015 IEEE Symposium on Security and 527 Privacy, SP 2015, pages 232-249, July 2015, 528 . 531 Appendix A. Document Changelog 533 [[ RFC Editor: This section is to be removed before publication ]] 535 o draft-symeonidis-pearg-private-messaging-threats-00: 537 * Initial version 539 * this document partially replaces draft-symeonidis-medup- 540 requirements-00 542 Appendix B. Open Issues 544 [[ RFC Editor: This section should be empty and is to be removed 545 before publication ]] 547 o Add more text on Group Messaging requirements 549 o Decide on whether or not "enterprise requirement" will go to this 550 document 552 Authors' Addresses 554 Iraklis Symeonidis 555 University of Luxembourg 556 29, avenue JF Kennedy 557 L-1855 Luxembourg 558 Luxembourg 560 Email: iraklis.symeonidis@uni.lu 561 URI: https://wwwen.uni.lu/snt/people/iraklis_symeonidis 562 Bernie Hoeneisen 563 Ucom Standards Track Solutions GmbH 564 CH-8046 Zuerich 565 Switzerland 567 Phone: +41 44 500 52 40 568 Email: bernie@ietf.hoeneisen.ch (bernhard.hoeneisen AT ucom.ch) 569 URI: https://ucom.ch/