2 Network Working Group B. Aboba 3 Internet-Draft Microsoft Corporation 4 Intended status: Informational J. Morris 5 Expires: April 21, 2011 CDT 6 J. Peterson 7 NeuStar, Inc. 8 H.
Tschofenig 9 Nokia Siemens Networks 10 October 18, 2010 12 Privacy Considerations for Internet Protocols 13 draft-morris-privacy-considerations-00.txt 15 Abstract 17 This document aims to make protocol designers aware of the privacy- 18 related design choices and offers guidance for writing privacy 19 considerations in IETF documents. Similar to other design aspects, 20 the IETF's influence on actual deployment is limited. We discuss 21 these limitations but are convinced that protocol architects do 22 have a role to play in privacy-friendly design by making more 23 conscious decisions, and by documenting them. 25 Status of this Memo 27 This Internet-Draft is submitted in full conformance with the 28 provisions of BCP 78 and BCP 79. 30 Internet-Drafts are working documents of the Internet Engineering 31 Task Force (IETF). Note that other groups may also distribute 32 working documents as Internet-Drafts. The list of current Internet- 33 Drafts is at http://datatracker.ietf.org/drafts/current/. 35 Internet-Drafts are draft documents valid for a maximum of six months 36 and may be updated, replaced, or obsoleted by other documents at any 37 time. It is inappropriate to use Internet-Drafts as reference 38 material or to cite them other than as "work in progress." 40 This Internet-Draft will expire on April 21, 2011. 42 Copyright Notice 44 Copyright (c) 2010 IETF Trust and the persons identified as the 45 document authors. All rights reserved. 47 This document is subject to BCP 78 and the IETF Trust's Legal 48 Provisions Relating to IETF Documents 49 (http://trustee.ietf.org/license-info) in effect on the date of 50 publication of this document. Please review these documents 51 carefully, as they describe your rights and restrictions with respect 52 to this document.
Code Components extracted from this document must 53 include Simplified BSD License text as described in Section 4.e of 54 the Trust Legal Provisions and are provided without warranty as 55 described in the Simplified BSD License. 57 Table of Contents 59 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 60 2. Historical Background . . . . . . . . . . . . . . . . . . . . 5 61 3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 62 4. Threat Model . . . . . . . . . . . . . . . . . . . . . . . . . 13 63 5. Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . 15 64 6. Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 65 6.1. Presence . . . . . . . . . . . . . . . . . . . . . . . . . 16 66 6.2. AAA for Network Access . . . . . . . . . . . . . . . . . . 18 67 6.3. SIP for Internet Telephony . . . . . . . . . . . . . . . . 20 68 7. Security Considerations . . . . . . . . . . . . . . . . . . . 21 69 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 70 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 23 71 10. References . . . . . . . . . . . . . . . . . . . . . . . . . . 24 72 10.1. Normative References . . . . . . . . . . . . . . . . . . . 24 73 10.2. Informative References . . . . . . . . . . . . . . . . . . 24 74 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 28 76 1. Introduction 78 The IETF is known for its contributions to the design of the Internet, 79 and the specifications IETF participants produce belong to different 80 categories, such as technical specifications, best current practice 81 descriptions, and architectural documentation. While these 82 documents do not mandate a specific type of implementation, they are 83 often, if not always, impacted by different architectural design 84 decisions.
These design decisions are influenced by technical 85 aspects, expectations about deployment incentives of the involved 86 entities, operational considerations, security frameworks, etc. 88 This document aims to make protocol designers aware of the privacy- 89 related design choices and offers guidance for writing privacy 90 considerations in IETF documents. Similar to other design aspects, 91 there is only a certain degree of influence a protocol designer 92 working in a standards developing organization has on the final 93 deployment outcome. We discuss these limitations in Section 3. 94 Nevertheless, we believe that the IETF community overall does have a 95 role to play in making specifications more privacy-friendly: being 96 aware of how design decisions impact privacy, reflecting them in 97 the protocol design, and documenting the chosen design choices and 98 potential challenges when deploying a single protocol or an entire 99 suite of protocols. 101 From the activities in the industry one can observe three schools of 102 thought in the work on privacy, namely 104 Privacy by Technology: This approach builds on the observation that 105 privacy can be addressed in the design of a protocol as a technical 106 mechanism. One approach is to address a specific application 107 problem by sharing fewer data items with other parties (i.e. data 108 minimization). Limiting data sharing also avoids the need to 109 evaluate how consent is obtained, to define policies around 110 how to protect data, etc. The main idea therefore is that 111 different architectural designs lead to different results with 112 respect to privacy. 114 Examples in the area of location privacy can be found in 115 [EFF-Privacy]. These solution approaches often make heavy use of 116 cryptographic techniques, such as threshold cryptography and 117 secret sharing schemes.
119 Privacy by Policy: With this approach it is assumed that privacy 120 protection is to a large degree a matter of obtaining the 121 consent of the user in the form of privacy policies. 122 Protection of the customer's privacy is therefore largely a 123 responsibility of the company collecting, processing, and storing 124 personal data. Notice and choice are offered to the customer and 125 backed up by an appropriate legal framework. 127 An example in the area of location-based services is a recent 128 publication by CTIA [CTIA]. 130 Policy/Technology Hybrid: This approach presents a middle ground 131 where some privacy enhancing features can be provided by 132 technology, and made transparent to those who implement (via 133 explicit recommendations for implementation, configuration and 134 deployment best current practices, or implicitly by raising 135 awareness via a discussion about privacy in technical 136 specifications), but other aspects can only be provided and 137 enforced by the parties who decide about the deployment. 138 Deployments often rely on the expectation that an 139 appropriate legal framework exists. 141 The authors believe that the policy/technology hybrid approach is the 142 most practical one and therefore suggest it to be the leading 143 paradigm in privacy investigations within the IETF. 145 This document is structured as follows: First, we provide a brief 146 introduction to the history of privacy in Section 2. In Section 3 we 147 illustrate what is in scope of the IETF work and where the 148 responsibility of the IETF ends. In Section 4 we discuss the main 149 threat model for privacy investigations. In Section 5 we propose 150 guidelines for investigating privacy within IETF specifications and in 151 Section 6 we discuss privacy characteristics of a few IETF 152 protocols and explain what privacy features have been provided until 153 now. 155 2.
Historical Background 157 The "right to be let alone" is a phrase coined by Warren and Brandeis 158 in their seminal Harvard Law Review article on privacy [Warren]. 159 They were the first scholars to recognize that a right to privacy had 160 evolved in the 19th century to embrace not only physical privacy but 161 also a potential "injury of the feelings", which could, for example, 162 result from the public disclosure of embarrassing private facts. 164 In 1967 Westin [Westin] described privacy as a "personal adjustment 165 process" in which individuals balance "the desire for privacy with 166 the desire for disclosure and communication" in the context of social 167 norms and their environment. Privacy thus requires that an 168 individual has a means to exercise selective control of access to the 169 self and is aware of the potential consequences of exercising that 170 control [Altman]. 172 Efforts to define and analyze the privacy concept evolved 173 considerably in the 20th century. In 1975, Altman conceptualized 174 privacy as a "boundary regulation process whereby people optimize 175 their accessibility along a spectrum of 'openness' and 'closedness' 176 depending on context" [Altman]. "Privacy is the claim of 177 individuals, groups, or institutions to determine for themselves 178 when, how, and to what extent information about them is communicated 179 to others. Viewed in terms of the relation of the individual to 180 social participation, privacy is the voluntary and temporary 181 withdrawal of a person from the general society through physical or 182 psychological means, either in a state of solitude or small-group 183 intimacy or, when among larger groups, in a condition of anonymity or 184 reserve." [Westin]. 186 Note: Altman and Westin were referring to nonelectronic environments, 187 where privacy intrusion was typically based on fresh information, 188 referring to one particular person only, and stemming from traceable 189 human sources. 
The scope of possible privacy breaches was therefore 190 rather limited. Today, in contrast, details about an individual's 191 activities are typically stored over a longer period of time, 192 collected from many different sources, and information about almost 193 every activity in life is available electronically. 195 In 1980, the Organization for Economic Co-operation and Development 196 (OECD) published its Guidelines on the Protection of Privacy and 197 Trans-Border Flows of Personal Data [OECD], whose eight principles 198 are often referred to as Fair Information Practices (FIPs). Fair 199 information practices include the following principles: 201 Notice and Consent: Before the collection of data, the data subject 202 should be provided notice of what information is being collected 203 and for what purpose, and an opportunity to choose whether to 204 accept the data collection and use. In Europe, data collection 205 cannot proceed unless the data subject has unambiguously given his 206 consent (with exceptions). 208 Collection Limitation: Data should be collected for specified, 209 explicit and legitimate purposes. The data collected should be 210 adequate, relevant and not excessive in relation to the purposes 211 for which they are collected. 213 Use/Disclosure Limitation: Data should be used only for the purpose 214 for which it was collected and should not be used or disclosed in 215 any way incompatible with those purposes. 217 Retention Limitation: Data should be kept in a form that permits 218 identification of the data subject no longer than is necessary for 219 the purposes for which the data were collected. 221 Accuracy: The party collecting and storing data is obligated to 222 ensure its accuracy and, where necessary, keep it up to date; 223 every reasonable step must be taken to ensure that data which are 224 inaccurate or incomplete are corrected or deleted.
226 Access: A data subject should have access to data about himself, in 227 order to verify its accuracy and to determine how it is being 228 used. 230 Security: Those holding data about others must take steps to protect 231 its confidentiality. 233 The OECD guidelines, and also more recent ones like the Madrid resolution 234 [Madrid] or the Granada Charter of Privacy in a Digital World 235 [Granada], provide a useful understanding of how to provide privacy 236 protection, but these guidelines quite naturally stay at a higher 237 level. As such, they do not aim to evaluate the tradeoffs in 238 addressing privacy protection in the different stages of the 239 development process, as illustrated in Figure 1. 241 US regulatory and self-regulatory efforts supported by the Federal 242 Trade Commission (FTC) have focused on a subset of these principles, 243 namely notice, choice, access, and security, rather than minimizing 244 data collection or limiting use. Hence, they are sometimes labeled 245 as the "notice and choice" approach to privacy. From a practical 246 point of view it became evident that companies are reluctant to stop 247 collecting and using data, but individuals expect to remain in control 248 of its usage. Today, the effectiveness of the "notice and choice" 249 approach in dealing with privacy violations is heavily 250 criticized [limits]. 252 Among these considerations (although often implicit) are assumptions on 253 how information is exchanged between different parties, and for 254 certain protocols this information may help to identify entities, and 255 potentially the humans behind them. Without doubt, not all the information 256 exchanged is equal. The terms 'personal data' [DPD95] and 257 Personally Identifiable Information (PII) [SP800-122] have become 258 common language in the vocabulary of privacy experts.
It seems 259 therefore understandable that regulators around the globe have 260 focused on the type of data being exchanged and have provided laws 261 according to the level of sensitivity. Medical data is treated 262 differently in many jurisdictions than blog comments. For an initial 263 investigation it is intuitive and helpful to determine whether a 264 specific protocol or application may be privacy-sensitive. The ever- 265 increasing ability of parties on the Internet to collect, aggregate, 266 and reason about information collected from a wide range of 267 sources requires further thinking about other potentially 268 privacy-sensitive items. The recent example of browser 269 fingerprinting [browser-fingerprinting] shows how many information 270 items, when combined, can lead to a privacy threat. 272 The following list contains examples of information that may be 273 considered personal data: 275 o Name 277 o Address information 279 o Phone numbers, email addresses, SIP/XMPP URIs, other identifiers 281 o IP and MAC addresses or other host-specific persistent identifiers 282 that consistently link to a particular person or small, well- 283 defined group of people 285 o Information identifying personally owned property, such as a vehicle 286 registration number 288 Data minimization means that, first of all, the possibility to collect 289 personal data about others should be minimized. Next, within the 290 remaining possibilities, collecting personal data should be 291 minimized. Finally, the time for which collected personal data is 292 stored should be minimized.
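The minimization steps just described can be sketched in code. The following Python fragment is purely illustrative: the record fields and the /24 (IPv4) and /48 (IPv6) truncation policy are assumptions for the example, not drawn from any IETF specification.

```python
import ipaddress

def minimize_client_record(record):
    """Illustrative data-minimization filter for a server-side log entry.

    Collect less: keep only the fields the operational purpose requires.
    Coarsen what is kept: truncate the IP address so the stored value no
    longer identifies a single host.
    """
    kept = {"timestamp": record["timestamp"], "status": record["status"]}

    # Zero the host bits so the stored value maps to a /24 (IPv4) or
    # /48 (IPv6) rather than to one particular host.
    addr = ipaddress.ip_address(record["client_ip"])
    prefix = 24 if addr.version == 4 else 48
    network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    kept["client_net"] = str(network.network_address)
    return kept

rec = {"timestamp": "2010-10-18T12:00:00Z", "status": 200,
       "client_ip": "192.0.2.130", "user_agent": "ExampleBrowser/1.0"}
print(minimize_client_record(rec))
# client_ip 192.0.2.130 is stored only as client_net 192.0.2.0
```

Limiting retention (the third step) would then amount to deleting even these minimized records after a fixed period.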
294 As stated in [I-D.hansen-privacy-terminology], "If we exclude 295 providing misinformation (inaccurate or erroneous information, 296 provided usually without conscious effort at misleading, deceiving, 297 or persuading one way or another) or disinformation (deliberately 298 false or distorted information given out in order to mislead or 299 deceive), data minimization is the only generic strategy to enable 300 anonymity, since all correct personal data help to identify." 302 Early papers from the 1980s about privacy by data minimization 303 already deal with anonymity, unlinkability, unobservability, and 304 pseudonymity. [I-D.hansen-privacy-terminology] provides a 305 compilation of terms. 307 3. Scope 309 The IETF at large produces specifications that typically fall into 310 the following categories: 312 o Process specifications (e.g. WG shepherding guidelines described 313 in RFC 4858 [RFC4858]). These documents aim to document and to 314 improve the work style within the IETF. 316 o Building blocks (e.g. cryptographic algorithms, MIME type 317 registrations). These specifications are meant to be used with 318 other protocols in one or several communication paradigms. 320 o Architectural descriptions (for example, on IP-based emergency 321 services [I-D.ietf-ecrit-framework], Internet Mail [RFC5598]) 323 o Best current practices (e.g. Guidance for Authentication, 324 Authorization, and Accounting (AAA) Key Management [RFC4962]) 326 o Policy statements (e.g. IETF Policy on Wiretapping [RFC2804]) 328 Often, the architectural description is compiled some time after 329 deployment has long been ongoing, and therefore those who implement 330 and those who deploy have to make their own determination of which 331 protocols they would like to glue together into a complete system.
332 This type of work style has the advantage that protocol designers are 333 encouraged to write their specifications in a flexible way so that 334 they can be used in multiple contexts with different deployment 335 scenarios without a huge amount of interdependency between the 336 components. [Tussle] highlights the importance of such an approach 337 and [I-D.morris-policy-cons] offers a more detailed discussion. 339 This work style has an important consequence for the scope of privacy 340 work in the IETF, namely 342 o the standardization work focuses on those parts where 343 interoperability is really essential rather than describing a 344 specific instantiation of an architecture, therefore leaving a 345 lot of choices for deployments. 347 o application-internal functionality, such as APIs, and details about 348 databases are outside the scope of the IETF 350 o regulatory requirements of different jurisdictions are not part of 351 the IETF work either. 353 Here is an example that aims to illustrate the boundaries of the IETF 354 work: Imagine a social networking site that allows user registration, 355 requires user authentication prior to usage, and offers its 356 functionality for Web browser users via HTTP, real-time messaging 357 functionality via XMPP, and email notifications. Additionally, 358 support for data sharing with other Internet service providers is 359 provided by OAuth. 361 While HTTP, XMPP, Email, and OAuth are IETF specifications, they only 362 define what the protocol behavior on the wire looks like. They 363 certainly have an architectural spirit that has enormous impact on 364 the protocol mechanisms and the set of specifications that are 365 required.
However, IETF specifications would not go into details of 366 how the user has to register, what type of data he has to provide to 367 this social networking site, how long transaction data is kept, how 368 requirements for lawful intercept are met, how authorization policies 369 are designed to let users know more about data they share with other 370 Internet services, how the user's data is secured against unauthorized 371 access, whether the HTTP communication exchange between the browser 372 and the social networking site is using TLS or not, what data is 373 uploaded by the user, how the privacy policy of the social networking 374 site should look, etc. 376 Another example is the usage of HTTP for the Web. HTTP is published 377 in RFC 2616 and was designed to allow the exchange of arbitrary data. 378 An analysis of potential privacy problems would consider what type of 379 data is exchanged, and how this data is stored and processed. Hence, the 380 analysis for a company's static webpage would differ from that for the 381 usage of HTTP to exchange health records. For a protocol designer 382 working on HTTP extensions (such as WebDAV) it would therefore be 383 difficult to describe all possible privacy considerations given that 384 the space of possible usage is essentially unlimited. 386 +--------+ 387 |Building|-------+ 388 |Blocks | | 389 +--------+ | 390 +------v-----+ 391 | |----+ 392 |Architecture| | 393 +------------+ | 394 +---v--+ 395 |System|--------+ 396 |Design| | 397 +------+ | 398 +-------v------+ 399 | |------+ 400 |Implementation| | 401 +--------------+ | 402 +-----v----+ 403 | | 404 |Deployment| 405 +----------+ 407 Figure 1: Development Process 409 Figure 1 shows a typical development process. IETF work often starts 410 with identifying building blocks that can then be used in different 411 architectural variants useful for a wide range of usage scenarios.
412 Before implementation activities start, a software architect needs 413 to evaluate which components to integrate, how to provide proper 414 performance characteristics, etc. Finally, the implemented work 415 needs to be deployed. Privacy considerations play a role along the 416 entire process. 418 To pick an example from the security field, consider the NIST 419 Framework for Designing Cryptographic Key Management Systems 420 [SP800-130], NIST SP 800-130. SP 800-130 provides a number of 421 recommendations that can be addressed largely during the system 422 design phase as well as in the implementation phase of product 423 development. The cryptographic building blocks and the underlying 424 architecture are assumed to be sound. Even with well-designed 425 cryptographic components there are plenty of possibilities to 426 introduce security vulnerabilities in the later stages of the 427 development cycle. 429 Similar to the work on security, the impact of work in standards 430 developing organizations is limited. Nevertheless, discussing 431 potential privacy problems and considering privacy in the design of 432 an IETF protocol can offer system architects and those deploying 433 systems additional insights. The rest of this document is focused on 434 illustrating how protocol designers can consider privacy in their 435 design decisions, as they do factors like security, congestion 436 control, scalability, operations and management, etc. 438 4. Threat Model 440 To consider privacy in protocol design it is useful to think about the 441 overall communication architecture and what the different actors 442 could do. This analysis is similar to a threat analysis found in 443 security considerations sections of IETF documents. See also RFC 4101 444 [RFC4101] for an illustration of how to write protocol models.
In 445 Figure 2 we show a communication model found in many of today's 446 protocols where a sender wants to establish communication with some 447 recipient and thereby uses some form of intermediary (referred to as a 448 relay in Figure 2). In some cases this intermediary stays in the 449 communication path for the entire duration of the communication and 450 sometimes it is only used for communication establishment, for either 451 inbound or outbound communication. In rare cases there may even be a 452 series of relays that are traversed. 454 +-----------+ 455 | | 456 >| Recipient | 457 / | | 458 ,' +-----------+ 459 +--------+ )-------( ,' +-----------+ 460 | | | | - | | 461 | Sender |<------>|Relay |<------>| Recipient | 462 | | | |`. | | 463 +--------+ )-------( \ +-----------+ 464 ^ `. +-----------+ 465 : \ | | 466 : `>| Recipient | 467 ..............................>| | 468 +-----------+ 470 Legend: 472 <....> End-to-End Communication 473 <----> Hop-by-Hop Communication 475 Figure 2: Example Instantiation of involved Entities 477 We can distinguish between three types of adversaries: 479 Eavesdropper: RFC 4949 describes the act of 'eavesdropping' as 481 "Passive wiretapping done secretly, i.e., without the knowledge 482 of the originator or the intended recipients of the 483 communication." 485 Eavesdropping is often considered by IETF protocols in the context 486 of a security analysis to deal with a range of attacks by offering 487 confidentiality protection. 489 RFC 3552 provides guidance on how to write security considerations 490 for IETF documents and already demands that the confidentiality 491 security service be considered. While IETF protocols offer 492 guidance on how to secure communication against eavesdroppers, 493 deployments sometimes choose not to enable it. 495 Middleman: Many protocols developed today show a more complex 496 communication pattern than just client and server communication, 497 as motivated in Figure 2.
Store-and-forward protocols are 498 examples where entities participate in the message delivery even 499 though they are not the final recipients. Often, these 500 intermediaries only need to see the small amount of information 501 necessary for message routing, and security and/or protocol 502 mechanisms should ensure that end-to-end information is made 503 inaccessible to these entities. Unfortunately, the difficulty of 504 deploying end-to-end security procedures, the additional messaging, 505 the computational overhead, and other business/legal 506 requirements often slow down or prevent the deployment of these 507 end-to-end security mechanisms, giving these intermediaries more 508 exposure to communication patterns and communication payloads than 509 necessary. 511 Recipient: It may seem strange to put the recipient as an adversary 512 in this list since the entire purpose of the communication 513 interaction is to provide information to it. However, the degree 514 of familiarity and the type of information that needs to be shared 515 with such an entity may vary from context to context and also 516 between application scenarios. Often enough, the sender has no 517 strong familiarity with the other communication endpoint. While 518 it seems advisable to utilize access control before 519 disclosing information to such an entity, reality in Internet 520 communication is not so simple. Even so, a sender may still want 521 to limit the amount of information disclosed to the recipient, and some 522 mutual understanding of how this data is treated may need to be 523 established, e.g. how long it is kept (retention) and whether 524 redistribution is permitted. 526 5. Guidelines 528 A pre-condition for reasoning about the impact of a protocol or an 529 architecture is to look at the high-level protocol model, as 530 described in [RFC4101]. This step helps to identify actors and their 531 relationship.
The protocol specification (or the set of 532 specifications) then allows a deep dive into the data that is 533 exchanged. 535 The answers to these questions provide insight into the potential 536 privacy impact: 538 1. What entities collect and use data? 540 1.a: How many entities collect and use data? 542 Note that this question aims to identify what it 543 is possible for various entities to inspect (or potentially 544 modify). In architectures with intermediaries, the 545 question can be stated as "What data is exposed to 546 intermediaries that they do not need to know to do their 547 job?". 549 1.b: For each entity, what type of entity is it? 551 + The first-party site or application 553 + Other sites or applications whose data collection and use 554 is in some way controlled by the first party 556 + Third parties that may use the data they collect for other 557 purposes 559 2. For each entity, think about the relationship between the entity 560 and the user. 562 2.a: What is the user's familiarity or degree of relationship 563 with the entity in other contexts? 565 2.b: What is the user's reasonable expectation of the entity's 566 involvement? 568 3. What data about the user likely needs to be collected? 570 4. What is the identification level of the data? (identified, 571 pseudonymous, anonymous, see [I-D.hansen-privacy-terminology]) 573 6. Example 575 This section illustrates how privacy was dealt with in 576 certain IETF protocols. We start the description with presence and AAA for 577 network access and will expand it to other protocols in a future version 578 of this draft. 580 6.1. Presence 582 A presence service, as defined in the abstract in RFC 2778 [RFC2778], 583 allows users of a communications service to monitor one another's 584 availability and disposition in order to make decisions about 585 communicating.
Presence information is highly dynamic, and generally 586 characterizes whether a user is online or offline, busy or idle, away 587 from communications devices or nearby, and the like. Necessarily, 588 this information has certain privacy implications, and from the start 589 the IETF approached this work with the aim to provide users with the 590 controls to determine how their presence information would be shared. 591 The Common Profile for Presence (CPP) [RFC3859] defines a set of 592 logical operations for delivery of presence information. This 593 abstract model is applicable to multiple presence systems. The SIP- 594 based SIMPLE presence system [RFC3261] uses CPP as its baseline 595 architecture, and the presence operations in the Extensible Messaging 596 and Presence Protocol (XMPP) have also been mapped to CPP [RFC3922]. 598 SIMPLE [RFC3261], the application of the Session Initiation Protocol 599 (SIP) to instant messaging and presence, has native support for 600 subscriptions and notifications (with its event framework [RFC3265]) 601 and has added an event package [RFC3856] for presence in order to 602 satisfy the requirements of CPP. Other event packages were defined 603 later to allow additional information to be exchanged. With the help 604 of the PUBLISH method [RFC3903], clients are able to install presence 605 information on a server, so that the server can apply access-control 606 policies before sharing presence information with other entities. 607 The integration of an explicit authorization mechanism into the 608 presence architecture has been a major improvement in terms of 609 involving the end users in the decision-making process before 610 sharing information. Nearly all presence systems deployed today 611 provide such a mechanism, typically through a reciprocal 612 authorization system by which a pair of users, when they agree to be 613 "buddies," consent to divulge their presence information to one 614 another.
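The reciprocal ("buddy") authorization model just described can be sketched as follows. This Python fragment is a hypothetical illustration; the helper names and data structures are invented for the example and are not an IETF-defined API.

```python
def authorized_watchers(presentity, subscriptions, buddy_grants):
    """Illustrative access-control check for a presence server.

    Presence for `presentity` is divulged only to watchers with whom a
    reciprocal "buddy" authorization exists, mirroring the mutual-consent
    model used by most deployed presence systems.
    """
    def are_buddies(a, b):
        # Reciprocal consent: both directions must have been granted.
        return (a, b) in buddy_grants and (b, a) in buddy_grants

    return [watcher for watcher in subscriptions.get(presentity, [])
            if are_buddies(presentity, watcher)]

grants = {("alice@example.com", "bob@example.com"),
          ("bob@example.com", "alice@example.com"),
          ("carol@example.com", "alice@example.com")}  # one-way only
subs = {"alice@example.com": ["bob@example.com", "carol@example.com"]}
print(authorized_watchers("alice@example.com", subs, grants))
# bob is notified; carol's one-way grant is not sufficient
```

In a real SIMPLE deployment this check would be driven by Presence Authorization Rules [RFC5025] rather than a hard-coded set of pairs.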
616 One important extension for presence was to enable support for 617 location sharing. With the desire to standardize protocols for 618 systems sharing geolocation, IETF work was started in the GEOPRIV 619 working group. During the initial requirements and privacy threat 620 analysis in the process of chartering the working group, it became 621 clear that the system would need an underlying communication mechanism 622 supporting user consent to share location information. The 623 resemblance of these requirements to the presence framework was 624 quickly recognized, and this design decision was documented in RFC 625 4079 [RFC4079]. 627 While presence systems exerted influence on location privacy, the 628 location privacy work also influenced ongoing IETF work on presence 629 by triggering the standardization of a general access control policy 630 language called the Common Policy (defined in RFC 4745 [RFC4745]) 631 framework. This language allows one to express ways to control the 632 distribution of information as simple conditions, actions, and 633 transformation rules expressed in an XML format. Common Policy 634 itself is an abstract format which needs to be instantiated: two 635 examples can be found with the Presence Authorization Rules [RFC5025] 636 and the Geolocation Policy [I-D.ietf-geopriv-policy]. The former 637 provides additional expressiveness for presence-based systems, while 638 the latter defines syntax and semantics for location-based conditions 639 and transformations. 641 As a component of the prior work on the presence architecture, a 642 format for presence information, called the Presence Information Data 643 Format (PIDF), had been developed. For the purposes of conveying 644 location information an extension was developed, the PIDF Location 645 Object (PIDF-LO).
   To meet the privacy requirements defined in RFC 2779 [RFC2779], a
   set of usage indications (such as whether retransmission is allowed
   or when the retention period expires) was added in the form of
   policies that always travel with the location information itself.
   We believe that the standardization of these meta-rules that travel
   with location information has been a unique contribution to privacy
   on the Internet, recognizing the need for users to express their
   preferences when information travels through the Internet, from
   website to website.  This approach very much follows the spirit of
   Creative Commons [CC], namely the use of a limited number of
   conditions (such as 'Share Alike' [CC-SA]).  Unlike Creative
   Commons, however, the GEOPRIV working group did not initiate work to
   produce legal language or to design graphical icons, since this
   would fall outside the scope of the IETF.  In particular, the
   GEOPRIV rules state a preference on the retention and retransmission
   of location information; while GEOPRIV cannot force any entity
   receiving a PIDF-LO object to abide by those preferences, if users
   lack the ability to express them at all, it is guaranteed that their
   preferences will not be honored.

   While these retention and retransmission meta-data elements could
   have been devised to accompany information elements in other IETF
   protocols, the decision was made to introduce these elements for
   geolocation first because of the sensitivity of location
   information.

   The GEOPRIV working group decided to clarify the architecture to
   make it more accessible to those outside the IETF, and also to
   provide a more generic description applicable beyond the context of
   presence.  [I-D.ietf-geopriv-arch] is the work-in-progress write-up.

6.2.
AAA for Network Access

   At a high level, AAA for network access uses the communication model
   shown in Figure 3.  When an end host requests access to the network,
   it has to interact with a Network Access Server (NAS) using some
   front-end protocol (often at the link layer, such as IEEE 802.1X).
   When asked by the NAS, the end host presents a Network Access
   Identifier (NAI), an email-like identifier that consists of a
   username part and a domain part.  This NAI is then used to discover
   the AAA server authorized for the user's domain, and an initial
   access request is forwarded to it.  To deal with various security,
   accounting, and fraud prevention aspects, an end-to-end
   authentication procedure, run between the end host (the peer) and a
   separate component within the AAA server (the server), is executed
   using the Extensible Authentication Protocol (EAP).  After a
   successful authentication protocol exchange, the user may be
   authorized to access the network, and keying material is provided to
   the NAS to enable link-layer security over the air interface.

   From a privacy point of view, the entities participating in this
   ecosystem are the user, an end host, the NAS, a range of different
   intermediaries, and the AAA server.  The user will most likely have
   some form of contractual relationship with the entity operating the
   AAA server, since credential provisioning had to happen somehow,
   but in certain deployments, such as coffee shops, this is not
   guaranteed.  In many deployments, during this initial registration
   process the subscriber is provided with credentials after showing
   some form of identification (e.g., a passport), and consequently the
   NAI together with the credentials can be linked to a specific
   subscriber, often a single person.
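   The NAI structure just described can be sketched as follows (a
   Python illustration; the helper name and example identifier are
   ours, not from any specification): the realm part is what
   intermediaries need for routing, while the username part is
   meaningful only to the home AAA server.

```python
# Sketch: splitting an NAI of the form username@realm.  Only the
# realm is needed by intermediaries to route the access request to
# the AAA server of the user's home domain.
def split_nai(nai: str):
    username, sep, realm = nai.rpartition("@")
    if not sep:
        # An NAI may also consist of a username part only.
        return nai, None
    return username, realm

print(split_nai("alice@example.com"))  # -> ('alice', 'example.com')
```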
   The username part of the NAI is data that the end host provides
   during network access authentication but that intermediaries do not
   need in order to fulfill their role in AAA message routing.  Hiding
   the user's identity is, as discussed in RFC 4282 [RFC4282], possible
   only when NAIs are used together with a separate authentication
   method that can transfer the username in a secure manner.  Such EAP
   methods have been designed, and requirements for offering such
   functionality have become recommended design criteria; see
   [RFC4017].

   More than just identity information is exchanged during network
   access authentication.  The NAS provides information about the
   user's point of attachment to the AAA server, and the AAA server in
   response provides data related to the authorization decision.  While
   the need to exchange data is motivated by the service usage itself,
   there are still a number of questions that could be asked, such as:

   o  What mechanisms can be utilized to offer users ways to authorize
      the sharing of information (considering that the ability for
      protocol interaction is limited without successful network access
      connectivity)?

   o  What are the best current practices for privacy-sensitive
      operation of intermediaries?  Since end hosts do not interact
      with intermediaries explicitly, and users have no relationship
      with those who operate them, it is quite likely that their
      practices are less widely known.

   o  Are there alternative approaches to trust establishment between
      the NAS and the AAA server so that the involvement of
      intermediaries can be limited or avoided?
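   The identity-hiding approach mentioned above (disclosing the real
   username only through a secure authentication method, while
   intermediaries see an anonymized NAI) can be sketched as follows;
   the "anonymous" placeholder is a common convention we assume for
   illustration, not a requirement stated in this document:

```python
# Sketch: building a privacy-friendly "outer" NAI.  The realm part
# is kept so that intermediaries can still route the request, while
# the username is withheld and only disclosed to the home AAA server
# through a secure authentication method.  The placeholder value is
# an assumption made for this illustration.
def anonymize_nai(nai: str, placeholder: str = "anonymous") -> str:
    username, sep, realm = nai.rpartition("@")
    if not sep:
        return placeholder  # no realm part to preserve
    return placeholder + "@" + realm

print(anonymize_nai("alice@example.com"))  # -> anonymous@example.com
```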
                           +--------------+
                           |  AAA Server  |
                           +-^----------^-+
                             * EAP      | RADIUS/
                             *          | Diameter
                           --v----------v--
                        ///                 \\\
                      //    AAA Proxies,      \\       ***
                     |      Relays, and        |      back-
                     |      Redirect Agents    |       end
                      \\                      //       ***
                        \\\                 ///
                           --^----------^--
                             * EAP      | RADIUS/
                             *          | Diameter
   +----------+        Data     +-v----------v---+
   |          |<--------------->|                |
   | End Host | EAP/EAP Method  | Network Access |
   |          |<***************>|     Server     |
   +----------+                 +----------------+
                 *** front-end ***

   Legend:

   <****>: End-to-end exchange
   <---->: Hop-by-hop exchange

         Figure 3: Network Access Authentication Architecture

6.3.  SIP for Internet Telephony

   [Editor's Note: Jon/Bernard to add a little bit of text here.]

7.  Security Considerations

   This document describes aspects that a protocol designer would
   consider in the area of privacy, in addition to the regular security
   analysis.

8.  IANA Considerations

   This document does not require actions by IANA.

9.  Acknowledgements

   Add your name here.

10.  References

10.1.  Normative References

   [I-D.hansen-privacy-terminology]
              Pfitzmann, A., Hansen, M., and H. Tschofenig, "Terminology
              for Talking about Privacy by Data Minimization: Anonymity,
              Unlinkability, Undetectability, Unobservability,
              Pseudonymity, and Identity Management",
              draft-hansen-privacy-terminology-01 (work in progress),
              August 2010.

   [OECD]     Organization for Economic Co-operation and Development,
              "OECD Guidelines on the Protection of Privacy and
              Transborder Flows of Personal Data", 1980, available at
              http://www.oecd.org/EN/document/
              0,,EN-document-0-nodirectorate-no-24-10255-0,00.html
              (retrieved September 2010).

10.2.  Informative References

   [Altman]   Altman, I., "The Environment and Social Behavior: Privacy,
              Personal Space, Territory, Crowding", Brooks/Cole, 1975.

   [CC]       "Creative Commons", June 2010.

   [CC-SA]    "Creative Commons - Licenses", June 2010.
   [CTIA]     CTIA, "Best Practices and Guidelines for Location-Based
              Services", March 2010.

   [DPD95]    European Commission, "Directive 95/46/EC of the European
              Parliament and of the Council of 24 October 1995 on the
              protection of individuals with regard to the processing of
              personal data and on the free movement of such data",
              Official Journal L 281, 23/11/1995, P. 0031 - 0050,
              November 1995.

   [EFF-Privacy]
              Blumberg, A. and P. Eckersley, "On Locational Privacy, and
              How to Avoid Losing it Forever", August 2009.

   [Granada]  International Working Group on Data Protection in
              Telecommunications, "The Granada Charter of Privacy in a
              Digital World, Granada (Spain)", April 2010.

   [I-D.ietf-ecrit-framework]
              Rosen, B., Schulzrinne, H., Polk, J., and A. Newton,
              "Framework for Emergency Calling using Internet
              Multimedia", draft-ietf-ecrit-framework-11 (work in
              progress), July 2010.

   [I-D.ietf-geopriv-arch]
              Barnes, R., Lepinski, M., Cooper, A., Morris, J.,
              Tschofenig, H., and H. Schulzrinne, "An Architecture for
              Location and Location Privacy in Internet Applications",
              draft-ietf-geopriv-arch-03 (work in progress),
              October 2010.

   [I-D.ietf-geopriv-policy]
              Schulzrinne, H., Tschofenig, H., Morris, J., Cuellar, J.,
              and J. Polk, "Geolocation Policy: A Document Format for
              Expressing Privacy Preferences for Location Information",
              draft-ietf-geopriv-policy-21 (work in progress),
              January 2010.

   [I-D.morris-policy-cons]
              Morris, J., Aboba, B., Peterson, J., and H. Tschofenig,
              "Public Policy Considerations for Internet Protocols",
              draft-morris-policy-cons-00 (work in progress),
              October 2010.
   [Madrid]   Data Protection Authorities and Privacy Regulators, "The
              Madrid Resolution, International Standards on the
              Protection of Personal Data and Privacy", Conference of
              Data Protection and Privacy Commissioners, 31st
              International Meeting, November 2009.

   [RFC2778]  Day, M., Rosenberg, J., and H. Sugano, "A Model for
              Presence and Instant Messaging", RFC 2778, February 2000.

   [RFC2779]  Day, M., Aggarwal, S., Mohr, G., and J. Vincent, "Instant
              Messaging / Presence Protocol Requirements", RFC 2779,
              February 2000.

   [RFC2804]  IAB and IESG, "IETF Policy on Wiretapping", RFC 2804,
              May 2000.

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
              A., Peterson, J., Sparks, R., Handley, M., and E.
              Schooler, "SIP: Session Initiation Protocol", RFC 3261,
              June 2002.

   [RFC3265]  Roach, A., "Session Initiation Protocol (SIP)-Specific
              Event Notification", RFC 3265, June 2002.

   [RFC3856]  Rosenberg, J., "A Presence Event Package for the Session
              Initiation Protocol (SIP)", RFC 3856, August 2004.

   [RFC3859]  Peterson, J., "Common Profile for Presence (CPP)",
              RFC 3859, August 2004.

   [RFC3903]  Niemi, A., "Session Initiation Protocol (SIP) Extension
              for Event State Publication", RFC 3903, October 2004.

   [RFC3922]  Saint-Andre, P., "Mapping the Extensible Messaging and
              Presence Protocol (XMPP) to Common Presence and Instant
              Messaging (CPIM)", RFC 3922, October 2004.

   [RFC4017]  Stanley, D., Walker, J., and B. Aboba, "Extensible
              Authentication Protocol (EAP) Method Requirements for
              Wireless LANs", RFC 4017, March 2005.

   [RFC4079]  Peterson, J., "A Presence Architecture for the
              Distribution of GEOPRIV Location Objects", RFC 4079,
              July 2005.

   [RFC4101]  Rescorla, E. and IAB, "Writing Protocol Models", RFC 4101,
              June 2005.

   [RFC4282]  Aboba, B., Beadles, M., Arkko, J., and P. Eronen, "The
              Network Access Identifier", RFC 4282, December 2005.
   [RFC4745]  Schulzrinne, H., Tschofenig, H., Morris, J., Cuellar, J.,
              Polk, J., and J. Rosenberg, "Common Policy: A Document
              Format for Expressing Privacy Preferences", RFC 4745,
              February 2007.

   [RFC4858]  Levkowetz, H., Meyer, D., Eggert, L., and A. Mankin,
              "Document Shepherding from Working Group Last Call to
              Publication", RFC 4858, May 2007.

   [RFC4962]  Housley, R. and B. Aboba, "Guidance for Authentication,
              Authorization, and Accounting (AAA) Key Management",
              BCP 132, RFC 4962, July 2007.

   [RFC5025]  Rosenberg, J., "Presence Authorization Rules", RFC 5025,
              December 2007.

   [RFC5598]  Crocker, D., "Internet Mail Architecture", RFC 5598,
              July 2009.

   [SP800-122]
              McCallister, E., Grance, T., and K. Scarfone, "Guide to
              Protecting the Confidentiality of Personally Identifiable
              Information (PII)", NIST Special Publication (SP)
              800-122, April 2010.

   [SP800-130]
              Barker, E., Branstad, D., Chokhani, S., and M. Smid,
              "DRAFT: A Framework for Designing Cryptographic Key
              Management Systems", NIST Special Publication (SP)
              800-130, June 2010.

   [Tussle]   Clark, D., Wroslawski, J., Sollins, K., and R. Braden,
              "Tussle in Cyberspace: Defining Tomorrow's Internet", In
              Proc. ACM SIGCOMM,
              http://www.acm.org/sigcomm/sigcomm2002/papers/tussle.html,
              2002.

   [Warren]   Warren, D. and L. Brandeis, "The Right to Privacy",
              Harvard Law Rev., vol. 45, 1890.

   [Westin]   Westin, A., "Privacy and Freedom", Atheneum, New York,
              1967.

   [browser-fingerprinting]
              Eckersley, P., "How Unique Is Your Browser?", Springer
              Lecture Notes in Computer Science, Privacy Enhancing
              Technologies Symposium (PETS 2010), 2010.

   [limits]   Cate, F., "The Limits of Notice and Choice", IEEE
              Computer Society, IEEE Security and Privacy, pg. 59-62,
              November 2005.
Authors' Addresses

   Bernard Aboba
   Microsoft Corporation
   One Microsoft Way
   Redmond, WA  98052
   US

   Email: bernarda@microsoft.com

   John B. Morris, Jr.
   Center for Democracy and Technology
   1634 I Street NW, Suite 1100
   Washington, DC  20006
   USA

   Email: jmorris@cdt.org
   URI:   http://www.cdt.org

   Jon Peterson
   NeuStar, Inc.
   1800 Sutter St Suite 570
   Concord, CA  94520
   US

   Email: jon.peterson@neustar.biz

   Hannes Tschofenig
   Nokia Siemens Networks
   Linnoitustie 6
   Espoo  02600
   Finland

   Phone: +358 (50) 4871445
   Email: Hannes.Tschofenig@gmx.net
   URI:   http://www.tschofenig.priv.at