RTC-Web                                                      E. Rescorla
Internet-Draft                                                RTFM, Inc.
Intended status: Standards Track                           July 15, 2013
Expires: January 16, 2014

                   Security Considerations for WebRTC
                     draft-ietf-rtcweb-security-05

Abstract

The Real-Time Communications on the Web (RTCWEB) working group is tasked with standardizing protocols for real-time communications between Web browsers, generally called "WebRTC".
The major use cases for WebRTC technology are real-time audio and/or video calls, Web conferencing, and direct data transfer. Unlike most conventional real-time systems (e.g., SIP-based soft phones), WebRTC communications are directly controlled by a Web server, which poses new security challenges. For instance, a Web browser might expose a JavaScript API which allows a server to place a video call. Unrestricted access to such an API would allow any site which a user visited to "bug" a user's computer, capturing any activity which passed in front of their camera. This document defines the WebRTC threat model and analyzes the security threats of WebRTC in that model.

Legal

THIS DOCUMENT AND THE INFORMATION CONTAINED THEREIN ARE PROVIDED ON AN "AS IS" BASIS AND THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST, AND THE INTERNET ENGINEERING TASK FORCE, DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION THEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 16, 2014.

Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008. The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process. Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English.

Table of Contents

   1. Introduction
   2. Terminology
   3. The Browser Threat Model
      3.1. Access to Local Resources
      3.2. Same Origin Policy
      3.3. Bypassing SOP: CORS, WebSockets, and consent to communicate
   4. Security for WebRTC Applications
      4.1. Access to Local Devices
           4.1.1. Threats from Screen Sharing
           4.1.2. Calling Scenarios and User Expectations
                  4.1.2.1. Dedicated Calling Services
                  4.1.2.2. Calling the Site You're On
           4.1.3. Origin-Based Security
           4.1.4. Security Properties of the Calling Page
      4.2. Communications Consent Verification
           4.2.1. ICE
           4.2.2. Masking
           4.2.3. Backward Compatibility
           4.2.4. IP Location Privacy
      4.3. Communications Security
           4.3.1. Protecting Against Retrospective Compromise
           4.3.2. Protecting Against During-Call Attack
                  4.3.2.1. Key Continuity
                  4.3.2.2. Short Authentication Strings
                  4.3.2.3. Third Party Identity
                  4.3.2.4. Page Access to Media
           4.3.3. Malicious Peers
      4.4. Privacy Considerations
           4.4.1. Correlation of Anonymous Calls
           4.4.2. Browser Fingerprinting
   5. Security Considerations
   6. Acknowledgements
   7. Changes Since -04
   8. References
      8.1. Normative References
      8.2. Informative References
   Author's Address

1. Introduction

The Real-Time Communications on the Web (RTCWEB) working group is tasked with standardizing protocols for real-time communications between Web browsers, generally called "WebRTC" [I-D.ietf-rtcweb-overview]. The major use cases for WebRTC technology are real-time audio and/or video calls, Web conferencing, and direct data transfer. Unlike most conventional real-time systems (e.g., SIP-based [RFC3261] soft phones), WebRTC communications are directly controlled by some Web server. A simple case is shown below.

                        +----------------+
                        |                |
                        |   Web Server   |
                        |                |
                        +----------------+
                            ^        ^
                           /          \
                    HTTP  /            \  HTTP
                    or   /              \  or
              WebSockets/                \WebSockets
                       v                  v
                  JS API                   JS API
            +-----------+                +-----------+
            |           |     Media      |           |
            |  Browser  |<-------------->|  Browser  |
            |           |                |           |
            +-----------+                +-----------+

                 Figure 1: A simple WebRTC system

In the system shown in Figure 1, Alice and Bob both have WebRTC-enabled browsers and they visit some Web server which operates a calling service.
Each of their browsers exposes standardized JavaScript calling APIs (implemented as browser built-ins) which are used by the Web server to set up a call between Alice and Bob. The Web server also serves as the signaling channel to transport control messages between the browsers. While this system is topologically similar to a conventional SIP-based system (with the Web server acting as the signaling service and browsers acting as softphones), control has moved to the central Web server; the browser simply provides API points that are used by the calling service. As with any Web application, the Web server can move logic between the server and JavaScript in the browser, but regardless of where the code is executing, it is ultimately under control of the server.

It should be immediately apparent that this type of system poses new security challenges beyond those of a conventional VoIP system. In particular, it needs to contend with malicious calling services. For example, if the calling service can cause the browser to make a call at any time to any callee of its choice, then this facility can be used to bug a user's computer without their knowledge, simply by placing a call to some recording service. More subtly, if the exposed APIs allow the server to instruct the browser to send arbitrary content, then they can be used to bypass firewalls or mount denial of service attacks. Any successful system will need to be resistant to this and other attacks.

A companion document [I-D.ietf-rtcweb-security-arch] describes a security architecture intended to address the issues raised in this document.

2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

3. The Browser Threat Model

The security requirements for WebRTC follow directly from the requirement that the browser's job is to protect the user. Huang et al. [huang-w2sp] summarize the core browser security guarantee as:

   Users can safely visit arbitrary web sites and execute scripts provided by those sites.

It is important to realize that this includes sites hosting arbitrary malicious scripts. The motivation for this requirement is simple: it is trivial for attackers to divert users to sites of their choice. For instance, an attacker can purchase display advertisements which direct the user (either automatically or via user clicking) to their site, at which point the browser will execute the attacker's scripts. Thus, it is important that it be safe to view arbitrarily malicious pages. Of course, browsers inevitably have bugs which cause them to fall short of this goal, but any new WebRTC functionality must be designed with the intent to meet this standard. The remainder of this section provides more background on the existing Web security model.

In this model, then, the browser acts as a TRUSTED COMPUTING BASE (TCB) both from the user's perspective and to some extent from the server's. While HTML and JavaScript (JS) provided by the server can cause the browser to execute a variety of actions, those scripts operate in a sandbox that isolates them both from the user's computer and from each other, as detailed below.
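Before turning to the attacker classes below, it is worth making the calling model of Figure 1 concrete. The following is a minimal, illustrative sketch of the kind of server-provided JavaScript that drives a call. The getUserMedia and RTCPeerConnection APIs are the W3C APIs, which are defined outside this document, and the signalingChannel object is a hypothetical wrapper around the HTTP/WebSockets connection to the calling service:

   // Illustrative sketch only: server-provided JS placing a call.
   // getUserMedia/RTCPeerConnection are W3C APIs defined outside
   // this document; signalingChannel is a hypothetical wrapper
   // around the HTTP/WebSockets connection to the calling service.
   async function placeCall(signalingChannel) {
     // Ask the browser (and hence the user) for camera/microphone.
     const stream = await navigator.mediaDevices.getUserMedia(
         { audio: true, video: true });

     // Attach the local media to a new peer connection.
     const pc = new RTCPeerConnection();
     for (const track of stream.getTracks()) {
       pc.addTrack(track, stream);
     }

     // The Web server relays the offer/answer; it is the signaling
     // channel and therefore decides where the media actually goes.
     await pc.setLocalDescription(await pc.createOffer());
     signalingChannel.send({ sdp: pc.localDescription });
     signalingChannel.onAnswer = answer => pc.setRemoteDescription(answer);
     return pc;
   }

Nothing in this sketch constrains the callee: the same code can just as easily direct the user's media to an attacker, which is precisely the concern developed in the rest of this document.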
Conventionally, we refer to two classes of attackers: WEB ATTACKERS, who are able to induce you to visit their sites but do not control the network, and NETWORK ATTACKERS, who are able to control your network. Network attackers correspond to the [RFC3552] "Internet Threat Model". Note that for HTTP traffic, a network attacker is also a Web attacker, since it can inject traffic as if it were any non-HTTPS Web site. Thus, when analyzing HTTP connections, we must assume that traffic is going to the attacker.

3.1. Access to Local Resources

While the browser has access to local resources such as keying material, files, the camera, and the microphone, it strictly limits or forbids web servers from accessing those same resources. For instance, while it is possible to produce an HTML form which will allow file upload, a script cannot do so without user consent and in fact cannot even suggest a specific file (e.g., /etc/passwd); the user must explicitly select the file and consent to its upload. [Note: in many cases browsers are explicitly designed to avoid dialogs with the semantics of "click here to screw yourself", as extensive research shows that users are prone to consent under such circumstances.]

Similarly, while Flash programs (SWFs) [SWF] can access the camera and microphone, they explicitly require that the user consent to that access. In addition, some resources simply cannot be accessed from the browser at all. For instance, there is no real way to run specific executables directly from a script (though the user can of course be induced to download executable files and run them).

3.2. Same Origin Policy

Many other resources are accessible but isolated. For instance, while scripts are allowed to make HTTP requests via the XMLHttpRequest() API, those requests are not allowed to be made to any server, but rather solely to the same ORIGIN from whence the script came [RFC6454] (although CORS [CORS] and WebSockets [RFC6455] provide an escape hatch from this restriction, as described below). This SAME ORIGIN POLICY (SOP) prevents server A from mounting attacks on server B via the user's browser, which protects both the user (e.g., from misuse of his credentials) and server B (e.g., from DoS attack).

More generally, SOP forces scripts from each site to run in their own, isolated, sandboxes. While there are techniques to allow them to interact, those interactions generally must be mutually consensual (by each site) and are limited to certain channels. For instance, multiple pages/browser panes from the same origin can read each other's JS variables, but pages from different origins--or even iframes from different origins on the same page--cannot.

3.3. Bypassing SOP: CORS, WebSockets, and consent to communicate

While SOP serves an important security function, it also makes it inconvenient to write certain classes of applications. In particular, mash-ups, in which a script from origin A uses resources from origin B, can only be achieved via a certain amount of hackery. The W3C Cross-Origin Resource Sharing (CORS) spec [CORS] is a response to this demand. In CORS, when a script from origin A executes what would otherwise be a forbidden cross-origin request, the browser instead contacts the target server to determine whether it is willing to allow cross-origin requests from A.
If it is so willing, the browser then allows the request. This consent verification process is designed to safely allow cross-origin requests.

While CORS is designed to allow cross-origin HTTP requests, WebSockets [RFC6455] allows cross-origin establishment of transparent channels. Once a WebSockets connection has been established from a script to a site, the script can exchange any traffic it likes without being required to frame it as a series of HTTP request/response transactions. As with CORS, a WebSockets transaction starts with a consent verification stage to avoid allowing scripts to simply send arbitrary data to another origin.

While consent verification is conceptually simple--just do a handshake before you start exchanging the real data--experience has shown that designing a correct consent verification system is difficult. In particular, Huang et al. [huang-w2sp] have shown vulnerabilities in the existing Java and Flash consent verification techniques and in a simplified version of the WebSockets handshake. It is especially important to be wary of CROSS-PROTOCOL attacks in which the attacking script generates traffic which is acceptable to some non-Web protocol state machine. In order to resist this form of attack, WebSockets incorporates a masking technique intended to randomize the bits on the wire, thus making it more difficult to generate traffic which resembles a given protocol.

4. Security for WebRTC Applications

4.1. Access to Local Devices

As discussed in Section 1, allowing arbitrary sites to initiate calls violates the core Web security guarantee; without some access restrictions on local devices, any malicious site could simply bug a user. At minimum, then, it MUST NOT be possible for arbitrary sites to initiate calls to arbitrary locations without user consent. This immediately raises the question, however, of what should be the scope of user consent.

In order for the user to make an intelligent decision about whether to allow a call (and hence his camera and microphone input to be routed somewhere), he must understand either who is requesting access, where the media is going, or both. As detailed below, there are two basic conceptual models:

   You are sending your media to entity A because you want to talk to entity A (e.g., your mother).

   Entity A (e.g., a calling service) asks to access the user's devices with the assurance that it will transfer the media to entity B (e.g., your mother).

In either case, identity is at the heart of any consent decision. Moreover, identity is all that the browser can meaningfully enforce; if you are calling A, A can simply forward the media to C. Similarly, if you authorize A to place a call to B, A can call C instead. In either case, all the browser is able to do is verify and check authorization for whoever is controlling where the media goes. The target of the media can of course advertise a security/privacy policy, but this is not something that the browser can enforce. Even so, there are a variety of different consent scenarios that motivate different technical consent mechanisms. We discuss these mechanisms in the sections below.

It's important to understand that consent to access local devices is largely orthogonal to consent to transmit various kinds of data over the network (see Section 4.2).
Consent for device access is largely a matter of protecting the user's privacy from malicious sites. By contrast, consent to send network traffic is about preventing the user's browser from being used to attack its local network. Thus, we need to ensure communications consent even if the site is not able to access the camera and microphone at all (hence WebSockets' consent mechanism), and similarly we need to be concerned with the site accessing the user's camera and microphone even if the data is to be sent back to the site via conventional HTTP-based network mechanisms such as HTTP POST.

4.1.1. Threats from Screen Sharing

In addition to camera and microphone access, there has been demand for screen and/or application sharing functionality. Unfortunately, the security implications of this functionality are much harder for users to intuitively analyze than those of camera and microphone access. (See http://lists.w3.org/Archives/Public/public-webrtc/2013Mar/0024.html for a full analysis.)

The most obvious threats are simply those of "oversharing". I.e., the user may believe they are sharing a window when in fact they are sharing an application, or may forget they are sharing their whole screen, icons, notifications, and all. This is already an issue with existing screen sharing technologies and is made somewhat worse if a partially trusted site is responsible for asking for the resource to be shared rather than having the user propose it.

A less obvious threat involves the impact of screen sharing on the Web security model. A key part of the Same Origin Policy is that HTML or JS from site A can reference content from site B and cause the browser to load it, but (unless explicitly permitted) cannot see the result. However, if a web application from a site is screen sharing the browser, then this violates that invariant, with serious security consequences. For example, an attacker site might request screen sharing and then briefly open up a new window to the user's bank or Gmail account, using screen sharing to read the resulting displayed content. A more sophisticated attack would be to open up a source view window to a site and use the screen sharing result to view anti-cross-site request forgery tokens.

These threats suggest that screen/application sharing might need a higher level of user consent than access to the camera or microphone.

4.1.2. Calling Scenarios and User Expectations

While a large number of calling scenarios are possible, the scenarios discussed in this section illustrate many of the difficulties of identifying the relevant scope of consent.

4.1.2.1. Dedicated Calling Services

The first scenario we consider is a dedicated calling service. In this case, the user has a relationship with a calling site and repeatedly makes calls on it. It is likely that, rather than having to give permission for each call, the user will want to give the calling service long-term access to the camera and microphone. This is a natural fit for a long-term consent mechanism (e.g., installing an app store "application" to indicate permission for the calling service.) A variant of the dedicated calling service is a gaming site (e.g., a poker site) which hosts a dedicated calling service to allow players to call each other.
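To make the distinction between long-term and per-call consent concrete, the sketch below shows how a calling site might check what it has been granted. This is a minimal sketch assuming the W3C Permissions API; the "camera" and "microphone" permission names are not universally supported and are an assumption here, and the actual enforcement necessarily lives in the browser rather than in site JavaScript:

   // Minimal sketch (assumes the W3C Permissions API; support for
   // the "camera"/"microphone" permission names varies by browser).
   // state === "granted" means the origin holds durable, long-term
   // consent; "prompt" means the browser will ask the user again.
   async function hasLongTermConsent() {
     const cam = await navigator.permissions.query({ name: "camera" });
     const mic = await navigator.permissions.query({ name: "microphone" });
     return cam.state === "granted" && mic.state === "granted";
   }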
With any kind of service where the user may use the same service to talk to many different people, there is a question about whether the user can know who they are talking to. If I grant permission to calling service A to make calls on my behalf, then I am implicitly granting it permission to bug my computer whenever it wants. This suggests another consent model in which a site is authorized to make calls but only to certain target entities (identified via media-plane cryptographic mechanisms as described in Section 4.3.2 and especially Section 4.3.2.3). Note that the question of consent here is related to but distinct from the question of peer identity: I might be willing to allow a calling site to initiate calls on my behalf in general but still have some calls via that site where I can be sure that the site is not listening in.

4.1.2.2. Calling the Site You're On

Another simple scenario is calling the site you're actually visiting. The paradigmatic case here is the "click here to talk to a representative" windows that appear on many shopping sites. In this case, the user's expectation is that they are calling the site they're actually visiting. However, it is unlikely that they want to provide a general consent to such a site; just because I want some information on a car doesn't mean that I want the car manufacturer to be able to activate my microphone whenever they please. Thus, this suggests the need for a second consent mechanism where I only grant consent for the duration of a given call. As described in Section 3.1, great care must be taken in the design of this interface to avoid the users just clicking through. Note also that the user interface chrome must clearly display elements showing that the call is continuing in order to avoid attacks where the calling site just leaves it up indefinitely but shows a Web UI that implies otherwise.

4.1.3. Origin-Based Security

Now that we have seen these use cases, we can start to reason about the security requirements.

As discussed in Section 3.2, the basic unit of Web sandboxing is the origin, and so it is natural to scope consent to origin. Specifically, a script from origin A MUST only be allowed to initiate communications (and hence to access camera and microphone) if the user has specifically authorized access for that origin. It is of course technically possible to have coarser-scoped permissions, but because the Web model is scoped to origin, this creates a difficult mismatch.

Arguably, origin is not fine-grained enough. Consider the situation where Alice visits a site and authorizes it to make a single call. If consent is expressed solely in terms of origin, then at any future visit to that site (including one induced via mash-up or ad network), the site can bug Alice's computer, use the computer to place bogus calls, etc. While in principle Alice could grant and then revoke the privilege, in practice privileges accumulate; if we are concerned about this attack, something else is needed. There are a number of potential countermeasures to this sort of issue.

   Individual Consent
      Ask the user for permission for each call.

   Callee-oriented Consent
      Only allow calls to a given user.

   Cryptographic Consent
      Only allow calls to a given set of peer keying material or to a cryptographically established identity (a sketch of this style of check appears below).
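As an illustration of the cryptographic consent model referenced in the list above, the sketch below gates a call on the peer's DTLS fingerprint as carried in the remote session description. It is purely illustrative: the allowlist and helper names are hypothetical, in practice such a check would have to be enforced by the browser rather than by the calling site's own JavaScript, and [I-D.ietf-rtcweb-security-arch] defines the actual identity mechanisms:

   // Illustrative sketch: accept the answer only if the peer's DTLS
   // fingerprint matches keying material the user has approved.
   // The allowlist and helper names here are hypothetical; a real
   // deployment would enforce this inside the browser.
   const approvedFingerprints = new Set([
     // "sha-256 AB:CD:...": values the user previously authorized.
   ]);

   function extractFingerprint(sdp) {
     // Matches "a=fingerprint:<hash-func> <fingerprint>" (RFC 4572).
     const m = sdp.match(/^a=fingerprint:(\S+) (\S+)/m);
     return m ? m[1] + " " + m[2] : null;
   }

   async function acceptAnswerIfApproved(pc, answer) {
     const fp = extractFingerprint(answer.sdp);
     if (fp === null || !approvedFingerprints.has(fp)) {
       throw new Error("peer keying material not authorized");
     }
     await pc.setRemoteDescription(answer);
   }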
Unfortunately, none of these approaches is satisfactory for all cases. As discussed above, individual consent puts the user's approval in the UI flow for every call. Not only does this quickly become annoying but it can train the user to simply click "OK", at which point the consent becomes useless. Thus, while it may be necessary to have individual consent in some cases, this is not a suitable solution for (for instance) the calling service case. Where necessary, in-flow user interfaces must be carefully designed to avoid the risk of the user blindly clicking through.

The other two options are designed to restrict calls to a given target. Callee-oriented consent provided by the calling site does not work well because a malicious site can claim that the user is calling any user of its choice. One fix for this is to tie calls to a cryptographically established identity. While not suitable for all cases, this approach may be useful for some. If we consider the case of advertising, it's not particularly convenient to require the advertiser to instantiate an iframe on the hosting site just to get permission; a more convenient approach is to cryptographically tie the advertiser's certificate to the communication directly. We're still tying permissions to origin here, but to the media origin (and/or destination) rather than to the Web origin. [I-D.ietf-rtcweb-security-arch] describes mechanisms which facilitate this sort of consent.

Another case where media-level cryptographic identity makes sense is when a user really does not trust the calling site. For instance, I might be worried that the calling service will attempt to bug my computer, but I also want to be able to conveniently call my friends. If consent is tied to particular communications endpoints, then my risk is limited. Naturally, it is somewhat challenging to design UI primitives which express this sort of policy. The problem becomes even more challenging in multi-user calling cases.

4.1.4. Security Properties of the Calling Page

Origin-based security is intended to secure against web attackers. However, we must also consider the case of network attackers. Consider the case where I have granted permission to a calling service by an origin that has the HTTP scheme, e.g., http://calling-service.example.com. If I ever use my computer on an unsecured network (e.g., a hotspot, or if my own home wireless network is insecure) and browse any HTTP site, then an attacker can bug my computer. The attack proceeds like this:

1. I connect to http://anything.example.org/. Note that this site is unaffiliated with the calling service.
2. The attacker modifies my HTTP connection to inject an IFRAME (or a redirect) to http://calling-service.example.com.
3. The attacker forges the response apparently from http://calling-service.example.com/ to inject JS to initiate a call to himself.

Note that this attack does not depend on the media being insecure. Because the call is to the attacker, it is also encrypted to him. Moreover, it need not be executed immediately; the attacker can "infect" the origin semi-permanently (e.g., with a web worker or a popped-up window that is hidden under the main window) and thus be able to bug me long after I have left the infected network. This risk is created by allowing calls at all from a page fetched over HTTP.
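The restriction suggested here is naturally expressed as a secure-context check. The sketch below is illustrative only, assuming the standard isSecureContext and getUserMedia browser APIs; an actual browser would enforce this internally rather than relying on page script:

   // Illustrative sketch: refuse device access from pages that were
   // not delivered over a secure transport.  window.isSecureContext
   // is false for plain-HTTP origins, closing off the attack above.
   async function requestCallMedia(constraints) {
     if (!window.isSecureContext) {
       throw new Error("calls not permitted from non-secure origins");
     }
     return navigator.mediaDevices.getUserMedia(constraints);
   }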
Even if calls are only possible from HTTPS sites, a site that embeds active content (e.g., JavaScript) fetched over HTTP or from an untrusted site remains vulnerable, because that JavaScript is executed in the security context of the page [finer-grained]. Thus, it is also dangerous to allow WebRTC functionality from HTTPS origins that embed mixed content. Note: this issue is not restricted to PAGES which contain mixed content. If a page from a given origin ever loads mixed content then it is possible for a network attacker to infect the browser's notion of that origin semi-permanently.

4.2. Communications Consent Verification

As discussed in Section 3.3, allowing web applications unrestricted network access via the browser introduces the risk of using the browser as an attack platform against machines which would not otherwise be accessible to the malicious site, for instance because they are topologically restricted (e.g., behind a firewall or NAT). In order to prevent this form of attack as well as cross-protocol attacks, it is important to require that the target of traffic explicitly consent to receiving the traffic in question. Until that consent has been verified for a given endpoint, traffic other than the consent handshake MUST NOT be sent to that endpoint.

4.2.1. ICE

Verifying receiver consent requires some sort of explicit handshake, but conveniently we already need one in order to do NAT hole-punching. ICE [RFC5245] includes a handshake designed to verify that the receiving element wishes to receive traffic from the sender. It is important to remember here that the site initiating ICE is presumed malicious; in order for the handshake to be secure, the receiving element MUST demonstrate receipt/knowledge of some value not available to the site (thus preventing the site from forging responses). In order to achieve this objective with ICE, the STUN transaction IDs must be generated by the browser and MUST NOT be made available to the initiating script, even via a diagnostic interface. Verifying receiver consent also requires verifying that the receiver wants to receive traffic from a particular sender, and at this time; for example, a malicious site may simply attempt ICE to known servers that are using ICE for other sessions. ICE provides this verification as well, by using the STUN credentials as a form of per-session shared secret. Those credentials are known to the Web application, but would need to also be known and used by the STUN-receiving element to be useful.

There also needs to be some mechanism for the browser to verify that the target of the traffic continues to wish to receive it. Because ICE keepalives are indications, they will not work here, so some other mechanism is needed, as described in [I-D.muthu-behave-consent-freshness].

4.2.2. Masking

Once consent is verified, there still is some concern about misinterpretation attacks as described by Huang et al. [huang-w2sp]. Where TCP is used, the risk is substantial due to the potential presence of transparent proxies; therefore, if TCP is to be used, WebSockets-style masking MUST be employed.
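For concreteness, the sketch below shows the shape of the WebSockets masking transform [RFC6455]: each client-to-server frame carries a fresh random 32-bit masking key, and payload octet i is XORed with key octet i mod 4, so a malicious script cannot choose the bytes that appear on the wire. This is a minimal sketch of the transform only, not a complete frame encoder:

   // Sketch of RFC 6455-style payload masking (not a full framer).
   // A fresh random 4-byte key per frame makes the on-the-wire
   // bytes unpredictable to the script that supplied the payload.
   function maskPayload(payload) {            // payload: Uint8Array
     const key = new Uint8Array(4);
     crypto.getRandomValues(key);             // fresh per-frame key
     const masked = new Uint8Array(payload.length);
     for (let i = 0; i < payload.length; i++) {
       masked[i] = payload[i] ^ key[i % 4];   // octet i XOR key[i mod 4]
     }
     return { key, masked };                  // key travels in the frame header
   }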
Since DTLS (with the anti-chosen plaintext mechanisms required by TLS 1.1) does not allow the attacker to generate predictable ciphertext, there is no need for masking of protocols running over DTLS (e.g., SCTP over DTLS, UDP over DTLS, etc.).

4.2.3. Backward Compatibility

A requirement to use ICE limits compatibility with legacy non-ICE clients. It seems unsafe to completely remove the requirement for some check. All proposed checks have the common feature that the browser sends some message to the candidate traffic recipient and refuses to send other traffic until that message has been replied to. The message/reply pair must be generated in such a way that an attacker who controls the Web application cannot forge them, generally by having the message contain some secret value that must be incorporated (e.g., echoed, hashed into, etc.). Non-ICE candidates for this role (in cases where the legacy endpoint has a public address) include:

   o STUN checks without using ICE (i.e., the non-WebRTC endpoint sets up a STUN responder).
   o Use of RTCP as an implicit reachability check.

In the RTCP approach, the WebRTC endpoint is allowed to send a limited number of RTP packets prior to receiving consent. This allows a short window of attack. In addition, some legacy endpoints do not support RTCP, so this is a much more expensive solution for such endpoints, for which it would likely be easier to implement ICE. For these two reasons, an RTCP-based approach does not seem to address the security issue satisfactorily.

In the STUN approach, the WebRTC endpoint is able to verify that the recipient is running some kind of STUN endpoint but, unless the STUN responder is integrated with the ICE username/password establishment system, the WebRTC endpoint cannot verify that the recipient consents to this particular call. This may be an issue if existing STUN servers are operated at addresses that are not able to handle bandwidth-based attacks. Thus, this approach does not seem satisfactory either.

If the systems are tightly integrated (i.e., the STUN endpoint responds with responses authenticated with ICE credentials) then this issue does not exist. However, such a design is very close to an ICE-Lite implementation (indeed, arguably is one). An intermediate approach would be to have a STUN extension that indicated that one was responding to WebRTC checks but not computing integrity checks based on the ICE credentials. This would allow the use of standalone STUN servers without the risk of confusing them with legacy STUN servers. If a non-ICE legacy solution is needed, then this is probably the best choice.

Once initial consent is verified, we also need to verify continuing consent, in order to avoid attacks where two people briefly share an IP (e.g., behind a NAT in an Internet cafe) and the attacker arranges for a large, unstoppable traffic flow to the network and then leaves. The appropriate technologies here are fairly similar to those for initial consent, though they are perhaps weaker, since the threat is less severe.

4.2.4. IP Location Privacy

Note that as soon as the callee sends their ICE candidates, the caller learns the callee's IP addresses. The callee's server-reflexive address reveals a lot of information about the callee's location.
In order to avoid tracking, implementations may wish to suppress the start of ICE negotiation until the callee has answered. In addition, either side may wish to hide their location entirely by forcing all traffic through a TURN server.

In ordinary operation, the site learns the browser's IP address, though it may be hidden via mechanisms like Tor [http://www.torproject.org] or a VPN. However, because sites can cause the browser to provide IP addresses, this provides a mechanism for sites to learn about the user's network environment even if the user is behind a VPN that masks their IP address. Implementations may wish to provide settings which suppress all non-VPN candidates if the user is on certain kinds of VPN, especially privacy-oriented systems such as Tor.

4.3. Communications Security

Finally, we consider a problem familiar from the SIP world: communications security. For obvious reasons, it MUST be possible for the communicating parties to establish a channel which is secure against both message recovery and message modification. (See [RFC5479] for more details.) This service must be provided for both data and voice/video. Ideally the same security mechanisms would be used for both types of content. Technology for providing this service (for instance, SRTP [RFC3711], DTLS [RFC4347], and DTLS-SRTP [RFC5763]) is well understood. However, we must examine how this technology fits into the WebRTC context, where the threat model is somewhat different.

In general, it is important to understand that unlike a conventional SIP proxy, the calling service (i.e., the Web server) controls not only the channel between the communicating endpoints but also the application running on the user's browser. While in principle it is possible for the browser to cut the calling service out of the loop and directly present trusted information (and perhaps get consent), practice in modern browsers is to avoid this whenever possible. "In-flow" modal dialogs which require the user to consent to specific actions are particularly disfavored, as human factors research indicates that unless they are made extremely invasive, users simply agree to them without actually consciously giving consent [abarth-rtcweb]. Thus, nearly all the UI will necessarily be rendered by the browser but under control of the calling service. This likely includes the peer's identity information, which, after all, is only meaningful in the context of some calling service.

This limitation does not mean that preventing attack by the calling service is completely hopeless. However, we need to distinguish between two classes of attack:

   Retrospective compromise of calling service.
      The calling service is non-malicious during a call but subsequently is compromised and wishes to attack an older call (often called a "passive attack").

   During-call attack by calling service.
      The calling service is compromised during the call it wishes to attack (often called an "active attack").

Providing security against the former type of attack is practical using the techniques discussed in Section 4.3.1. However, it is extremely difficult to prevent a trusted but malicious calling service from actively attacking a user's calls, either by mounting a MITM attack or by diverting them entirely.
(Note that this attack applies equally to a network attacker if communications to the calling service are not secured.) We discuss some potential approaches and why they are likely to be impractical in Section 4.3.2.

4.3.1. Protecting Against Retrospective Compromise

In a retrospective attack, the calling service was uncompromised during the call, but an attacker subsequently wants to recover the content of the call. We assume that the attacker has access to the protected media stream as well as having full control of the calling service.

If the calling service has access to the traffic keying material (as in SDES [RFC4568]), then retrospective attack is trivial. This form of attack is particularly serious in the Web context because it is standard practice in Web services to run extensive logging and monitoring. Thus, it is highly likely that if the traffic key is part of any HTTP request it will be logged somewhere and thus subject to subsequent compromise. It is this consideration that makes an automatic, public key-based key exchange mechanism imperative for WebRTC (this is a good idea for any communications security system) and this mechanism SHOULD provide perfect forward secrecy (PFS). The signaling channel/calling service can be used to authenticate this mechanism.

In addition, if end-to-end keying is in use, the system MUST NOT provide any APIs to extract either long-term keying material or to directly access any stored traffic keys. Otherwise, an attacker who subsequently compromised the calling service might be able to use those APIs to recover the traffic keys and thus compromise the traffic.

4.3.2. Protecting Against During-Call Attack

Protecting against attacks during a call is a more difficult proposition. Even if the calling service cannot directly access keying material (as recommended in the previous section), it can simply mount a man-in-the-middle attack on the connection, telling Alice that she is calling Bob and Bob that he is calling Alice, while in fact the calling service is acting as a calling bridge and capturing all the traffic. Protecting against this form of attack requires positive authentication of the remote endpoint such as explicit out-of-band key verification (e.g., by a fingerprint) or a third-party identity service as described in [I-D.ietf-rtcweb-security-arch].

4.3.2.1. Key Continuity

One natural approach is to use "key continuity". While a malicious calling service can present any identity it chooses to the user, it cannot produce a private key that maps to a given public key. Thus, it is possible for the browser to note a given user's public key and generate an alarm whenever that user's key changes. SSH [RFC4251] uses a similar technique. (Note that the need to avoid explicit user consent on every call precludes the browser requiring an immediate manual check of the peer's key).

Unfortunately, this sort of key continuity mechanism is far less useful in the WebRTC context. First, much of the virtue of WebRTC (and any Web application) is that it is not bound to a particular piece of client software. Thus, it will be not only possible but routine for a user to use multiple browsers on different computers which will of course have different keying material (SACRED [RFC3760] notwithstanding).
Thus, users will frequently be alerted to key mismatches which are in fact completely legitimate, with the result that they are trained to simply click through them. As it is known that users routinely will click through far more dire warnings [cranor-wolf], it seems extremely unlikely that any key continuity mechanism will be effective rather than simply annoying.

Moreover, it is trivial to bypass even this kind of mechanism. Recall that unlike the case of SSH, the browser never directly gets the peer's identity from the user. Rather, it is provided by the calling service. Even enabling a mechanism of this type would require an API to allow the calling service to tell the browser "this is a call to user X". All the calling service needs to do to avoid triggering a key continuity warning is to tell the browser that "this is a call to user Y" where Y is close to X. Even if the user actually checks the other side's name (which all available evidence indicates is unlikely), this would require (a) the browser to use trusted UI to provide the name and (b) the user to not be fooled by similar-appearing names.

4.3.2.2. Short Authentication Strings

ZRTP [RFC6189] uses a "short authentication string" (SAS) which is derived from the key agreement protocol. This SAS is designed to be compared by the users (e.g., read aloud over the voice channel or transmitted via an out-of-band channel) and if confirmed by both sides precludes MITM attack. The intention is that the SAS is used once and then key continuity (though a different mechanism from that discussed above) is used thereafter.

Unfortunately, the SAS does not offer a practical solution to the problem of a compromised calling service. "Voice conversion" systems, which modify voice from one speaker to make it sound like another, are an active area of research. These systems are already good enough to fool both automatic recognition systems [farus-conversion] and humans [kain-conversion] in many cases, and are of course likely to improve in future, especially in an environment where the user just wants to get on with the phone call. Thus, even if SAS is effective today, it is likely not to be so for much longer.

Additionally, it is unclear that users will actually use an SAS. As discussed above, the browser UI constraints preclude requiring the SAS exchange prior to completing the call and so it must be voluntary; at most the browser will provide some UI indicator that the SAS has not yet been checked. However, it is well-known that when faced with optional security mechanisms, many users simply ignore them [whitten-johnny].

Once users have checked the SAS once, key continuity is required to avoid them needing to check it on every call. However, this is problematic for reasons indicated in Section 4.3.2.1. In principle it is of course possible to render a different UI element to indicate that calls are using an unauthenticated set of keying material (recall that the attacker can just present a slightly different name so that the attack shows the same UI as a call to a new device or to someone you haven't called before) but as a practical matter, users simply ignore such indicators even in the rather more dire case of mixed content warnings.

4.3.2.3. Third Party Identity
The conventional approach to providing communications identity has of course been to have some third party identity system (e.g., PKI) to authenticate the endpoints. Such mechanisms have proven to be too cumbersome for use by typical users (and nearly too cumbersome for administrators). However, a new generation of Web-based identity providers (BrowserID, Federated Google Login, Facebook Connect, OAuth, OpenID, WebFinger) has recently been developed that uses Web technologies to provide lightweight (from the user's perspective) third-party authenticated transactions. It is possible to use systems of this type to authenticate WebRTC calls, linking them to existing user notions of identity (e.g., Facebook adjacencies). Specifically, the third-party identity system is used to bind the user's identity to cryptographic keying material which is then used to authenticate the calling endpoints. Calls which are authenticated in this fashion are naturally resistant even to active MITM attack by the calling site.

Note that there is one special case in which PKI-style certificates do provide a practical solution: calls from end-users to large sites. For instance, if you are making a call to Amazon.com, then Amazon can easily get a certificate to authenticate their media traffic, just as they get one to authenticate their Web traffic. This does not provide additional security value in cases in which the calling site and the media peer are one and the same, but might be useful in cases in which third parties (e.g., ad networks or retailers) arrange for calls but do not participate in them.

4.3.2.4. Page Access to Media

Verifying the identity of the far media endpoint is a necessary but not sufficient condition for providing media security. In WebRTC, media flows are rendered into HTML5 MediaStreams which can be manipulated by the calling site. Obviously, if the site can modify or view the media, then the user is not getting the level of assurance they would expect from being able to authenticate their peer. In many cases, this is acceptable because the user values site-based special effects over complete security from the site. However, there are also cases where users wish to know that the site cannot interfere. In order to facilitate that, it will be necessary to provide features whereby the site can verifiably give up access to the media streams. This verification must be possible both from the local side and the remote side. I.e., I must be able to verify that the person I am calling has engaged a secure media mode. In order to achieve this it will be necessary to cryptographically bind an indication of the local media access policy into the cryptographic authentication procedures detailed in the previous sections.

4.3.3. Malicious Peers

One class of attack that we do not generally try to prevent is malicious peers. For instance, no matter what confidentiality measures you employ, the person you are talking to might record the call and publish it on the Internet. Similarly, we do not attempt to prevent them from using voice or video processing technology to hide or change their appearance. While technologies (DRM, etc.) do exist to attempt to address these issues, they are generally not compatible with open systems and WebRTC does not address them.
Similarly, we make no attempt to prevent prank calling or other unwanted calls. In general, this is in the scope of the calling site, though because WebRTC does offer some forms of strong authentication, that may be useful as part of a defense against such attacks.

4.4. Privacy Considerations

4.4.1. Correlation of Anonymous Calls

While persistent endpoint identifiers can be a useful security feature (see Section 4.3.2.1), they can also represent a privacy threat in settings where the user wishes to be anonymous. WebRTC provides a number of possible persistent identifiers such as DTLS certificates (if they are reused between connections) and RTCP CNAMEs (if generated according to [RFC6222] rather than the privacy-preserving mode of [I-D.ietf-avtcore-6222bis]). In order to prevent this type of correlation, browsers need to provide mechanisms to reset these identifiers (e.g., with the same lifetime as cookies). Moreover, the API should provide mechanisms to allow sites intended for anonymous calling to force the minting of fresh identifiers.

4.4.2. Browser Fingerprinting

Any new set of API features adds a risk of browser fingerprinting, and WebRTC is no exception. Specifically, sites can use the presence or absence of specific devices as a browser fingerprint. In general, the API needs to be balanced between functionality and the incremental fingerprint risk.

5. Security Considerations

This entire document is about security.

6. Acknowledgements

Bernard Aboba, Harald Alvestrand, Dan Druta, Cullen Jennings, Alan Johnston, Hadriel Kaplan (S 4.2.1), Matthew Kaufman, Martin Thomson, Magnus Westerlund.

7. Changes Since -04

   o Replaced RTCWEB and RTC-Web with WebRTC, except when referring to the IETF WG.
   o Removed discussion of the IFRAMEd advertisement case, since we decided not to treat it specially.
   o Added a privacy considerations section.
   o Significant edits to the SAS section to reflect Alan Johnston's comments.
   o Added some discussion of IP location privacy and Tor.
   o Updated the "communications consent" section to reflect draft-muthu.
   o Added a section about "malicious peers".
   o Added a section describing screen sharing threats.
   o Assorted editorial changes.

8. References

8.1. Normative References

   [I-D.ietf-rtcweb-overview]
              Alvestrand, H., "Overview: Real Time Protocols for Browser-based Applications", draft-ietf-rtcweb-overview-06 (work in progress), February 2013.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2. Informative References

   [CORS]     van Kesteren, A., "Cross-Origin Resource Sharing".

   [I-D.ietf-avtcore-6222bis]
              Begen, A., Perkins, C., Wing, D., and E. Rescorla, "Guidelines for Choosing RTP Control Protocol (RTCP) Canonical Names (CNAMEs)", draft-ietf-avtcore-6222bis-06 (work in progress), July 2013.

   [I-D.ietf-rtcweb-security-arch]
              Rescorla, E., "RTCWEB Security Architecture", draft-ietf-rtcweb-security-arch-06 (work in progress), January 2013.

   [I-D.kaufman-rtcweb-security-ui]
              Kaufman, M., "Client Security User Interface Requirements for RTCWEB", draft-kaufman-rtcweb-security-ui-00 (work in progress), June 2011.

   [I-D.muthu-behave-consent-freshness]
              Perumal, M., Wing, D., R, R., and H. Kaplan, "STUN Usage for Consent Freshness", draft-muthu-behave-consent-freshness-03 (work in progress), February 2013.
Kaplan, "STUN Usage 998 for Consent Freshness", 999 draft-muthu-behave-consent-freshness-03 (work in 1000 progress), February 2013. 1002 [RFC2818] Rescorla, E., "HTTP Over TLS", RFC 2818, May 2000. 1004 [RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, 1005 A., Peterson, J., Sparks, R., Handley, M., and E. 1006 Schooler, "SIP: Session Initiation Protocol", RFC 3261, 1007 June 2002. 1009 [RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC 1010 Text on Security Considerations", BCP 72, RFC 3552, 1011 July 2003. 1013 [RFC3711] Baugher, M., McGrew, D., Naslund, M., Carrara, E., and K. 1014 Norrman, "The Secure Real-time Transport Protocol (SRTP)", 1015 RFC 3711, March 2004. 1017 [RFC3760] Gustafson, D., Just, M., and M. Nystrom, "Securely 1018 Available Credentials (SACRED) - Credential Server 1019 Framework", RFC 3760, April 2004. 1021 [RFC4251] Ylonen, T. and C. Lonvick, "The Secure Shell (SSH) 1022 Protocol Architecture", RFC 4251, January 2006. 1024 [RFC4347] Rescorla, E. and N. Modadugu, "Datagram Transport Layer 1025 Security", RFC 4347, April 2006. 1027 [RFC4568] Andreasen, F., Baugher, M., and D. Wing, "Session 1028 Description Protocol (SDP) Security Descriptions for Media 1029 Streams", RFC 4568, July 2006. 1031 [RFC5245] Rosenberg, J., "Interactive Connectivity Establishment 1032 (ICE): A Protocol for Network Address Translator (NAT) 1033 Traversal for Offer/Answer Protocols", RFC 5245, 1034 April 2010. 1036 [RFC5479] Wing, D., Fries, S., Tschofenig, H., and F. Audet, 1037 "Requirements and Analysis of Media Security Management 1038 Protocols", RFC 5479, April 2009. 1040 [RFC5763] Fischl, J., Tschofenig, H., and E. Rescorla, "Framework 1041 for Establishing a Secure Real-time Transport Protocol 1042 (SRTP) Security Context Using Datagram Transport Layer 1043 Security (DTLS)", RFC 5763, May 2010. 1045 [RFC6189] Zimmermann, P., Johnston, A., and J. Callas, "ZRTP: Media 1046 Path Key Agreement for Unicast Secure RTP", RFC 6189, 1047 April 2011. 1049 [RFC6222] Begen, A., Perkins, C., and D. Wing, "Guidelines for 1050 Choosing RTP Control Protocol (RTCP) Canonical Names 1051 (CNAMEs)", RFC 6222, April 2011. 1053 [RFC6454] Barth, A., "The Web Origin Concept", RFC 6454, 1054 December 2011. 1056 [RFC6455] Fette, I. and A. Melnikov, "The WebSocket Protocol", 1057 RFC 6455, December 2011. 1059 [SWF] Adobe, "SWF File Format Specification Version 19". 1061 [abarth-rtcweb] 1062 Barth, A., "Prompting the user is security failure", RTC- 1063 Web Workshop. 1065 [cranor-wolf] 1066 Sunshine, J., Egelman, S., Almuhimedi, H., Atri, N., and 1067 L. cranor, "Crying Wolf: An Empirical Study of SSL Warning 1068 Effectiveness", Proceedings of the 18th USENIX Security 1069 Symposium, 2009. 1071 [farus-conversion] 1072 Farrus, M., Erro, D., and J. Hernando, "Speaker 1073 Recognition Robustness to Voice Conversion". 1075 [finer-grained] 1076 Barth, A. and C. Jackson, "Beware of Finer-Grained 1077 Origins", W2SP, 2008. 1079 [huang-w2sp] 1080 Huang, L-S., Chen, E., Barth, A., Rescorla, E., and C. 1081 Jackson, "Talking to Yourself for Fun and Profit", W2SP, 1082 2011. 1084 [kain-conversion] 1085 Kain, A. and M. Macon, "Design and Evaluation of a Voice 1086 Conversion Algorithm based on Spectral Envelope Mapping 1087 and Residual Prediction", Proceedings of ICASSP, May 1088 2001. 1090 [whitten-johnny] 1091 Whitten, A. and J. Tygar, "Why Johnny Can't Encrypt: A 1092 Usability Evaluation of PGP 5.0", Proceedings of the 8th 1093 USENIX Security Symposium, 1999. 
Author's Address

   Eric Rescorla
   RTFM, Inc.
   2064 Edgewood Drive
   Palo Alto, CA 94303
   USA

   Phone: +1 650 678 2350
   Email: ekr@rtfm.com