RTC-Web                                                      E. Rescorla
Internet-Draft                                                RTFM, Inc.
Intended status: Standards Track                        February 1, 2019
Expires: August 5, 2019

                   Security Considerations for WebRTC
                     draft-ietf-rtcweb-security-11

Abstract

   WebRTC is a protocol suite for use with real-time applications that
   can be deployed in browsers - "real time communication on the Web".
   This document defines the WebRTC threat model and analyzes the
   security threats of WebRTC in that model.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on August 5, 2019.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  The Browser Threat Model
     3.1.  Access to Local Resources
     3.2.  Same-Origin Policy
     3.3.  Bypassing SOP: CORS, WebSockets, and consent to communicate
   4.  Security for WebRTC Applications
     4.1.  Access to Local Devices
       4.1.1.  Threats from Screen Sharing
       4.1.2.  Calling Scenarios and User Expectations
         4.1.2.1.  Dedicated Calling Services
         4.1.2.2.  Calling the Site You're On
       4.1.3.  Origin-Based Security
       4.1.4.  Security Properties of the Calling Page
     4.2.  Communications Consent Verification
       4.2.1.  ICE
       4.2.2.  Masking
       4.2.3.  Backward Compatibility
       4.2.4.  IP Location Privacy
     4.3.  Communications Security
       4.3.1.  Protecting Against Retrospective Compromise
       4.3.2.  Protecting Against During-Call Attack
         4.3.2.1.  Key Continuity
         4.3.2.2.  Short Authentication Strings
         4.3.2.3.  Third Party Identity
         4.3.2.4.  Page Access to Media
       4.3.3.  Malicious Peers
     4.4.  Privacy Considerations
       4.4.1.  Correlation of Anonymous Calls
       4.4.2.  Browser Fingerprinting
   5.  Security Considerations
   6.  Acknowledgements
   7.  IANA Considerations
   8.  Changes Since -04
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Author's Address

1.  Introduction

   The Real-Time Communications on the Web (RTCWEB) working group has
   standardized protocols for real-time communications between Web
   browsers, generally called "WebRTC" [I-D.ietf-rtcweb-overview].  The
   major use cases for WebRTC technology are real-time audio and/or
   video calls, Web conferencing, and direct data transfer.  Unlike
   most conventional real-time systems (e.g., SIP-based [RFC3261]
   softphones), WebRTC communications are directly controlled by some
   Web server.  A simple case is shown below.

                        +----------------+
                        |                |
                        |   Web Server   |
                        |                |
                        +----------------+
                            ^        ^
                           /          \
                    HTTP  /            \  HTTP
                     or  /              \  or
             WebSockets /                \ WebSockets
                       v                  v
                  JS API                  JS API
            +-----------+                +-----------+
            |           |      Media     |           |
            |  Browser  |<-------------->|  Browser  |
            |           |                |           |
            +-----------+                +-----------+
                Alice                        Bob

                   Figure 1: A simple WebRTC system

   In the system shown in Figure 1, Alice and Bob both have WebRTC-
   enabled browsers and they visit some Web server which operates a
   calling service.  Each of their browsers exposes standardized
   JavaScript calling APIs (implemented as browser built-ins) which are
   used by the Web server to set up a call between Alice and Bob.  The
   Web server also serves as the signaling channel to transport control
   messages between the browsers.  While this system is topologically
   similar to a conventional SIP-based system (with the Web server
   acting as the signaling service and browsers acting as softphones),
   control has moved to the central Web server; the browser simply
   provides API points that are used by the calling service.  As with
   any Web application, the Web server can move logic between the
   server and JavaScript in the browser, but regardless of where the
   code is executing, it is ultimately under the control of the server.

   It should be immediately apparent that this type of system poses new
   security challenges beyond those of a conventional VoIP system.  In
   particular, it needs to contend with malicious calling services.
   For example, if the calling service can cause the browser to make a
   call at any time to any callee of its choice, then this facility can
   be used to bug a user's computer without their knowledge, simply by
   placing a call to some recording service.  More subtly, if the
   exposed APIs allow the server to instruct the browser to send
   arbitrary content, then they can be used to bypass firewalls or
   mount denial of service attacks.  Any successful system will need to
   be resistant to this and other attacks.

   A companion document [I-D.ietf-rtcweb-security-arch] describes a
   security architecture intended to address the issues raised in this
   document.
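   As an informal illustration of the API relationship described
   above, the following TypeScript sketch shows the rough shape of the
   Figure 1 flow from Alice's side.  The sendToServer() and
   onServerMessage() functions are hypothetical stand-ins for the
   calling site's signaling channel (HTTP or WebSockets); the rest is
   the browser's standardized API surface.

      // Sketch only: the signaling helpers are assumed, not real APIs.
      declare function sendToServer(msg: object): void;
      declare function onServerMessage(handler: (msg: any) => void): void;

      async function startCall(): Promise<void> {
        // The calling service's JS drives the browser's built-in APIs.
        const pc = new RTCPeerConnection({
          iceServers: [{ urls: "stun:stun.example.com" }],  // placeholder
        });
        const media = await navigator.mediaDevices.getUserMedia({
          audio: true,
          video: true,
        });
        media.getTracks().forEach((track) => pc.addTrack(track, media));
        pc.onicecandidate = (e) => sendToServer({ candidate: e.candidate });

        const offer = await pc.createOffer();
        await pc.setLocalDescription(offer);
        sendToServer({ offer });  // the Web server relays this to Bob

        onServerMessage(async (msg) => {
          if (msg.answer) await pc.setRemoteDescription(msg.answer);
          if (msg.candidate) await pc.addIceCandidate(msg.candidate);
        });
      }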
2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

3.  The Browser Threat Model

   The security requirements for WebRTC follow directly from the
   requirement that the browser's job is to protect the user.  Huang et
   al. [huang-w2sp] summarize the core browser security guarantee as:

      Users can safely visit arbitrary web sites and execute scripts
      provided by those sites.

   It is important to realize that this includes sites hosting
   arbitrary malicious scripts.  The motivation for this requirement is
   simple: it is trivial for attackers to divert users to sites of
   their choice.  For instance, an attacker can purchase display
   advertisements which direct the user (either automatically or via
   user clicking) to their site, at which point the browser will
   execute the attacker's scripts.  Thus, it is important that it be
   safe to view arbitrarily malicious pages.  Of course, browsers
   inevitably have bugs which cause them to fall short of this goal,
   but any new WebRTC functionality must be designed with the intent to
   meet this standard.  The remainder of this section provides more
   background on the existing Web security model.

   In this model, then, the browser acts as a TRUSTED COMPUTING BASE
   (TCB) both from the user's perspective and to some extent from the
   server's.  While HTML and JavaScript (JS) provided by the server can
   cause the browser to execute a variety of actions, those scripts
   operate in a sandbox that isolates them both from the user's
   computer and from each other, as detailed below.

   Conventionally, we distinguish between two classes of attacker: WEB
   ATTACKERS, who are able to induce you to visit their sites but do
   not control the network, and NETWORK ATTACKERS, who are able to
   control your network.  Network attackers correspond to the [RFC3552]
   "Internet Threat Model".  Note that for non-HTTPS traffic, a network
   attacker is also a Web attacker, since it can inject traffic as if
   it were any non-HTTPS Web site.  Thus, when analyzing HTTP
   connections, we must assume that traffic is going to the attacker.

3.1.  Access to Local Resources

   While the browser has access to local resources such as keying
   material, files, the camera, and the microphone, it strictly limits
   or forbids web servers from accessing those same resources.  For
   instance, while it is possible to produce an HTML form which will
   allow file upload, a script cannot do so without user consent and in
   fact cannot even suggest a specific file (e.g., /etc/passwd); the
   user must explicitly select the file and consent to its upload.
   [Note: in many cases browsers are explicitly designed to avoid
   dialogs with the semantics of "click here to bypass security
   checks", as extensive research shows that users are prone to consent
   under such circumstances.]

   Similarly, while Flash programs (SWFs) [SWF] can access the camera
   and microphone, they explicitly require that the user consent to
   that access.  In addition, some resources simply cannot be accessed
   from the browser at all.  For instance, there is no real way to run
   specific executables directly from a script (though the user can of
   course be induced to download executable files and run them).

3.2.  Same-Origin Policy

   Many other resources are accessible but isolated.  For instance,
   while scripts are allowed to make HTTP requests via the
   XMLHttpRequest() API (see [XmlHttpRequest]), those requests are not
   allowed to be made to any server, but rather solely to the same
   ORIGIN from which the script came [RFC6454] (although CORS [CORS]
   and WebSockets [RFC6455] provide an escape hatch from this
   restriction, as described below).  This SAME ORIGIN POLICY (SOP)
   prevents server A from mounting attacks on server B via the user's
   browser, which protects both the user (e.g., from misuse of his
   credentials) and server B (e.g., from DoS attack).

   More generally, SOP forces scripts from each site to run in their
   own, isolated, sandboxes.  While there are techniques to allow them
   to interact, those interactions generally must be mutually
   consensual (by each site) and are limited to certain channels.  For
   instance, multiple pages/browser panes from the same origin can read
   each other's JS variables, but pages from different origins--or even
   iframes from different origins on the same page--cannot.
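   The effect of SOP, and of the CORS escape hatch described in the
   next section, can be seen in the following TypeScript sketch.  The
   origins are hypothetical, and the failure path assumes that
   b.example has not opted in via CORS.

      // Script running on https://a.example.  Unless https://b.example
      // replies with an Access-Control-Allow-Origin header covering
      // https://a.example (or "*"), the browser withholds the response
      // from the script and the fetch rejects.
      async function tryCrossOrigin(): Promise<void> {
        try {
          const resp = await fetch("https://b.example/api/data");
          console.log("cross-origin consent granted:", await resp.text());
        } catch (err) {
          console.log("blocked by the same-origin policy:", err);
        }
      }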
3.3.  Bypassing SOP: CORS, WebSockets, and consent to communicate

   While SOP serves an important security function, it also makes it
   inconvenient to write certain classes of applications.  In
   particular, mash-ups, in which a script from origin A uses resources
   from origin B, can only be achieved via a certain amount of hackery.
   The W3C Cross-Origin Resource Sharing (CORS) spec [CORS] is a
   response to this demand.  In CORS, when a script from origin A
   executes what would otherwise be a forbidden cross-origin request,
   the browser instead contacts the target server to determine whether
   it is willing to allow cross-origin requests from A.  If it is so
   willing, the browser then allows the request.  This consent
   verification process is designed to safely allow cross-origin
   requests.

   While CORS is designed to allow cross-origin HTTP requests,
   WebSockets [RFC6455] allows cross-origin establishment of
   transparent channels.  Once a WebSockets connection has been
   established from a script to a site, the script can exchange any
   traffic it likes without being required to frame it as a series of
   HTTP request/response transactions.  As with CORS, a WebSockets
   transaction starts with a consent verification stage to avoid
   allowing scripts to simply send arbitrary data to another origin.

   While consent verification is conceptually simple--just do a
   handshake before you start exchanging the real data--experience has
   shown that designing a correct consent verification system is
   difficult.  In particular, Huang et al. [huang-w2sp] have shown
   vulnerabilities in the existing Java and Flash consent verification
   techniques and in a simplified version of the WebSockets handshake.
   It is especially important to be wary of CROSS-PROTOCOL attacks in
   which the attacking script generates traffic which is acceptable to
   some non-Web protocol state machine.  In order to resist this form
   of attack, WebSockets incorporates a masking technique intended to
   randomize the bits on the wire, thus making it more difficult to
   generate traffic which resembles a given protocol.
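   As a rough illustration of the masking idea (this is a sketch of
   the client-to-server masking scheme of [RFC6455], not an
   interoperable implementation), each frame payload is XORed with a
   fresh random 4-byte key so that a script cannot choose the bytes
   that actually appear on the wire:

      // The mask key is chosen by the browser per frame and carried in
      // the frame header; the script never controls the masked bytes.
      function maskPayload(
        payload: Uint8Array
      ): { maskKey: Uint8Array; masked: Uint8Array } {
        const maskKey = crypto.getRandomValues(new Uint8Array(4));
        const masked = payload.map((byte, i) => byte ^ maskKey[i % 4]);
        return { maskKey, masked };
      }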
4.  Security for WebRTC Applications

4.1.  Access to Local Devices

   As discussed in Section 1, allowing arbitrary sites to initiate
   calls violates the core Web security guarantee; without some access
   restrictions on local devices, any malicious site could simply bug a
   user.  At minimum, then, it MUST NOT be possible for arbitrary sites
   to initiate calls to arbitrary locations without user consent.  This
   immediately raises the question, however, of what should be the
   scope of user consent.

   In order for the user to make an intelligent decision about whether
   to allow a call (and hence his camera and microphone input to be
   routed somewhere), he must understand either who is requesting
   access, where the media is going, or both.  As detailed below, there
   are two basic conceptual models:

   1.  You are sending your media to entity A because you want to talk
       to entity A (e.g., your mother).

   2.  Entity A (e.g., a calling service) asks to access the user's
       devices with the assurance that it will transfer the media to
       entity B (e.g., your mother).

   In either case, identity is at the heart of any consent decision.
   Moreover, the identity of the party the browser is connecting to is
   all that the browser can meaningfully enforce; if you are calling A,
   A can simply forward the media to C.  Similarly, if you authorize A
   to place a call to B, A can call C instead.  In either case, all the
   browser is able to do is verify and check authorization for whoever
   is controlling where the media goes.  The target of the media can of
   course advertise a security/privacy policy, but this is not
   something that the browser can enforce.  Even so, there are a
   variety of different consent scenarios that motivate different
   technical consent mechanisms.  We discuss these mechanisms in the
   sections below.

   It's important to understand that consent to access local devices is
   largely orthogonal to consent to transmit various kinds of data over
   the network (see Section 4.2).  Consent for device access is largely
   a matter of protecting the user's privacy from malicious sites.  By
   contrast, consent to send network traffic is about preventing the
   user's browser from being used to attack its local network.  Thus,
   we need to ensure communications consent even if the site is not
   able to access the camera and microphone at all (hence the
   WebSockets consent mechanism) and similarly we need to be concerned
   with the site accessing the user's camera and microphone even if the
   data is to be sent back to the site via conventional HTTP-based
   network mechanisms such as HTTP POST.
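   In current browsers, this device-access consent is mediated by the
   getUserMedia() permission prompt, sketched below in TypeScript
   (exact error names vary between implementations):

      // Device access is gated on an explicit browser/user permission
      // decision; the promise rejects (e.g., NotAllowedError) if the
      // user declines.
      async function requestDevices(): Promise<MediaStream | null> {
        try {
          return await navigator.mediaDevices.getUserMedia({
            audio: true,
            video: true,
          });
        } catch (err) {
          console.log("device access denied by user or policy:", err);
          return null;
        }
      }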
4.1.1.  Threats from Screen Sharing

   In addition to camera and microphone access, there has been demand
   for screen and/or application sharing functionality.  Unfortunately,
   the security implications of this functionality are much harder for
   users to intuitively analyze than for camera and microphone access.
   (See http://lists.w3.org/Archives/Public/public-webrtc/2013Mar/
   0024.html for a full analysis.)

   The most obvious threats are simply those of "oversharing".  That
   is, the user may believe they are sharing a window when in fact they
   are sharing an application, or may forget they are sharing their
   whole screen, icons, notifications, and all.  This is already an
   issue with existing screen sharing technologies and is made somewhat
   worse if a partially trusted site is responsible for asking for the
   resource to be shared rather than having the user propose it.

   A less obvious threat involves the impact of screen sharing on the
   Web security model.  A key part of the Same-Origin Policy is that
   HTML or JS from site A can reference content from site B and cause
   the browser to load it, but (unless explicitly permitted) cannot see
   the result.  However, if a web application is screen sharing the
   browser, this invariant is violated, with serious security
   consequences.  For example, an attacker site might request screen
   sharing and then briefly open up a new window to the user's bank or
   webmail account, using screen sharing to read the resulting
   displayed content.  A more sophisticated attack would be to open up
   a source view window to a site and use the screen sharing result to
   view anti-cross-site-request-forgery tokens.

   These threats suggest that screen/application sharing might need a
   higher level of user consent than access to the camera or
   microphone.
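   One mitigation adopted by current browsers is to make surface
   selection a browser decision rather than a site decision: with
   getDisplayMedia(), sketched below, the page asks only for "a display
   surface" and the user's picker, not the page, chooses which screen,
   window, or tab (if any) is captured.

      // The page cannot name a particular window or screen; the
      // browser's own picker UI decides what, if anything, is shared.
      async function shareSomething(): Promise<MediaStream> {
        return navigator.mediaDevices.getDisplayMedia({ video: true });
      }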
4.1.2.  Calling Scenarios and User Expectations

   While a large number of calling scenarios are possible, the
   scenarios discussed in this section illustrate many of the
   difficulties of identifying the relevant scope of consent.

4.1.2.1.  Dedicated Calling Services

   The first scenario we consider is a dedicated calling service.  In
   this case, the user has a relationship with a calling site and
   repeatedly makes calls on it.  It is likely that, rather than having
   to give permission for each call, the user will want to give the
   calling service long-term access to the camera and microphone.  This
   is a natural fit for a long-term consent mechanism (e.g., installing
   an app store "application" to indicate permission for the calling
   service).  A variant of the dedicated calling service is a gaming
   site (e.g., a poker site) which hosts a dedicated calling service to
   allow players to call each other.

   With any kind of service where the user may use the same service to
   talk to many different people, there is a question about whether the
   user can know who they are talking to.  If I grant permission to
   calling service A to make calls on my behalf, then I am implicitly
   granting it permission to bug my computer whenever it wants.  This
   suggests another consent model in which a site is authorized to make
   calls but only to certain target entities (identified via media-
   plane cryptographic mechanisms as described in Section 4.3.2 and
   especially Section 4.3.2.3).  Note that the question of consent here
   is related to but distinct from the question of peer identity: I
   might be willing to allow a calling site to in general initiate
   calls on my behalf but still have some calls via that site where I
   can be sure that the site is not listening in.

4.1.2.2.  Calling the Site You're On

   Another simple scenario is calling the site you're actually
   visiting.  The paradigmatic case here is the "click here to talk to
   a representative" windows that appear on many shopping sites.  In
   this case, the user's expectation is that they are calling the site
   they're actually visiting.  However, it is unlikely that they want
   to provide a general consent to such a site; just because I want
   some information on a car doesn't mean that I want the car
   manufacturer to be able to activate my microphone whenever they
   please.  Thus, this suggests the need for a second consent mechanism
   where I only grant consent for the duration of a given call.  As
   described in Section 3.1, great care must be taken in the design of
   this interface to avoid users just clicking through.  Note also that
   the user interface chrome, which is the representation through which
   the user interacts with the user agent itself, must clearly display
   elements showing that the call is continuing, in order to avoid
   attacks where the calling site just leaves it up indefinitely but
   shows a Web UI that implies otherwise.

4.1.3.  Origin-Based Security

   Now that we have seen these use cases, we can start to reason about
   the security requirements.

   As discussed in Section 3.2, the basic unit of Web sandboxing is the
   origin, and so it is natural to scope consent to origin.
   Specifically, a script from origin A MUST only be allowed to
   initiate communications (and hence to access camera and microphone)
   if the user has specifically authorized access for that origin.  It
   is of course technically possible to have coarser-scoped
   permissions, but because the Web model is scoped to origin, this
   creates a difficult mismatch.

   Arguably, origin is not fine-grained enough.  Consider the situation
   where Alice visits a site and authorizes it to make a single call.
   If consent is expressed solely in terms of origin, then at any
   future visit to that site (including one induced via mash-up or ad
   network), the site can bug Alice's computer, use the computer to
   place bogus calls, etc.  While in principle Alice could grant and
   then revoke the privilege, in practice privileges accumulate; if we
   are concerned about this attack, something else is needed.  There
   are a number of potential countermeasures to this sort of issue.

   Individual Consent
      Ask the user for permission for each call.

   Callee-oriented Consent
      Only allow calls to a given user.

   Cryptographic Consent
      Only allow calls to a given set of peer keying material or to a
      cryptographically established identity.

   Unfortunately, none of these approaches is satisfactory for all
   cases.  As discussed above, individual consent puts the user's
   approval in the UI flow for every call.  Not only does this quickly
   become annoying but it can train the user to simply click "OK", at
   which point the consent becomes useless.  Thus, while it may be
   necessary to have individual consent in some cases, this is not a
   suitable solution for (for instance) the calling service case.
   Where necessary, in-flow user interfaces must be carefully designed
   to avoid the risk of the user blindly clicking through.

   The other two options are designed to restrict calls to a given
   target.  Callee-oriented consent provided by the calling site would
   not work well because a malicious site can claim that the user is
   calling any user of its choice.  One fix for this is to tie calls to
   a cryptographically established identity.  While not suitable for
   all cases, this approach may be useful for some.  If we consider the
   case of advertising, it's not particularly convenient to require the
   advertiser to instantiate an iframe on the hosting site just to get
   permission; a more convenient approach is to cryptographically tie
   the advertiser's certificate to the communication directly.  We're
   still tying permissions to origin here, but to the media origin
   (and/or destination) rather than to the Web origin.
   [I-D.ietf-rtcweb-security-arch] describes mechanisms which
   facilitate this sort of consent.

   Another case where media-level cryptographic identity makes sense is
   when a user really does not trust the calling site.  For instance, I
   might be worried that the calling service will attempt to bug my
   computer, but I also want to be able to conveniently call my
   friends.  If consent is tied to particular communications endpoints,
   then my risk is limited.  Naturally, it is somewhat challenging to
   design UI primitives which express this sort of policy.  The problem
   becomes even more challenging in multi-user calling cases.
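   In today's browsers this origin scoping is visible through the
   Permissions API, as in the sketch below; note that support for the
   "camera" permission name varies between browsers, so this is
   illustrative rather than portable.

      // Permission state is keyed to the requesting origin; a page can
      // query, but not unilaterally change, its own origin's grant.
      async function cameraPermission(): Promise<PermissionState> {
        const status = await navigator.permissions.query({
          name: "camera" as PermissionName,
        });
        return status.state;  // "granted", "denied", or "prompt"
      }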
4.1.4.  Security Properties of the Calling Page

   Origin-based security is intended to secure against web attackers.
   However, we must also consider the case of network attackers.
   Consider the case where I have granted permission to a calling
   service on an origin that has the HTTP scheme, e.g.,
   http://calling-service.example.com.  If I ever use my computer on an
   unsecured network (e.g., a hotspot or if my own home wireless
   network is insecure) and browse any HTTP site, then an attacker can
   bug my computer.  The attack proceeds like this:

   1.  I connect to http://anything.example.org/.  Note that this site
       is unaffiliated with the calling service.

   2.  The attacker modifies my HTTP connection to inject an IFRAME (or
       a redirect) to http://calling-service.example.com.

   3.  The attacker forges the response apparently from
       http://calling-service.example.com/ to inject JS to initiate a
       call to himself.

   Note that this attack does not depend on the media being insecure.
   Because the call is to the attacker, it is also encrypted to him.
   Moreover, it need not be executed immediately; the attacker can
   "infect" the origin semi-permanently (e.g., with a web worker or a
   popped-up window that is hidden under the main window) and thus be
   able to bug me long after I have left the infected network.  This
   risk is created by allowing calls at all from a page fetched over
   HTTP.

   Even if calls are only possible from HTTPS [RFC2818] sites, if those
   sites include active content (e.g., JavaScript) from an untrusted
   site, that JavaScript is executed in the security context of the
   page [finer-grained].  This could lead to compromise of a call even
   if the parent page is safe.  Note: this issue is not restricted to
   PAGES which contain untrusted content.  If a page from a given
   origin ever loads JavaScript from an attacker, then it is possible
   for that attacker to infect the browser's notion of that origin
   semi-permanently.

4.2.  Communications Consent Verification

   As discussed in Section 3.3, allowing web applications unrestricted
   network access via the browser introduces the risk of using the
   browser as an attack platform against machines which would not
   otherwise be accessible to the malicious site, for instance because
   they are topologically restricted (e.g., behind a firewall or NAT).
   In order to prevent this form of attack as well as cross-protocol
   attacks, it is important to require that the target of traffic
   explicitly consent to receiving the traffic in question.  Until that
   consent has been verified for a given endpoint, traffic other than
   the consent handshake MUST NOT be sent to that endpoint.

   Note that consent verification is not sufficient to prevent overuse
   of network resources.  Because WebRTC allows for a Web site to
   create data flows between two browser instances without user
   consent, it is possible for a malicious site to chew up a
   significant amount of a user's bandwidth without incurring
   significant costs of its own, simply by setting up such a channel to
   another user.  However, as a practical matter there are a large
   number of Web sites which can act as data sources, so an attacker
   can already consume downlink bandwidth with existing Web APIs.
   Nevertheless, this potential DoS vector reinforces the need for
   adequate congestion control for WebRTC protocols to ensure that they
   play fair with other demands on the user's bandwidth.
4.2.1.  ICE

   Verifying receiver consent requires some sort of explicit handshake,
   but conveniently we already need one in order to do NAT hole-
   punching.  ICE [RFC8445] includes a handshake designed to verify
   that the receiving element wishes to receive traffic from the
   sender.  It is important to remember here that the site initiating
   ICE is presumed malicious; in order for the handshake to be secure,
   the receiving element MUST demonstrate receipt/knowledge of some
   value not available to the site (thus preventing the site from
   forging responses).  In order to achieve this objective with ICE,
   the STUN transaction IDs must be generated by the browser and MUST
   NOT be made available to the initiating script, even via a
   diagnostic interface.  Verifying receiver consent also requires
   verifying that the receiver wants to receive traffic from a
   particular sender, and at this time; for example, a malicious site
   may simply attempt ICE to known servers that are using ICE for other
   sessions.  ICE provides this verification as well, by using the STUN
   credentials as a form of per-session shared secret.  Those
   credentials are known to the Web application, but would also need to
   be known and used by the STUN-receiving element to be useful.

   There also needs to be some mechanism for the browser to verify that
   the target of the traffic continues to wish to receive it.  Because
   ICE keepalives are indications (to which no response is returned),
   they will not work here.  [RFC7675] describes the mechanism for
   providing consent freshness.
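   The per-session shared secrets in question are the ICE username
   fragment and password, which (unlike the STUN transaction IDs) do
   appear in the SDP and are therefore visible to the application, as
   the following TypeScript sketch shows (an async context is assumed):

      // a=ice-ufrag/a=ice-pwd are per-session credentials carried in
      // the SDP; the STUN transaction IDs used for connectivity checks
      // are generated inside the browser and never exposed here.
      const pc = new RTCPeerConnection();
      pc.createDataChannel("probe");  // ensure something is negotiated
      const offer = await pc.createOffer();
      const iceCredentials = (offer.sdp ?? "")
        .split("\r\n")
        .filter((line) =>
          line.startsWith("a=ice-ufrag:") || line.startsWith("a=ice-pwd:"));
      console.log(iceCredentials);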
4.2.2.  Masking

   Once consent is verified, there still is some concern about
   misinterpretation attacks as described by Huang et al. [huang-w2sp].
   Where TCP is used, the risk is substantial due to the potential
   presence of transparent proxies; therefore, if TCP is to be used,
   WebSockets-style masking MUST be employed.

   Since DTLS (with the anti-chosen-plaintext mechanisms required by
   TLS 1.1) does not allow the attacker to generate predictable
   ciphertext, there is no need for masking of protocols running over
   DTLS (e.g., SCTP over DTLS, UDP over DTLS, etc.).

   Note that in principle an attacker could exert some control over
   SRTP packets by using a combination of the WebAudio API and
   extremely tight timing control.  The primary risk here seems to be
   carriage of SRTP over TURN TCP.  However, as SRTP packets have an
   extremely characteristic packet header, it seems unlikely that any
   but the most aggressive intermediaries would be confused into
   thinking that another application layer protocol was in use.

4.2.3.  Backward Compatibility

   A requirement to use ICE limits compatibility with legacy non-ICE
   clients.  It seems unsafe to completely remove the requirement for
   some check.  All proposed checks have the common feature that the
   browser sends some message to the candidate traffic recipient and
   refuses to send other traffic until that message has been replied
   to.  The message/reply pair must be generated in such a way that an
   attacker who controls the Web application cannot forge them,
   generally by having the message contain some secret value that must
   be incorporated into the reply (e.g., echoed or hashed in).  Non-ICE
   candidates for this role (in cases where the legacy endpoint has a
   public address) include:

   o  STUN checks without using ICE (i.e., the non-WebRTC endpoint sets
      up a STUN responder).

   o  Use of RTCP as an implicit reachability check.

   In the RTCP approach, the WebRTC endpoint is allowed to send a
   limited number of RTP packets prior to receiving consent.  This
   allows a short window of attack.  In addition, some legacy endpoints
   do not support RTCP, so this is a much more expensive solution for
   such endpoints, for which it would likely be easier to implement
   ICE.  For these two reasons, an RTCP-based approach does not seem to
   address the security issue satisfactorily.

   In the STUN approach, the WebRTC endpoint is able to verify that the
   recipient is running some kind of STUN endpoint, but unless the STUN
   responder is integrated with the ICE username/password establishment
   system, the WebRTC endpoint cannot verify that the recipient
   consents to this particular call.  This may be an issue if existing
   STUN servers are operated at addresses that are not able to handle
   bandwidth-based attacks.  Thus, this approach does not seem
   satisfactory either.

   If the systems are tightly integrated (i.e., the STUN endpoint
   responds with responses authenticated with ICE credentials), then
   this issue does not exist.  However, such a design is very close to
   an ICE-Lite implementation (indeed, arguably is one).  An
   intermediate approach would be to have a STUN extension that
   indicated that one was responding to WebRTC checks but not computing
   integrity checks based on the ICE credentials.  This would allow the
   use of standalone STUN servers without the risk of confusing them
   with legacy STUN servers.  If a non-ICE legacy solution is needed,
   then this is probably the best choice.

   Once initial consent is verified, we also need to verify continuing
   consent, in order to avoid attacks where two people briefly share an
   IP (e.g., behind a NAT in an Internet cafe) and the attacker
   arranges for a large, unstoppable, traffic flow to the network and
   then leaves.  The appropriate technologies here are fairly similar
   to those for initial consent, though they are perhaps weaker, since
   the threat is less severe.

4.2.4.  IP Location Privacy

   Note that as soon as the callee sends their ICE candidates, the
   caller learns the callee's IP addresses.  The callee's server
   reflexive address reveals a lot of information about the callee's
   location.  In order to avoid tracking, implementations may wish to
   suppress the start of ICE negotiation until the callee has answered.
   In addition, either side may wish to hide their location from the
   other side entirely by forcing all traffic through a TURN server.

   In ordinary operation, the site learns the browser's IP address,
   though it may be hidden via mechanisms like Tor
   [http://www.torproject.org] or a VPN.  However, because sites can
   cause the browser to provide IP addresses, this provides a mechanism
   for sites to learn about the user's network environment even if the
   user is behind a VPN that masks their IP address.  Implementations
   may wish to provide settings which suppress all non-VPN candidates
   if the user is on certain kinds of VPN, especially privacy-oriented
   systems such as Tor.
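   An application can already request the relay-only behavior described
   above via the ICE transport policy, as sketched below (the TURN
   server coordinates are placeholders):

      // With iceTransportPolicy "relay", only relayed candidates are
      // gathered, so the remote peer sees the TURN server's address
      // rather than any host or server reflexive address.
      const pc = new RTCPeerConnection({
        iceServers: [{
          urls: "turn:turn.example.com:3478",  // placeholder TURN server
          username: "user",
          credential: "secret",
        }],
        iceTransportPolicy: "relay",
      });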
4.3.  Communications Security

   Finally, we consider a problem familiar from the SIP world:
   communications security.  For obvious reasons, it MUST be possible
   for the communicating parties to establish a channel which is secure
   against both message recovery and message modification.  (See
   [RFC5479] for more details.)  This service must be provided for both
   data and voice/video.  Ideally the same security mechanisms would be
   used for both types of content.  Technology for providing this
   service (for instance, SRTP [RFC3711], DTLS [RFC6347], and DTLS-SRTP
   [RFC5763]) is well understood.  However, we must examine this
   technology in the WebRTC context, where the threat model is somewhat
   different.

   In general, it is important to understand that unlike a conventional
   SIP proxy, the calling service (i.e., the Web server) controls not
   only the channel between the communicating endpoints but also the
   application running on the user's browser.  While in principle it is
   possible for the browser to cut the calling service out of the loop
   and directly present trusted information (and perhaps get consent),
   practice in modern browsers is to avoid this whenever possible.
   "In-flow" modal dialogs which require the user to consent to
   specific actions are particularly disfavored, as human factors
   research indicates that unless they are made extremely invasive,
   users simply agree to them without actually consciously giving
   consent [abarth-rtcweb].  Thus, nearly all the UI will necessarily
   be rendered by the browser but under control of the calling service.
   This likely includes the peer's identity information, which, after
   all, is only meaningful in the context of some calling service.

   This limitation does not mean that preventing attack by the calling
   service is completely hopeless.  However, we need to distinguish
   between two classes of attack:

   Retrospective compromise of calling service.
      The calling service is non-malicious during a call but is
      subsequently compromised and wishes to attack an older call
      (often called a "passive attack").

   During-call attack by calling service.
      The calling service is compromised during the call it wishes to
      attack (often called an "active attack").

   Providing security against the former type of attack is practical
   using the techniques discussed in Section 4.3.1.  However, it is
   extremely difficult to prevent a trusted but malicious calling
   service from actively attacking a user's calls, either by mounting a
   Man-in-the-Middle (MITM) attack or by diverting them entirely.
   (Note that this attack applies equally to a network attacker if
   communications to the calling service are not secured.)  We discuss
   some potential approaches and why they are likely to be impractical
   in Section 4.3.2.
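   In DTLS-SRTP, the media keys are established by a DTLS handshake and
   authenticated via certificate fingerprints carried in the signaling.
   The sketch below shows where that fingerprint surfaces, which is
   precisely why whoever controls the signaling controls who gets
   authenticated (an async context is assumed):

      // The a=fingerprint attribute binds the DTLS handshake to the
      // signaling: whoever can rewrite this value chooses which
      // certificate the peer will accept.
      const pc = new RTCPeerConnection();
      pc.createDataChannel("probe");
      const offer = await pc.createOffer();
      const fingerprint = (offer.sdp ?? "")
        .split("\r\n")
        .find((line) => line.startsWith("a=fingerprint:"));
      console.log(fingerprint);  // e.g., "a=fingerprint:sha-256 19:E2:..."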
4.3.1.  Protecting Against Retrospective Compromise

   In a retrospective attack, the calling service was uncompromised
   during the call, but an attacker subsequently wants to recover the
   content of the call.  We assume that the attacker has access to the
   protected media stream as well as having full control of the calling
   service.

   If the calling service has access to the traffic keying material (as
   in SDES [RFC4568]), then retrospective attack is trivial.  This form
   of attack is particularly serious in the Web context because it is
   standard practice in Web services to run extensive logging and
   monitoring.  Thus, it is highly likely that if the traffic key is
   part of any HTTP request it will be logged somewhere and thus
   subject to subsequent compromise.  It is this consideration that
   makes an automatic, public key-based key exchange mechanism
   imperative for WebRTC (this is a good idea for any communications
   security system), and this mechanism SHOULD provide perfect forward
   secrecy (PFS).  The signaling channel/calling service can be used to
   authenticate this mechanism.

   In addition, if end-to-end keying is in use, the system MUST NOT
   provide any APIs either to extract long-term keying material or to
   directly access any stored traffic keys.  Otherwise, an attacker who
   subsequently compromised the calling service might be able to use
   those APIs to recover the traffic keys and thus compromise the
   traffic.

4.3.2.  Protecting Against During-Call Attack

   Protecting against attacks during a call is a more difficult
   proposition.  Even if the calling service cannot directly access
   keying material (as recommended in the previous section), it can
   simply mount a man-in-the-middle attack on the connection, telling
   Alice that she is calling Bob and Bob that he is calling Alice,
   while in fact the calling service is acting as a calling bridge and
   capturing all the traffic.  Protecting against this form of attack
   requires positive authentication of the remote endpoint such as
   explicit out-of-band key verification (e.g., by a fingerprint) or a
   third-party identity service as described in
   [I-D.ietf-rtcweb-security-arch].

4.3.2.1.  Key Continuity

   One natural approach is to use "key continuity".  While a malicious
   calling service can present any identity it chooses to the user, it
   cannot produce a private key that maps to a given public key.  Thus,
   it is possible for the browser to note a given user's public key and
   generate an alarm whenever that user's key changes.  SSH [RFC4251]
   uses a similar technique.  (Note that the need to avoid explicit
   user consent on every call precludes the browser requiring an
   immediate manual check of the peer's key.)

   Unfortunately, this sort of key continuity mechanism is far less
   useful in the WebRTC context.  First, much of the virtue of WebRTC
   (and any Web application) is that it is not bound to a particular
   piece of client software.  Thus, it will be not only possible but
   routine for a user to use multiple browsers on different computers,
   which will of course have different keying material (SACRED
   [RFC3760] notwithstanding).  Thus, users will frequently be alerted
   to key mismatches which are in fact completely legitimate, with the
   result that they are trained to simply click through them.  As it is
   known that users routinely will click through far more dire warnings
   [cranor-wolf], it seems extremely unlikely that any key continuity
   mechanism will be effective rather than simply annoying.

   Moreover, it is trivial to bypass even this kind of mechanism.
   Recall that unlike the case of SSH, the browser never directly gets
   the peer's identity from the user.  Rather, it is provided by the
   calling service.  Even enabling a mechanism of this type would
   require an API to allow the calling service to tell the browser
   "this is a call to user X".  All the calling service needs to do to
   avoid triggering a key continuity warning is to tell the browser
   that "this is a call to user Y" where Y is confusable with X.  Even
   if the user actually checks the other side's name (which all
   available evidence indicates is unlikely), this would require (a)
   the browser to have a trusted UI to provide the name and (b) the
   user to not be fooled by similar-appearing names.
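   To make the discussion concrete, the following purely illustrative
   sketch shows the kind of check such a mechanism would perform.  It
   assumes the application has extracted the peer's certificate
   fingerprint (e.g., from the remote description) and keys it by the
   service-asserted peer identity; as discussed above, that asserted
   identity is itself supplied by the possibly malicious calling
   service, which is the core weakness.

      // Illustrative application-level continuity check, not a browser
      // facility: remember the fingerprint seen for each asserted peer
      // identity and flag changes.
      function keyContinuityCheck(
        peerId: string,
        fingerprint: string
      ): "new" | "match" | "changed" {
        const key = `peer-fingerprint:${peerId}`;
        const previous = localStorage.getItem(key);
        if (previous === null) {
          localStorage.setItem(key, fingerprint);
          return "new";
        }
        return previous === fingerprint ? "match" : "changed";
      }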
4.3.2.2.  Short Authentication Strings

   ZRTP [RFC6189] uses a "short authentication string" (SAS) which is
   derived from the key agreement protocol.  This SAS is designed to be
   compared by the users (e.g., read aloud over the voice channel or
   transmitted via an out-of-band channel) and, if confirmed by both
   sides, precludes a MITM attack.  The intention is that the SAS is
   used once and then key continuity (though a different mechanism from
   that discussed above) is used thereafter.

   Unfortunately, the SAS does not offer a practical solution to the
   problem of a compromised calling service.  "Voice conversion"
   systems, which modify voice from one speaker to make it sound like
   another, are an active area of research.  These systems are already
   good enough to fool both automatic recognition systems
   [farus-conversion] and humans [kain-conversion] in many cases, and
   are of course likely to improve in the future, especially in an
   environment where the user just wants to get on with the phone call.
   Thus, even if SAS is effective today, it is likely not to be so for
   much longer.

   Additionally, it is unclear that users will actually use an SAS.  As
   discussed above, the browser UI constraints preclude requiring the
   SAS exchange prior to completing the call and so it must be
   voluntary; at most the browser will provide some UI indicator that
   the SAS has not yet been checked.  However, it is well known that
   when faced with optional security mechanisms, many users simply
   ignore them [whitten-johnny].

   Once users have checked the SAS once, key continuity is required to
   avoid them needing to check it on every call.  However, this is
   problematic for reasons indicated in Section 4.3.2.1.  In principle
   it is of course possible to render a different UI element to
   indicate that calls are using an unauthenticated set of keying
   material (recall that the attacker can just present a slightly
   different name so that the attack shows the same UI as a call to a
   new device or to someone you haven't called before), but as a
   practical matter, users simply ignore such indicators even in the
   rather more dire case of mixed content warnings.
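   For illustration only, one plausible way to derive such a short
   comparison string from the session's keying material is to hash the
   two endpoints' DTLS certificate fingerprints, as sketched below;
   ZRTP's actual derivation is different and is specified in [RFC6189].

      // Hash both fingerprints in a canonical order and render four
      // decimal digits for the users to compare out of band.
      async function shortAuthString(
        localFp: string,
        remoteFp: string
      ): Promise<string> {
        const input = new TextEncoder().encode(
          [localFp, remoteFp].sort().join("|"));
        const digest = new Uint8Array(
          await crypto.subtle.digest("SHA-256", input));
        const value = ((digest[0] << 8) | digest[1]) % 10000;
        return value.toString().padStart(4, "0");
      }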
4.3.2.3.  Third Party Identity

   The conventional approach to providing communications identity has
   of course been to have some third party identity system (e.g., PKI)
   to authenticate the endpoints.  Such mechanisms have proven to be
   too cumbersome for use by typical users (and nearly too cumbersome
   for administrators).  However, a new generation of Web-based
   identity providers (BrowserID, Federated Google Login, Facebook
   Connect, OAuth [RFC6749], OpenID [OpenID], WebFinger [RFC7033]) has
   recently been developed, using Web technologies to provide
   lightweight (from the user's perspective) third-party authenticated
   transactions.  It is possible to use systems of this type to
   authenticate WebRTC calls, linking them to existing user notions of
   identity (e.g., Facebook adjacencies).  Specifically, the
   third-party identity system is used to bind the user's identity to
   cryptographic keying material which is then used to authenticate the
   calling endpoints.  Calls which are authenticated in this fashion
   are naturally resistant even to active MITM attack by the calling
   site.

   Note that there is one special case in which PKI-style certificates
   do provide a practical solution: calls from end-users to large
   sites.  For instance, if you are making a call to Amazon.com, then
   Amazon can easily get a certificate to authenticate their media
   traffic, just as they get one to authenticate their Web traffic.
   This does not provide additional security value in cases in which
   the calling site and the media peer are one and the same, but might
   be useful in cases in which third parties (e.g., ad networks or
   retailers) arrange for calls but do not participate in them.

4.3.2.4.  Page Access to Media

   Verifying the identity of the far media endpoint is a necessary but
   not sufficient condition for providing media security.  In WebRTC,
   media flows are rendered into HTML5 MediaStreams which can be
   manipulated by the calling site.  Obviously, if the site can modify
   or view the media, then the user is not getting the level of
   assurance they would expect from being able to authenticate their
   peer.  In many cases, this is acceptable because the user values
   site-based special effects over complete security from the site.
   However, there are also cases where users wish to know that the site
   cannot interfere.  In order to facilitate that, it will be necessary
   to provide features whereby the site can verifiably give up access
   to the media streams.  This verification must be possible both from
   the local side and the remote side.  That is, I must be able to
   verify that the person I am calling has engaged a secure media mode
   (see Section 4.3.3).  In order to achieve this it will be necessary
   to cryptographically bind an indication of the local media access
   policy into the cryptographic authentication procedures detailed in
   the previous sections.

4.3.3.  Malicious Peers

   One class of attack that we do not generally try to prevent is
   malicious peers.  For instance, no matter what confidentiality
   measures you employ, the person you are talking to might record the
   call and publish it on the Internet.  Similarly, we do not attempt
   to prevent them from using voice or video processing technology to
   hide or change their appearance.  While technologies (DRM, etc.) do
   exist to attempt to address these issues, they are generally not
   compatible with open systems and WebRTC does not address them.

   Similarly, we make no attempt to prevent prank calling or other
   unwanted calls.  In general, this is in the scope of the calling
   site, though because WebRTC does offer some forms of strong
   authentication, that may be useful as part of a defense against such
   attacks.

4.4.  Privacy Considerations

4.4.1.  Correlation of Anonymous Calls

   While persistent endpoint identifiers can be a useful security
   feature (see Section 4.3.2.1), they can also represent a privacy
   threat in settings where the user wishes to be anonymous.  WebRTC
   provides a number of possible persistent identifiers, such as DTLS
   certificates (if they are reused between connections) and RTCP
   CNAMEs (if generated according to [RFC6222] rather than the
   privacy-preserving mode of [RFC7022]).  In order to prevent this
   type of correlation, browsers need to provide mechanisms to reset
   these identifiers (e.g., with the same lifetime as cookies).
   Moreover, the API should provide mechanisms to allow sites intended
   for anonymous calling to force the minting of fresh identifiers.  In
   addition, IP addresses can be a source of call linkage
   [I-D.ietf-rtcweb-ip-handling].
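   The existing API already permits one form of this: a site can mint a
   fresh DTLS certificate for each call rather than reusing a
   persistent one, as in the sketch below (an async context is
   assumed):

      // A fresh per-call certificate avoids reusing a persistent one
      // that could correlate otherwise-anonymous calls.
      const cert = await RTCPeerConnection.generateCertificate({
        name: "ECDSA",
        namedCurve: "P-256",
      } as EcKeyGenParams);
      const pc = new RTCPeerConnection({ certificates: [cert] });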
4.4.2.  Browser Fingerprinting

   Any new set of API features adds a risk of browser fingerprinting,
   and WebRTC is no exception.  Specifically, sites can use the
   presence or absence of specific devices as a browser fingerprint.
   In general, the API needs to be balanced between functionality and
   the incremental fingerprint risk.  See [Fingerprinting].

5.  Security Considerations

   This entire document is about security.

6.  Acknowledgements

   Bernard Aboba, Harald Alvestrand, Dan Druta, Cullen Jennings, Alan
   Johnston, Hadriel Kaplan (S 4.2.1), Matthew Kaufman, Martin Thomson,
   Magnus Westerlund.

7.  IANA Considerations

   There are no IANA considerations.

8.  Changes Since -04

   o  Replaced RTCWEB and RTC-Web with WebRTC, except when referring to
      the IETF WG.

   o  Removed discussion of the IFRAMEd advertisement case, since we
      decided not to treat it specially.

   o  Added a privacy considerations section.

   o  Significant edits to the SAS section to reflect Alan Johnston's
      comments.

   o  Added some discussion of IP location privacy and Tor.

   o  Updated the "communications consent" section to reflect
      draft-ietf.

   o  Added a section about "malicious peers".

   o  Added a section describing screen sharing threats.

   o  Assorted editorial changes.

9.  References

9.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

9.2.  Informative References

   [abarth-rtcweb]
              Barth, A., "Prompting the user is security failure",
              RTC-Web Workshop, September 2010.

   [CORS]     van Kesteren, A., "Cross-Origin Resource Sharing",
              January 2014.

   [cranor-wolf]
              Sunshine, J., Egelman, S., Almuhimedi, H., Atri, N., and
              L. Cranor, "Crying Wolf: An Empirical Study of SSL
              Warning Effectiveness", Proceedings of the 18th USENIX
              Security Symposium, August 2009.
   [farus-conversion]
              Farrus, M., Erro, D., and J. Hernando, "Speaker
              Recognition Robustness to Voice Conversion",
              January 2008.

   [finer-grained]
              Barth, A. and C. Jackson, "Beware of Finer-Grained
              Origins", W2SP, July 2008.

   [Fingerprinting]
              W3C, "Fingerprinting Guidance for Web Specification
              Authors (Draft)", November 2013.

   [huang-w2sp]
              Huang, L-S., Chen, E., Barth, A., Rescorla, E., and C.
              Jackson, "Talking to Yourself for Fun and Profit", W2SP,
              May 2011.

   [I-D.ietf-rtcweb-ip-handling]
              Uberti, J., "WebRTC IP Address Handling Requirements",
              draft-ietf-rtcweb-ip-handling-11 (work in progress),
              November 2018.

   [I-D.ietf-rtcweb-overview]
              Alvestrand, H., "Overview: Real Time Protocols for
              Browser-based Applications", draft-ietf-rtcweb-
              overview-19 (work in progress), November 2017.

   [I-D.ietf-rtcweb-security-arch]
              Rescorla, E., "WebRTC Security Architecture", draft-ietf-
              rtcweb-security-arch-17 (work in progress),
              November 2018.

   [kain-conversion]
              Kain, A. and M. Macon, "Design and Evaluation of a Voice
              Conversion Algorithm based on Spectral Envelope Mapping
              and Residual Prediction", Proceedings of ICASSP,
              May 2001.

   [OpenID]   Sakimura, N., Bradley, J., Jones, M., de Medeiros, B.,
              and C. Mortimore, "OpenID Connect Core 1.0",
              November 2014.

   [RFC2818]  Rescorla, E., "HTTP Over TLS", RFC 2818,
              DOI 10.17487/RFC2818, May 2000,
              <https://www.rfc-editor.org/info/rfc2818>.

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
              A., Peterson, J., Sparks, R., Handley, M., and E.
              Schooler, "SIP: Session Initiation Protocol", RFC 3261,
              DOI 10.17487/RFC3261, June 2002,
              <https://www.rfc-editor.org/info/rfc3261>.

   [RFC3552]  Rescorla, E. and B. Korver, "Guidelines for Writing RFC
              Text on Security Considerations", BCP 72, RFC 3552,
              DOI 10.17487/RFC3552, July 2003,
              <https://www.rfc-editor.org/info/rfc3552>.

   [RFC3711]  Baugher, M., McGrew, D., Naslund, M., Carrara, E., and K.
              Norrman, "The Secure Real-time Transport Protocol
              (SRTP)", RFC 3711, DOI 10.17487/RFC3711, March 2004,
              <https://www.rfc-editor.org/info/rfc3711>.

   [RFC3760]  Gustafson, D., Just, M., and M. Nystrom, "Securely
              Available Credentials (SACRED) - Credential Server
              Framework", RFC 3760, DOI 10.17487/RFC3760, April 2004,
              <https://www.rfc-editor.org/info/rfc3760>.

   [RFC4251]  Ylonen, T. and C. Lonvick, Ed., "The Secure Shell (SSH)
              Protocol Architecture", RFC 4251, DOI 10.17487/RFC4251,
              January 2006, <https://www.rfc-editor.org/info/rfc4251>.

   [RFC4568]  Andreasen, F., Baugher, M., and D. Wing, "Session
              Description Protocol (SDP) Security Descriptions for
              Media Streams", RFC 4568, DOI 10.17487/RFC4568,
              July 2006, <https://www.rfc-editor.org/info/rfc4568>.

   [RFC5479]  Wing, D., Ed., Fries, S., Tschofenig, H., and F. Audet,
              "Requirements and Analysis of Media Security Management
              Protocols", RFC 5479, DOI 10.17487/RFC5479, April 2009,
              <https://www.rfc-editor.org/info/rfc5479>.

   [RFC5763]  Fischl, J., Tschofenig, H., and E. Rescorla, "Framework
              for Establishing a Secure Real-time Transport Protocol
              (SRTP) Security Context Using Datagram Transport Layer
              Security (DTLS)", RFC 5763, DOI 10.17487/RFC5763,
              May 2010, <https://www.rfc-editor.org/info/rfc5763>.

   [RFC6189]  Zimmermann, P., Johnston, A., Ed., and J. Callas, "ZRTP:
              Media Path Key Agreement for Unicast Secure RTP",
              RFC 6189, DOI 10.17487/RFC6189, April 2011,
              <https://www.rfc-editor.org/info/rfc6189>.

   [RFC6222]  Begen, A., Perkins, C., and D. Wing, "Guidelines for
              Choosing RTP Control Protocol (RTCP) Canonical Names
              (CNAMEs)", RFC 6222, DOI 10.17487/RFC6222, April 2011,
              <https://www.rfc-editor.org/info/rfc6222>.
   [RFC6347]  Rescorla, E. and N. Modadugu, "Datagram Transport Layer
              Security Version 1.2", RFC 6347, DOI 10.17487/RFC6347,
              January 2012, <https://www.rfc-editor.org/info/rfc6347>.

   [RFC6454]  Barth, A., "The Web Origin Concept", RFC 6454,
              DOI 10.17487/RFC6454, December 2011,
              <https://www.rfc-editor.org/info/rfc6454>.

   [RFC6455]  Fette, I. and A. Melnikov, "The WebSocket Protocol",
              RFC 6455, DOI 10.17487/RFC6455, December 2011,
              <https://www.rfc-editor.org/info/rfc6455>.

   [RFC6749]  Hardt, D., Ed., "The OAuth 2.0 Authorization Framework",
              RFC 6749, DOI 10.17487/RFC6749, October 2012,
              <https://www.rfc-editor.org/info/rfc6749>.

   [RFC7022]  Begen, A., Perkins, C., Wing, D., and E. Rescorla,
              "Guidelines for Choosing RTP Control Protocol (RTCP)
              Canonical Names (CNAMEs)", RFC 7022,
              DOI 10.17487/RFC7022, September 2013,
              <https://www.rfc-editor.org/info/rfc7022>.

   [RFC7033]  Jones, P., Salgueiro, G., Jones, M., and J. Smarr,
              "WebFinger", RFC 7033, DOI 10.17487/RFC7033,
              September 2013, <https://www.rfc-editor.org/info/rfc7033>.

   [RFC7675]  Perumal, M., Wing, D., Ravindranath, R., Reddy, T., and
              M. Thomson, "Session Traversal Utilities for NAT (STUN)
              Usage for Consent Freshness", RFC 7675,
              DOI 10.17487/RFC7675, October 2015,
              <https://www.rfc-editor.org/info/rfc7675>.

   [RFC8445]  Keranen, A., Holmberg, C., and J. Rosenberg, "Interactive
              Connectivity Establishment (ICE): A Protocol for Network
              Address Translator (NAT) Traversal", RFC 8445,
              DOI 10.17487/RFC8445, July 2018,
              <https://www.rfc-editor.org/info/rfc8445>.

   [SWF]      Adobe, "SWF File Format Specification Version 19",
              April 2013.

   [whitten-johnny]
              Whitten, A. and J. Tygar, "Why Johnny Can't Encrypt: A
              Usability Evaluation of PGP 5.0", Proceedings of the 8th
              USENIX Security Symposium, August 1999.

   [XmlHttpRequest]
              van Kesteren, A., "XMLHttpRequest Level 2", January 2012.

Author's Address

   Eric Rescorla
   RTFM, Inc.
   2064 Edgewood Drive
   Palo Alto, CA 94303
   USA

   Phone: +1 650 678 2350
   Email: ekr@rtfm.com