RTC-Web                                                      E. Rescorla
Internet-Draft                                                RTFM, Inc.
Intended status: Standards Track                            July 5, 2019
Expires: January 6, 2020

                  Security Considerations for WebRTC
                     draft-ietf-rtcweb-security-12

Abstract

   WebRTC is a protocol suite for use with real-time applications that
   can be deployed in browsers - "real time communication on the Web".
   This document defines the WebRTC threat model and analyzes the
   security threats of WebRTC in that model.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 6, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  The Browser Threat Model
     3.1.  Access to Local Resources
     3.2.  Same-Origin Policy
     3.3.  Bypassing SOP: CORS, WebSockets, and consent to communicate
   4.  Security for WebRTC Applications
     4.1.  Access to Local Devices
       4.1.1.  Threats from Screen Sharing
       4.1.2.  Calling Scenarios and User Expectations
         4.1.2.1.  Dedicated Calling Services
         4.1.2.2.  Calling the Site You're On
       4.1.3.  Origin-Based Security
       4.1.4.  Security Properties of the Calling Page
     4.2.  Communications Consent Verification
       4.2.1.  ICE
       4.2.2.  Masking
       4.2.3.  Backward Compatibility
       4.2.4.  IP Location Privacy
     4.3.  Communications Security
       4.3.1.  Protecting Against Retrospective Compromise
       4.3.2.  Protecting Against During-Call Attack
         4.3.2.1.  Key Continuity
         4.3.2.2.  Short Authentication Strings
         4.3.2.3.  Third Party Identity
         4.3.2.4.  Page Access to Media
       4.3.3.  Malicious Peers
     4.4.  Privacy Considerations
       4.4.1.  Correlation of Anonymous Calls
       4.4.2.  Browser Fingerprinting
   5.  Security Considerations
   6.  Acknowledgements
   7.  IANA Considerations
   8.  Changes Since -04
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Author's Address

1.  Introduction

   The Real-Time Communications on the Web (RTCWEB) working group has
   standardized protocols for real-time communications between Web
   browsers, generally called "WebRTC" [I-D.ietf-rtcweb-overview].  The
   major use cases for WebRTC technology are real-time audio and/or
   video calls, Web conferencing, and direct data transfer.  Unlike
   most conventional real-time systems (e.g., SIP-based [RFC3261]
   softphones), WebRTC communications are directly controlled by some
   Web server.  A simple case is shown below.

                        +----------------+
                        |                |
                        |   Web Server   |
                        |                |
                        +----------------+
                            ^        ^
                           /          \
                   HTTP   /            \   HTTP
                   or    /              \   or
              WebSockets /               \ WebSockets
                        v                 v
                   JS API                  JS API
             +-----------+               +-----------+
             |           |     Media     |           |
             |  Browser  |<------------->|  Browser  |
             |           |               |           |
             +-----------+               +-----------+
                 Alice                       Bob

                   Figure 1: A simple WebRTC system

   In the system shown in Figure 1, Alice and Bob both have WebRTC-
   enabled browsers and they visit some Web server which operates a
   calling service.  Each of their browsers exposes standardized
   JavaScript calling APIs (implemented as browser built-ins) which are
   used by the Web server to set up a call between Alice and Bob.  The
   Web server also serves as the signaling channel to transport control
   messages between the browsers.  While this system is topologically
   similar to a conventional SIP-based system (with the Web server
   acting as the signaling service and browsers acting as softphones),
   control has moved to the central Web server; the browser simply
   provides API points that are used by the calling service.  As with
   any Web application, the Web server can move logic between the
   server and JavaScript in the browser, but regardless of where the
   code is executing, it is ultimately under control of the server.

   It should be immediately apparent that this type of system poses new
   security challenges beyond those of a conventional VoIP system.  In
   particular, it needs to contend with malicious calling services.
   For example, if the calling service can cause the browser to make a
   call at any time to any callee of its choice, then this facility can
   be used to bug a user's computer without their knowledge, simply by
   placing a call to some recording service.  More subtly, if the
   exposed APIs allow the server to instruct the browser to send
   arbitrary content, then they can be used to bypass firewalls or
   mount denial of service attacks.  Any successful system will need to
   be resistant to this and other attacks.

   A companion document [I-D.ietf-rtcweb-security-arch] describes a
   security architecture intended to address the issues raised in this
   document.
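
   To make the division of labor concrete, the following minimal sketch
   shows the general shape of the JavaScript a calling service might
   run in the browser.  It is purely illustrative: the signaling
   function (sendToServer()) is a hypothetical placeholder for whatever
   HTTP- or WebSockets-based channel the service actually uses.

      // Illustrative sketch only.  The signaling transport
      // (sendToServer) is a hypothetical application-defined channel.
      const pc = new RTCPeerConnection();

      // The browser gathers ICE candidates (Section 4.2.1); the site
      // merely relays them to the peer over its signaling channel.
      pc.onicecandidate = (e) => {
        if (e.candidate) sendToServer({candidate: e.candidate});
      };

      async function call() {
        // Device access is consent-gated by the browser (Section 4.1).
        const stream = await navigator.mediaDevices.getUserMedia(
            {audio: true, video: true});
        for (const track of stream.getTracks()) {
          pc.addTrack(track, stream);
        }
        // The offer/answer exchange transits the calling service,
        // which can therefore observe and modify it (Section 4.3).
        await pc.setLocalDescription(await pc.createOffer());
        sendToServer({offer: pc.localDescription});
      }

   Note that every step is driven by the site's script; the browser's
   role is to enforce the consent and security checks described in the
   rest of this document.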

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

3.  The Browser Threat Model

   The security requirements for WebRTC follow directly from the
   requirement that the browser's job is to protect the user.  Huang et
   al. [huang-w2sp] summarize the core browser security guarantee as:

      Users can safely visit arbitrary web sites and execute scripts
      provided by those sites.

   It is important to realize that this includes sites hosting
   arbitrary malicious scripts.  The motivation for this requirement is
   simple: it is trivial for attackers to divert users to sites of
   their choice.  For instance, an attacker can purchase display
   advertisements which direct the user (either automatically or via
   user clicking) to their site, at which point the browser will
   execute the attacker's scripts.  Thus, it is important that it be
   safe to view arbitrarily malicious pages.  Of course, browsers
   inevitably have bugs which cause them to fall short of this goal,
   but any new WebRTC functionality must be designed with the intent to
   meet this standard.  The remainder of this section provides more
   background on the existing Web security model.

   In this model, then, the browser acts as a Trusted Computing Base
   (TCB) both from the user's perspective and to some extent from the
   server's.  While HTML and JavaScript (JS) provided by the server can
   cause the browser to execute a variety of actions, those scripts
   operate in a sandbox that isolates them both from the user's
   computer and from each other, as detailed below.

   Conventionally, we refer to two kinds of attackers: web attackers,
   who are able to induce you to visit their sites but do not control
   the network, and network attackers, who are able to control your
   network.  Network attackers correspond to the [RFC3552] "Internet
   Threat Model".  Note that in some cases, a network attacker is also
   a web attacker, since transport protocols that do not provide
   integrity protection allow the network to inject traffic as if it
   were any communications peer.  TLS, and HTTPS in particular, protect
   against these attacks, but when analyzing HTTP connections, we must
   assume that traffic is going to the attacker.

3.1.  Access to Local Resources

   While the browser has access to local resources such as keying
   material, files, the camera, and the microphone, it strictly limits
   or forbids web servers from accessing those same resources.  For
   instance, while it is possible to produce an HTML form which will
   allow file upload, a script cannot do so without user consent and in
   fact cannot even suggest a specific file (e.g., /etc/passwd); the
   user must explicitly select the file and consent to its upload.
   [Note: in many cases browsers are explicitly designed to avoid
   dialogs with the semantics of "click here to bypass security
   checks", as extensive research [cranor-wolf] shows that users are
   prone to consent under such circumstances.]

   Similarly, while Flash programs (SWFs) [SWF] can access the camera
   and microphone, they explicitly require that the user consent to
   that access.  In addition, some resources simply cannot be accessed
   from the browser at all.
   For instance, there is no real way to run specific executables
   directly from a script (though the user can of course be induced to
   download executable files and run them).

3.2.  Same-Origin Policy

   Many other resources are accessible but isolated.  For instance,
   while scripts are allowed to make HTTP requests via the
   XMLHttpRequest() API (see [XmlHttpRequest]), those requests are not
   allowed to be made to any server, but rather solely to the same
   ORIGIN from whence the script came [RFC6454] (although CORS [CORS]
   and WebSockets [RFC6455] provide an escape hatch from this
   restriction, as described below).  This SAME ORIGIN POLICY (SOP)
   prevents server A from mounting attacks on server B via the user's
   browser, which protects both the user (e.g., from misuse of his
   credentials) and server B (e.g., from DoS attack).

   More generally, SOP forces scripts from each site to run in their
   own, isolated, sandboxes.  While there are techniques to allow them
   to interact, those interactions generally must be mutually
   consensual (by each site) and are limited to certain channels.  For
   instance, multiple pages/browser panes from the same origin can read
   each other's JS variables, but pages from different origins--or even
   iframes from different origins on the same page--cannot.

3.3.  Bypassing SOP: CORS, WebSockets, and consent to communicate

   While SOP serves an important security function, it also makes it
   inconvenient to write certain classes of applications.  In
   particular, mash-ups, in which a script from origin A uses resources
   from origin B, can only be achieved via a certain amount of hackery.
   The W3C Cross-Origin Resource Sharing (CORS) spec [CORS] is a
   response to this demand.  In CORS, when a script from origin A
   executes what would otherwise be a forbidden cross-origin request,
   the browser instead contacts the target server to determine whether
   it is willing to allow cross-origin requests from A.  If it is so
   willing, the browser then allows the request.  This consent
   verification process is designed to safely allow cross-origin
   requests.

   While CORS is designed to allow cross-origin HTTP requests,
   WebSockets [RFC6455] allows cross-origin establishment of
   transparent channels.  Once a WebSockets connection has been
   established from a script to a site, the script can exchange any
   traffic it likes without being required to frame it as a series of
   HTTP request/response transactions.  As with CORS, a WebSockets
   transaction starts with a consent verification stage to avoid
   allowing scripts to simply send arbitrary data to another origin.

   While consent verification is conceptually simple--just do a
   handshake before you start exchanging the real data--experience has
   shown that designing a correct consent verification system is
   difficult.  In particular, Huang et al. [huang-w2sp] have shown
   vulnerabilities in the existing Java and Flash consent verification
   techniques and in a simplified version of the WebSockets handshake.
   It is especially important to be wary of CROSS-PROTOCOL attacks in
   which the attacking script generates traffic which is acceptable to
   some non-Web protocol state machine.  In order to resist this form
   of attack, WebSockets incorporates a masking technique intended to
   randomize the bits on the wire, thus making it more difficult to
   generate traffic which resembles a given protocol.
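
   To illustrate the masking idea (this is a sketch of the concept, not
   the normative WebSocket algorithm, which is defined in [RFC6455]):
   each client-to-server frame is XORed with a fresh random 4-byte key
   carried in the frame header, so a malicious script cannot choose the
   exact bytes that appear on the wire.

      // Sketch of WebSocket-style masking in JavaScript.  The
      // receiver recovers the payload by XORing with the same key.
      function maskPayload(payload) {  // payload: Uint8Array
        const key = crypto.getRandomValues(new Uint8Array(4));
        const masked = new Uint8Array(payload.length);
        for (let i = 0; i < payload.length; i++) {
          masked[i] = payload[i] ^ key[i % 4];
        }
        return {key, masked};  // the key travels with the frame
      }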

4.  Security for WebRTC Applications

4.1.  Access to Local Devices

   As discussed in Section 1, allowing arbitrary sites to initiate
   calls violates the core Web security guarantee; without some access
   restrictions on local devices, any malicious site could simply bug a
   user.  At minimum, then, it MUST NOT be possible for arbitrary sites
   to initiate calls to arbitrary locations without user consent.  This
   immediately raises the question, however, of what should be the
   scope of user consent.

   In order for the user to make an intelligent decision about whether
   to allow a call (and hence his camera and microphone input to be
   routed somewhere), he must understand either who is requesting
   access, where the media is going, or both.  As detailed below, there
   are two basic conceptual models:

   1.  You are sending your media to entity A because you want to talk
       to entity A (e.g., your mother).

   2.  Entity A (e.g., a calling service) asks to access the user's
       devices with the assurance that it will transfer the media to
       entity B (e.g., your mother).

   In either case, identity is at the heart of any consent decision.
   Moreover, the identity of the party the browser is connecting to is
   all that the browser can meaningfully enforce; if you are calling A,
   A can simply forward the media to C.  Similarly, if you authorize A
   to place a call to B, A can call C instead.  In either case, all the
   browser is able to do is verify and check authorization for whoever
   is controlling where the media goes.  The target of the media can of
   course advertise a security/privacy policy, but this is not
   something that the browser can enforce.  Even so, there are a
   variety of different consent scenarios that motivate different
   technical consent mechanisms.  We discuss these mechanisms in the
   sections below.

   It's important to understand that consent to access local devices is
   largely orthogonal to consent to transmit various kinds of data over
   the network (see Section 4.2).  Consent for device access is largely
   a matter of protecting the user's privacy from malicious sites.  By
   contrast, consent to send network traffic is about preventing the
   user's browser from being used to attack its local network.  Thus,
   we need to ensure communications consent even if the site is not
   able to access the camera and microphone at all (hence the
   WebSockets consent mechanism), and similarly we need to be concerned
   with the site accessing the user's camera and microphone even if the
   data is to be sent back to the site via conventional HTTP-based
   network mechanisms such as HTTP POST.

4.1.1.  Threats from Screen Sharing

   In addition to camera and microphone access, there has been demand
   for screen and/or application sharing functionality.  Unfortunately,
   the security implications of this functionality are much harder for
   users to intuitively analyze than those of camera and microphone
   access.  (See
   http://lists.w3.org/Archives/Public/public-webrtc/2013Mar/0024.html
   for a full analysis.)

   The most obvious threats are simply those of "oversharing".  I.e.,
   the user may believe they are sharing a window when in fact they are
   sharing an application, or may forget they are sharing their whole
   screen, icons, notifications, and all.
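
   Both camera/microphone access and display capture are mediated by
   consent-gated browser APIs.  A minimal sketch (browser JavaScript,
   error handling elided); in each case it is the browser, not the
   page, that prompts the user:

      async function requestCapture() {
        // Camera/microphone: permission is scoped to the requesting
        // origin and may be remembered by the browser.
        const av = await navigator.mediaDevices.getUserMedia(
            {audio: true, video: true});
        // Screen/window/tab capture: the user, not the site, picks
        // which surface is shared.
        const display = await navigator.mediaDevices.getDisplayMedia(
            {video: true});
        return {av, display};
      }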

   Oversharing is already an issue with existing screen sharing
   technologies and is made somewhat worse if a partially trusted site
   is responsible for asking for the resource to be shared, rather than
   having the user propose it.

   A less obvious threat involves the impact of screen sharing on the
   Web security model.  A key part of the Same-Origin Policy is that
   HTML or JS from site A can reference content from site B and cause
   the browser to load it, but (unless explicitly permitted) cannot see
   the result.  However, if a web application from a site is screen
   sharing the browser, then this violates that invariant, with serious
   security consequences.  For example, an attacker site might request
   screen sharing and then briefly open up a new window to the user's
   bank or webmail account, using screen sharing to read the resulting
   displayed content.  A more sophisticated attack would be to open up
   a source view window to a site and use the screen sharing result to
   view anti-cross-site-request-forgery tokens.

   These threats suggest that screen/application sharing might need a
   higher level of user consent than access to the camera or
   microphone.

4.1.2.  Calling Scenarios and User Expectations

   While many calling scenarios are possible, the scenarios discussed
   in this section illustrate many of the difficulties of identifying
   the relevant scope of consent.

4.1.2.1.  Dedicated Calling Services

   The first scenario we consider is a dedicated calling service.  In
   this case, the user has a relationship with a calling site and
   repeatedly makes calls on it.  It is likely that, rather than having
   to give permission for each call, the user will want to give the
   calling service long-term access to the camera and microphone.  This
   is a natural fit for a long-term consent mechanism (e.g., installing
   an app store "application" to indicate permission for the calling
   service).  A variant of the dedicated calling service is a gaming
   site (e.g., a poker site) which hosts a dedicated calling service to
   allow players to call each other.

   With any kind of service where the user may use the same service to
   talk to many different people, there is a question about whether the
   user can know who they are talking to.  If I grant permission to
   calling service A to make calls on my behalf, then I am implicitly
   granting it permission to bug my computer whenever it wants.  This
   suggests another consent model in which a site is authorized to make
   calls but only to certain target entities (identified via media-
   plane cryptographic mechanisms as described in Section 4.3.2 and
   especially Section 4.3.2.3).  Note that the question of consent here
   is related to but distinct from the question of peer identity: I
   might be willing to allow a calling site to in general initiate
   calls on my behalf but still have some calls via that site where I
   can be sure that the site is not listening in.

4.1.2.2.  Calling the Site You're On

   Another simple scenario is calling the site you're actually
   visiting.  The paradigmatic case here is the "click here to talk to
   a representative" windows that appear on many shopping sites.  In
   this case, the user's expectation is that they are calling the site
   they're actually visiting.
   However, it is unlikely that they want to provide a general consent
   to such a site; just because I want some information on a car
   doesn't mean that I want the car manufacturer to be able to activate
   my microphone whenever they please.  Thus, this suggests the need
   for a second consent mechanism where I only grant consent for the
   duration of a given call.  As described in Section 3.1, great care
   must be taken in the design of this interface to avoid the users
   just clicking through.  Note also that the user interface chrome,
   which is the representation through which the user interacts with
   the user agent itself, must clearly display elements showing that
   the call is continuing, in order to avoid attacks where the calling
   site just leaves it up indefinitely but shows a Web UI that implies
   otherwise.

4.1.3.  Origin-Based Security

   Now that we have described the calling scenarios, we can start to
   reason about the security requirements.

   As discussed in Section 3.2, the basic unit of Web sandboxing is the
   origin, and so it is natural to scope consent to origin.
   Specifically, a script from origin A MUST only be allowed to
   initiate communications (and hence to access camera and microphone)
   if the user has specifically authorized access for that origin.  It
   is of course technically possible to have coarser-scoped
   permissions, but because the Web model is scoped to origin, this
   creates a difficult mismatch.

   Arguably, origin is not fine-grained enough.  Consider the situation
   where Alice visits a site and authorizes it to make a single call.
   If consent is expressed solely in terms of origin, then at any
   future visit to that site (including one induced via mash-up or ad
   network), the site can bug Alice's computer, use the computer to
   place bogus calls, etc.  While in principle Alice could grant and
   then revoke the privilege, in practice privileges accumulate; if we
   are concerned about this attack, something else is needed.  There
   are a number of potential countermeasures to this sort of issue.

   Individual Consent

      Ask the user for permission for each call.

   Callee-oriented Consent

      Only allow calls to a given user.

   Cryptographic Consent

      Only allow calls to a given set of peer keying material or to a
      cryptographically established identity.

   Unfortunately, none of these approaches is satisfactory for all
   cases.  As discussed above, individual consent puts the user's
   approval in the UI flow for every call.  Not only does this quickly
   become annoying, but it can train the user to simply click "OK", at
   which point the consent becomes useless.  Thus, while it may be
   necessary to have individual consent in some cases, this is not a
   suitable solution for (for instance) the calling service case.
   Where necessary, in-flow user interfaces must be carefully designed
   to avoid the risk of the user blindly clicking through.

   The other two options are designed to restrict calls to a given
   target.  Callee-oriented consent provided by the calling site would
   not work well because a malicious site can claim that the user is
   calling any user of his choice.  One fix for this is to tie calls to
   a cryptographically established identity.  While not suitable for
   all cases, this approach may be useful for some.
   If we consider the case of advertising, it's not particularly
   convenient to require the advertiser to instantiate an iframe on the
   hosting site just to get permission; a more convenient approach is
   to cryptographically tie the advertiser's certificate to the
   communication directly.  We're still tying permissions to origin
   here, but to the media origin (and/or destination) rather than to
   the Web origin.  [I-D.ietf-rtcweb-security-arch] describes
   mechanisms which facilitate this sort of consent.

   Another case where media-level cryptographic identity makes sense is
   when a user really does not trust the calling site.  For instance, I
   might be worried that the calling service will attempt to bug my
   computer, but I also want to be able to conveniently call my
   friends.  If consent is tied to particular communications endpoints,
   then my risk is limited.  Naturally, it is somewhat challenging to
   design UI primitives which express this sort of policy.  The problem
   becomes even more challenging in multi-user calling cases.

4.1.4.  Security Properties of the Calling Page

   Origin-based security is intended to secure against web attackers.
   However, we must also consider the case of network attackers.
   Consider the case where I have granted permission to a calling
   service by an origin that has the HTTP scheme, e.g.,
   http://calling-service.example.com.  If I ever use my computer on an
   unsecured network (e.g., a hotspot or if my own home wireless
   network is insecure) and browse any HTTP site, then an attacker can
   bug my computer.  The attack proceeds like this:

   1.  I connect to http://anything.example.org/.  Note that this site
       is unaffiliated with the calling service.

   2.  The attacker modifies my HTTP connection to inject an IFRAME (or
       a redirect) to http://calling-service.example.com.

   3.  The attacker forges the response from
       http://calling-service.example.com/ to inject JS to initiate a
       call to himself.

   Note that this attack does not depend on the media being insecure.
   Because the call is to the attacker, it is also encrypted to him.
   Moreover, it need not be executed immediately; the attacker can
   "infect" the origin semi-permanently (e.g., with a web worker or a
   popped-up window that is hidden under the main window) and thus be
   able to bug me long after I have left the infected network.  This
   risk is created by allowing calls at all from a page fetched over
   HTTP.

   Even if calls are only possible from HTTPS [RFC2818] sites, if those
   sites include active content (e.g., JavaScript) from an untrusted
   site, that JavaScript is executed in the security context of the
   page [finer-grained].  This could lead to compromise of a call even
   if the parent page is safe.  Note: this issue is not restricted to
   PAGES which contain untrusted content.  If any page from a given
   origin ever loads JavaScript from an attacker, then it is possible
   for that attacker to infect the browser's notion of that origin
   semi-permanently.

4.2.  Communications Consent Verification

   As discussed in Section 3.3, allowing web applications unrestricted
   network access via the browser introduces the risk of using the
   browser as an attack platform against machines which would not
   otherwise be accessible to the malicious site, for instance because
   they are topologically restricted (e.g., behind a firewall or NAT).
   In order to prevent this form of attack as well as cross-protocol
   attacks, it is important to require that the target of traffic
   explicitly consent to receiving the traffic in question.  Until that
   consent has been verified for a given endpoint, traffic other than
   the consent handshake MUST NOT be sent to that endpoint.

   Note that consent verification is not sufficient to prevent overuse
   of network resources.  Because WebRTC allows for a Web site to
   create data flows between two browser instances without user
   consent, it is possible for a malicious site to chew up a
   significant amount of a user's bandwidth without incurring
   significant costs to himself by setting up such a channel to another
   user.  As a practical matter, however, there are a large number of
   Web sites which can act as data sources, so an attacker can at least
   use downlink bandwidth with existing Web APIs.  Nevertheless, this
   potential DoS vector reinforces the need for adequate congestion
   control for WebRTC protocols to ensure that they play fair with
   other demands on the user's bandwidth.

4.2.1.  ICE

   Verifying receiver consent requires some sort of explicit handshake,
   but conveniently we already need one in order to do NAT hole-
   punching.  Interactive Connectivity Establishment (ICE) [RFC8445]
   includes a handshake designed to verify that the receiving element
   wishes to receive traffic from the sender.  It is important to
   remember here that the site initiating ICE is presumed malicious; in
   order for the handshake to be secure, the receiving element MUST
   demonstrate receipt/knowledge of some value not available to the
   site (thus preventing the site from forging responses).  In order to
   achieve this objective with ICE, the STUN transaction IDs must be
   generated by the browser and MUST NOT be made available to the
   initiating script, even via a diagnostic interface.  Verifying
   receiver consent also requires verifying that the receiver wants to
   receive traffic from a particular sender, and at this time; for
   example, a malicious site may simply attempt ICE to known servers
   that are using ICE for other sessions.  ICE provides this
   verification as well, by using the STUN credentials as a form of
   per-session shared secret.  Those credentials are known to the Web
   application, but would need to also be known and used by the STUN-
   receiving element to be useful.

   There also needs to be some mechanism for the browser to verify that
   the target of the traffic continues to wish to receive it.  Because
   ICE keepalives are indications, they will not work here.  [RFC7675]
   describes the mechanism for providing consent freshness.

4.2.2.  Masking

   Once consent is verified, there still is some concern about
   misinterpretation attacks as described by Huang et al. [huang-w2sp].
   Where TCP is used, the risk is substantial due to the potential
   presence of transparent proxies, and therefore if TCP is to be used,
   then WebSockets-style masking MUST be employed.

   Since DTLS (with the anti-chosen-plaintext mechanisms required by
   TLS 1.1) does not allow the attacker to generate predictable
   ciphertext, there is no need for masking of protocols running over
   DTLS (e.g., SCTP over DTLS, UDP over DTLS, etc.).

   Note that in principle an attacker could exert some control over
   SRTP packets by using a combination of the WebAudio API and
   extremely tight timing control.
   The primary risk here seems to be carriage of SRTP over TURN TCP.
   However, as SRTP packets have an extremely characteristic packet
   header, it seems unlikely that any but the most aggressive
   intermediaries would be confused into thinking that another
   application layer protocol was in use.

4.2.3.  Backward Compatibility

   A requirement to use ICE limits compatibility with legacy non-ICE
   clients.  It seems unsafe to completely remove the requirement for
   some check.  All proposed checks have the common feature that the
   browser sends some message to the candidate traffic recipient and
   refuses to send other traffic until that message has been replied
   to.  The message/reply pair must be generated in such a way that an
   attacker who controls the Web application cannot forge them;
   generally this is done by having the message contain some secret
   value that must be incorporated (e.g., echoed, hashed into, etc.).
   Non-ICE candidates for this role (in cases where the legacy endpoint
   has a public address) include:

   o  STUN checks without using ICE (i.e., the non-WebRTC endpoint sets
      up a STUN responder).

   o  Use of RTCP as an implicit reachability check.

   In the RTCP approach, the WebRTC endpoint is allowed to send a
   limited number of RTP packets prior to receiving consent.  This
   allows a short window of attack.  In addition, some legacy endpoints
   do not support RTCP, so this is a much more expensive solution for
   such endpoints, for which it would likely be easier to implement
   ICE.  For these two reasons, an RTCP-based approach does not seem to
   address the security issue satisfactorily.

   In the STUN approach, the WebRTC endpoint is able to verify that the
   recipient is running some kind of STUN endpoint, but unless the STUN
   responder is integrated with the ICE username/password establishment
   system, the WebRTC endpoint cannot verify that the recipient
   consents to this particular call.  This may be an issue if existing
   STUN servers are operated at addresses that are not able to handle
   bandwidth-based attacks.  Thus, this approach does not seem
   satisfactory either.

   If the systems are tightly integrated (i.e., the STUN endpoint
   responds with responses authenticated with ICE credentials), then
   this issue does not exist.  However, such a design is very close to
   an ICE-Lite implementation (indeed, arguably is one).  An
   intermediate approach would be to have a STUN extension that
   indicated that one was responding to WebRTC checks but not computing
   integrity checks based on the ICE credentials.  This would allow the
   use of standalone STUN servers without the risk of confusing them
   with legacy STUN servers.  If a non-ICE legacy solution is needed,
   then this is probably the best choice.

   Once initial consent is verified, we also need to verify continuing
   consent, in order to avoid attacks where two people briefly share an
   IP address (e.g., behind a NAT in an Internet cafe) and the attacker
   arranges for a large, unstoppable, traffic flow to the network and
   then leaves.  The appropriate technologies here are fairly similar
   to those for initial consent, though they are perhaps weaker since
   the threats are less severe.

4.2.4.  IP Location Privacy

   Note that as soon as the callee sends their ICE candidates, the
   caller learns the callee's IP addresses.  The callee's server-
   reflexive address reveals a lot of information about the callee's
   location.
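
   One mitigation, discussed below, is to force all media through a
   TURN relay so that only the relay's address is exposed to the peer.
   In the W3C API this can be requested via the iceTransportPolicy
   member; a hedged sketch, in which the TURN URI and credentials are
   placeholders:

      // Sketch: allow only relayed candidates, hiding local and
      // server-reflexive addresses from the peer.  The TURN server
      // and credentials below are placeholders.
      const pc = new RTCPeerConnection({
        iceServers: [{
          urls: 'turns:turn.example.com:5349',
          username: 'webrtc-user',
          credential: 'secret'
        }],
        iceTransportPolicy: 'relay'
      });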

   In order to avoid tracking, implementations may also wish to
   suppress the start of ICE negotiation until the callee has answered.
   In addition, either side may wish to hide their location from the
   other side entirely by forcing all traffic through a TURN server, as
   sketched above.

   In ordinary operation, the site learns the browser's IP address,
   though it may be hidden via mechanisms like Tor
   [http://www.torproject.org] or a VPN.  However, because sites can
   cause the browser to provide IP addresses, this provides a mechanism
   for sites to learn about the user's network environment even if the
   user is behind a VPN that masks their IP address.  Implementations
   may wish to provide settings which suppress all non-VPN candidates
   if the user is on certain kinds of VPN, especially privacy-oriented
   systems such as Tor.  See [I-D.ietf-rtcweb-ip-handling] for
   additional information.

4.3.  Communications Security

   Finally, we consider a problem familiar from the SIP world:
   communications security.  For obvious reasons, it MUST be possible
   for the communicating parties to establish a channel which is secure
   against both message recovery and message modification.  (See
   [RFC5479] for more details.)  This service must be provided for both
   data and voice/video.  Ideally the same security mechanisms would be
   used for both types of content.  Technology for providing this
   service (for instance, SRTP [RFC3711], DTLS [RFC6347], and DTLS-SRTP
   [RFC5763]) is well understood.  However, we must examine this
   technology in the WebRTC context, where the threat model is somewhat
   different.

   In general, it is important to understand that unlike a conventional
   SIP proxy, the calling service (i.e., the Web server) controls not
   only the channel between the communicating endpoints but also the
   application running on the user's browser.  While in principle it is
   possible for the browser to cut the calling service out of the loop
   and directly present trusted information (and perhaps get consent),
   practice in modern browsers is to avoid this whenever possible.
   "In-flow" modal dialogs which require the user to consent to
   specific actions are particularly disfavored, as human factors
   research indicates that unless they are made extremely invasive,
   users simply agree to them without actually consciously giving
   consent [abarth-rtcweb].  Thus, nearly all the UI will necessarily
   be rendered by the browser but under control of the calling service.
   This likely includes the peer's identity information, which, after
   all, is only meaningful in the context of some calling service.

   This limitation does not mean that preventing attack by the calling
   service is completely hopeless.  However, we need to distinguish
   between two classes of attack:

   Retrospective compromise of calling service.

      The calling service is non-malicious during a call but
      subsequently is compromised and wishes to attack an older call
      (often called a "passive attack").

   During-call attack by calling service.

      The calling service is compromised during the call it wishes to
      attack (often called an "active attack").

   Providing security against the former type of attack is practical
   using the techniques discussed in Section 4.3.1.
   However, it is extremely difficult to prevent a trusted but
   malicious calling service from actively attacking a user's calls,
   either by mounting a Man-in-the-Middle (MITM) attack or by diverting
   them entirely.  (Note that this attack applies equally to a network
   attacker if communications to the calling service are not secured.)
   We discuss some potential approaches and why they are likely to be
   impractical in Section 4.3.2.

4.3.1.  Protecting Against Retrospective Compromise

   In a retrospective attack, the calling service was uncompromised
   during the call, but an attacker subsequently wants to recover the
   content of the call.  We assume that the attacker has access to the
   protected media stream as well as having full control of the calling
   service.

   If the calling service has access to the traffic keying material (as
   in SDES [RFC4568]), then retrospective attack is trivial.  This form
   of attack is particularly serious in the Web context because it is
   standard practice in Web services to run extensive logging and
   monitoring.  Thus, it is highly likely that if the traffic key is
   part of any HTTP request, it will be logged somewhere and thus
   subject to subsequent compromise.  It is this consideration that
   makes an automatic, public-key-based key exchange mechanism
   imperative for WebRTC (this is a good idea for any communications
   security system), and this mechanism SHOULD provide perfect forward
   secrecy (PFS).  The signaling channel/calling service can be used to
   authenticate this mechanism.

   In addition, if end-to-end keying is in use, the system MUST NOT
   provide any APIs to extract either long-term keying material or to
   directly access any stored traffic keys.  Otherwise, an attacker who
   subsequently compromised the calling service might be able to use
   those APIs to recover the traffic keys and thus compromise the
   traffic.

4.3.2.  Protecting Against During-Call Attack

   Protecting against attacks during a call is a more difficult
   proposition.  Even if the calling service cannot directly access
   keying material (as recommended in the previous section), it can
   simply mount a man-in-the-middle attack on the connection, telling
   Alice that she is calling Bob and Bob that he is calling Alice,
   while in fact the calling service is acting as a calling bridge and
   capturing all the traffic.  Protecting against this form of attack
   requires positive authentication of the remote endpoint such as
   explicit out-of-band key verification (e.g., by a fingerprint) or a
   third-party identity service as described in
   [I-D.ietf-rtcweb-security-arch].

4.3.2.1.  Key Continuity

   One natural approach is to use "key continuity".  While a malicious
   calling service can present any identity it chooses to the user, it
   cannot produce a private key that maps to a given public key.  Thus,
   it is possible for the browser to note a given user's public key and
   generate an alarm whenever that user's key changes.  SSH [RFC4251]
   uses a similar technique.  (Note that the need to avoid explicit
   user consent on every call precludes the browser requiring an
   immediate manual check of the peer's key).

   Unfortunately, this sort of key continuity mechanism is far less
   useful in the WebRTC context.  First, much of the virtue of WebRTC
   (and any Web application) is that it is not bound to a particular
   piece of client software.
   Thus, it will be not only possible but routine for a user to use
   multiple browsers on different computers which will of course have
   different keying material (SACRED [RFC3760] notwithstanding).  Thus,
   users will frequently be alerted to key mismatches which are in fact
   completely legitimate, with the result that they are trained to
   simply click through them.  As it is known that users will routinely
   click through far more dire warnings [cranor-wolf], it seems
   extremely unlikely that any key continuity mechanism will be
   effective rather than simply annoying.

   Moreover, it is trivial to bypass even this kind of mechanism.
   Recall that unlike the case of SSH, the browser never directly gets
   the peer's identity from the user.  Rather, it is provided by the
   calling service.  Even enabling a mechanism of this type would
   require an API to allow the calling service to tell the browser
   "this is a call to user X".  All the calling service needs to do to
   avoid triggering a key continuity warning is to tell the browser
   that "this is a call to user Y" where Y is confusable with X.  Even
   if the user actually checks the other side's name (which all
   available evidence indicates is unlikely), this would require (a)
   the browser to use the trusted UI to provide the name and (b) the
   user to not be fooled by similar appearing names.

4.3.2.2.  Short Authentication Strings

   ZRTP [RFC6189] uses a "short authentication string" (SAS) which is
   derived from the key agreement protocol.  This SAS is designed to be
   compared by the users (e.g., read aloud over the voice channel or
   transmitted via an out-of-band channel) and, if confirmed by both
   sides, precludes MITM attack.  The intention is that the SAS is used
   once and then key continuity (though a different mechanism from that
   discussed above) is used thereafter.

   Unfortunately, the SAS does not offer a practical solution to the
   problem of a compromised calling service.  "Voice conversion"
   systems, which modify voice from one speaker to make it sound like
   another, are an active area of research.  These systems are already
   good enough to fool both automatic recognition systems
   [farus-conversion] and humans [kain-conversion] in many cases, and
   are of course likely to improve in the future, especially in an
   environment where the user just wants to get on with the phone call.
   Thus, even if SAS is effective today, it is likely not to be so for
   much longer.

   Additionally, it is unclear that users will actually use an SAS.  As
   discussed above, the browser UI constraints preclude requiring the
   SAS exchange prior to completing the call and so it must be
   voluntary; at most the browser will provide some UI indicator that
   the SAS has not yet been checked.  However, it is well known that
   when faced with optional security mechanisms, many users simply
   ignore them [whitten-johnny].

   Once users have checked the SAS once, key continuity is required to
   avoid them needing to check it on every call.  However, this is
   problematic for reasons indicated in Section 4.3.2.1.
   In principle it is of course possible to render a different UI
   element to indicate that calls are using an unauthenticated set of
   keying material (recall that the attacker can just present a
   slightly different name so that the attack shows the same UI as a
   call to a new device or to someone you haven't called before), but
   as a practical matter, users simply ignore such indicators, even in
   the rather more dire case of mixed content warnings.

4.3.2.3.  Third Party Identity

   The conventional approach to providing communications identity has
   of course been to have some third-party identity system (e.g., PKI)
   to authenticate the endpoints.  Such mechanisms have proven to be
   too cumbersome for use by typical users (and nearly too cumbersome
   for administrators).  However, a new generation of Web-based
   identity providers (BrowserID, Federated Google Login, Facebook
   Connect, OAuth [RFC6749], OpenID [OpenID], WebFinger [RFC7033]) has
   recently been developed and uses Web technologies to provide
   lightweight (from the user's perspective) third-party authenticated
   transactions.  It is possible to use systems of this type to
   authenticate WebRTC calls, linking them to existing user notions of
   identity (e.g., Facebook adjacencies).  Specifically, the third-
   party identity system is used to bind the user's identity to
   cryptographic keying material which is then used to authenticate the
   calling endpoints.  Calls which are authenticated in this fashion
   are naturally resistant even to active MITM attack by the calling
   site.

   Note that there is one special case in which PKI-style certificates
   do provide a practical solution: calls from end-users to large
   sites.  For instance, if you are making a call to Amazon.com, then
   Amazon can easily get a certificate to authenticate their media
   traffic, just as they get one to authenticate their Web traffic.
   This does not provide additional security value in cases in which
   the calling site and the media peer are one and the same, but might
   be useful in cases in which third parties (e.g., ad networks or
   retailers) arrange for calls but do not participate in them.

4.3.2.4.  Page Access to Media

   Verifying the identity of the far media endpoint is a necessary but
   not sufficient condition for providing media security.  In WebRTC,
   media flows are rendered into HTML5 MediaStreams which can be
   manipulated by the calling site.  Obviously, if the site can modify
   or view the media, then the user is not getting the level of
   assurance they would expect from being able to authenticate their
   peer.  In many cases, this is acceptable because the user values
   site-based special effects over complete security from the site.
   However, there are also cases where users wish to know that the site
   cannot interfere.  In order to facilitate that, it will be necessary
   to provide features whereby the site can verifiably give up access
   to the media streams.  This verification must be possible both from
   the local side and the remote side.  I.e., users must be able to
   verify that the person called has engaged a secure media mode (see
   Section 4.3.3).  In order to achieve this it will be necessary to
   cryptographically bind an indication of the local media access
   policy into the cryptographic authentication procedures detailed in
   the previous sections.
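
   The W3C WebRTC API contains hooks along these lines: constructing a
   connection with a peerIdentity option both requires the remote side
   to authenticate via the named identity provider and isolates the
   resulting media from the page.  A hedged sketch, in which the
   identity and identity-provider domain are placeholders:

      // Sketch: request identity-bound, page-isolated media.  The
      // identity and IdP domain below are placeholders.
      const pc = new RTCPeerConnection({
        peerIdentity: 'alice@idp.example.org'
      });
      // The local side can assert its own identity via an IdP:
      pc.setIdentityProvider('idp.example.org', {protocol: 'default'});
      // Tracks received on such a connection are isolated: the page
      // can render them but cannot read or modify their content.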

   It should be noted that the use of this secure media mode is left to
   the discretion of the site.  When such a mode is engaged, the
   browser will need to provide indicia to the user that the associated
   media has been authenticated as coming from the identified user.
   This allows WebRTC services that wish to claim end-to-end security
   to do so in a way that can be easily verified by the user.  This
   model requires that the remote party's browser be included in the
   TCB, as described in Section 3.

4.3.3.  Malicious Peers

   One class of attack that we do not generally try to prevent is
   attack by malicious peers.  For instance, no matter what
   confidentiality measures you employ, the person you are talking to
   might record the call and publish it on the Internet.  Similarly, we
   do not attempt to prevent them from using voice or video processing
   technology to hide or change their appearance.  While technologies
   (DRM, etc.) do exist to attempt to address these issues, they are
   generally not compatible with open systems and WebRTC does not
   address them.

   Similarly, we make no attempt to prevent prank calling or other
   unwanted calls.  In general, this is in the scope of the calling
   site, though because WebRTC does offer some forms of strong
   authentication, that may be useful as part of a defense against such
   attacks.

4.4.  Privacy Considerations

4.4.1.  Correlation of Anonymous Calls

   While persistent endpoint identifiers can be a useful security
   feature (see Section 4.3.2.1), they can also represent a privacy
   threat in settings where the user wishes to be anonymous.  WebRTC
   provides a number of possible persistent identifiers, such as DTLS
   certificates (if they are reused between connections) and RTCP
   CNAMEs (if generated according to [RFC6222] rather than the privacy-
   preserving mode of [RFC7022]).  In order to prevent this type of
   correlation, browsers need to provide mechanisms to reset these
   identifiers (e.g., with the same lifetime as cookies).  Moreover,
   the API should provide mechanisms to allow sites intended for
   anonymous calling to force the minting of fresh identifiers.  In
   addition, IP addresses can be a source of call linkage
   [I-D.ietf-rtcweb-ip-handling].

4.4.2.  Browser Fingerprinting

   Any new set of API features adds a risk of browser fingerprinting,
   and WebRTC is no exception.  Specifically, sites can use the
   presence or absence of specific devices as a browser fingerprint.
   In general, the API needs to be balanced between functionality and
   the incremental fingerprint risk.  See [Fingerprinting].

5.  Security Considerations

   This entire document is about security.

6.  Acknowledgements

   Bernard Aboba, Harald Alvestrand, Dan Druta, Cullen Jennings, Alan
   Johnston, Hadriel Kaplan (S 4.2.1), Matthew Kaufman, Martin Thomson,
   Magnus Westerlund.

7.  IANA Considerations

   There are no IANA considerations.

8.  Changes Since -04

   o  Replaced RTCWEB and RTC-Web with WebRTC, except when referring to
      the IETF WG.

   o  Removed discussion of the IFRAMEd advertisement case, since we
      decided not to treat it specially.

   o  Added a privacy considerations section.

   o  Significant edits to the SAS section to reflect Alan Johnston's
      comments.

   o  Added some discussion of IP location privacy and Tor.

   o  Updated the "communications consent" section to reflect
      draft-ietf.

   o  Added a section about "malicious peers".

   o  Added a section describing screen sharing threats.

   o  Assorted editorial changes.

9.  References

9.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

9.2.  Informative References

   [abarth-rtcweb]
              Barth, A., "Prompting the user is security failure",
              RTC-Web Workshop, September 2010.

   [CORS]     van Kesteren, A., "Cross-Origin Resource Sharing",
              January 2014.

   [cranor-wolf]
              Sunshine, J., Egelman, S., Almuhimedi, H., Atri, N., and
              L. Cranor, "Crying Wolf: An Empirical Study of SSL
              Warning Effectiveness", Proceedings of the 18th USENIX
              Security Symposium, August 2009.

   [farus-conversion]
              Farrus, M., Erro, D., and J. Hernando, "Speaker
              Recognition Robustness to Voice Conversion", January
              2008.

   [finer-grained]
              Barth, A. and C. Jackson, "Beware of Finer-Grained
              Origins", W2SP, July 2008.

   [Fingerprinting]
              "Fingerprinting Guidance for Web Specification Authors
              (Draft)", November 2013.

   [huang-w2sp]
              Huang, L-S., Chen, E., Barth, A., Rescorla, E., and C.
              Jackson, "Talking to Yourself for Fun and Profit", W2SP,
              May 2011.

   [I-D.ietf-rtcweb-ip-handling]
              Uberti, J., "WebRTC IP Address Handling Requirements",
              draft-ietf-rtcweb-ip-handling-12 (work in progress),
              July 2019.

   [I-D.ietf-rtcweb-overview]
              Alvestrand, H., "Overview: Real Time Protocols for
              Browser-based Applications", draft-ietf-rtcweb-
              overview-19 (work in progress), November 2017.

   [I-D.ietf-rtcweb-security-arch]
              Rescorla, E., "WebRTC Security Architecture",
              draft-ietf-rtcweb-security-arch-18 (work in progress),
              February 2019.

   [kain-conversion]
              Kain, A. and M. Macon, "Design and Evaluation of a Voice
              Conversion Algorithm based on Spectral Envelope Mapping
              and Residual Prediction", Proceedings of ICASSP, May
              2001.

   [OpenID]   Sakimura, N., Bradley, J., Jones, M., de Medeiros, B.,
              and C. Mortimore, "OpenID Connect Core 1.0", November
              2014.

   [RFC2818]  Rescorla, E., "HTTP Over TLS", RFC 2818,
              DOI 10.17487/RFC2818, May 2000,
              <https://www.rfc-editor.org/info/rfc2818>.

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
              A., Peterson, J., Sparks, R., Handley, M., and E.
              Schooler, "SIP: Session Initiation Protocol", RFC 3261,
              DOI 10.17487/RFC3261, June 2002,
              <https://www.rfc-editor.org/info/rfc3261>.

   [RFC3552]  Rescorla, E. and B. Korver, "Guidelines for Writing RFC
              Text on Security Considerations", BCP 72, RFC 3552,
              DOI 10.17487/RFC3552, July 2003,
              <https://www.rfc-editor.org/info/rfc3552>.

   [RFC3711]  Baugher, M., McGrew, D., Naslund, M., Carrara, E., and
              K. Norrman, "The Secure Real-time Transport Protocol
              (SRTP)", RFC 3711, DOI 10.17487/RFC3711, March 2004,
              <https://www.rfc-editor.org/info/rfc3711>.

   [RFC3760]  Gustafson, D., Just, M., and M. Nystrom, "Securely
              Available Credentials (SACRED) - Credential Server
              Framework", RFC 3760, DOI 10.17487/RFC3760, April 2004,
              <https://www.rfc-editor.org/info/rfc3760>.

   [RFC4251]  Ylonen, T. and C. Lonvick, Ed., "The Secure Shell (SSH)
              Protocol Architecture", RFC 4251, DOI 10.17487/RFC4251,
              January 2006, <https://www.rfc-editor.org/info/rfc4251>.

   [RFC4568]  Andreasen, F., Baugher, M., and D.
Wing, "Session 1100 Description Protocol (SDP) Security Descriptions for Media 1101 Streams", RFC 4568, DOI 10.17487/RFC4568, July 2006, 1102 . 1104 [RFC5479] Wing, D., Ed., Fries, S., Tschofenig, H., and F. Audet, 1105 "Requirements and Analysis of Media Security Management 1106 Protocols", RFC 5479, DOI 10.17487/RFC5479, April 2009, 1107 . 1109 [RFC5763] Fischl, J., Tschofenig, H., and E. Rescorla, "Framework 1110 for Establishing a Secure Real-time Transport Protocol 1111 (SRTP) Security Context Using Datagram Transport Layer 1112 Security (DTLS)", RFC 5763, DOI 10.17487/RFC5763, May 1113 2010, . 1115 [RFC6189] Zimmermann, P., Johnston, A., Ed., and J. Callas, "ZRTP: 1116 Media Path Key Agreement for Unicast Secure RTP", 1117 RFC 6189, DOI 10.17487/RFC6189, April 2011, 1118 . 1120 [RFC6222] Begen, A., Perkins, C., and D. Wing, "Guidelines for 1121 Choosing RTP Control Protocol (RTCP) Canonical Names 1122 (CNAMEs)", RFC 6222, DOI 10.17487/RFC6222, April 2011, 1123 . 1125 [RFC6347] Rescorla, E. and N. Modadugu, "Datagram Transport Layer 1126 Security Version 1.2", RFC 6347, DOI 10.17487/RFC6347, 1127 January 2012, . 1129 [RFC6454] Barth, A., "The Web Origin Concept", RFC 6454, 1130 DOI 10.17487/RFC6454, December 2011, 1131 . 1133 [RFC6455] Fette, I. and A. Melnikov, "The WebSocket Protocol", 1134 RFC 6455, DOI 10.17487/RFC6455, December 2011, 1135 . 1137 [RFC6749] Hardt, D., Ed., "The OAuth 2.0 Authorization Framework", 1138 RFC 6749, DOI 10.17487/RFC6749, October 2012, 1139 . 1141 [RFC7022] Begen, A., Perkins, C., Wing, D., and E. Rescorla, 1142 "Guidelines for Choosing RTP Control Protocol (RTCP) 1143 Canonical Names (CNAMEs)", RFC 7022, DOI 10.17487/RFC7022, 1144 September 2013, . 1146 [RFC7033] Jones, P., Salgueiro, G., Jones, M., and J. Smarr, 1147 "WebFinger", RFC 7033, DOI 10.17487/RFC7033, September 1148 2013, . 1150 [RFC7675] Perumal, M., Wing, D., Ravindranath, R., Reddy, T., and M. 1151 Thomson, "Session Traversal Utilities for NAT (STUN) Usage 1152 for Consent Freshness", RFC 7675, DOI 10.17487/RFC7675, 1153 October 2015, . 1155 [RFC8445] Keranen, A., Holmberg, C., and J. Rosenberg, "Interactive 1156 Connectivity Establishment (ICE): A Protocol for Network 1157 Address Translator (NAT) Traversal", RFC 8445, 1158 DOI 10.17487/RFC8445, July 2018, 1159 . 1161 [SWF] "SWF File Format Specification Version 19", April 2013. 1163 [whitten-johnny] 1164 Whitten, A. and J. Tygar, "Why Johnny Can't Encrypt: A 1165 Usability Evaluation of PGP 5.0", Proceedings of the 8th 1166 USENIX Security Symposium, 1999, August 1999. 1168 [XmlHttpRequest] 1169 van Kesteren, A., "XMLHttpRequesti Level 2", January 2012. 1171 Author's Address 1172 Eric Rescorla 1173 RTFM, Inc. 1174 2064 Edgewood Drive 1175 Palo Alto, CA 94303 1176 USA 1178 Phone: +1 650 678 2350 1179 Email: ekr@rtfm.com