RTCWEB Working Group                                          C. Perkins
Internet-Draft                                     University of Glasgow
Intended status: Standards Track                           M. Westerlund
Expires: November 17, 2014                                      Ericsson
                                                                  J. Ott
                                                        Aalto University
                                                            May 16, 2014

   Web Real-Time Communication (WebRTC): Media Transport and Use of RTP
                       draft-ietf-rtcweb-rtp-usage-14

Abstract

The Web Real-Time Communication (WebRTC) framework provides support for direct interactive rich communication using audio, video, text, collaboration, games, etc. between two peers' web-browsers. This memo describes the media transport aspects of the WebRTC framework. It specifies how the Real-time Transport Protocol (RTP) is used in the WebRTC context, and gives requirements for which RTP features, profiles, and extensions need to be supported.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on November 17, 2014.

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Rationale
   3.  Terminology
   4.  WebRTC Use of RTP: Core Protocols
     4.1.  RTP and RTCP
     4.2.  Choice of the RTP Profile
     4.3.  Choice of RTP Payload Formats
     4.4.  Use of RTP Sessions
     4.5.  RTP and RTCP Multiplexing
     4.6.  Reduced Size RTCP
     4.7.  Symmetric RTP/RTCP
     4.8.  Choice of RTP Synchronisation Source (SSRC)
     4.9.  Generation of the RTCP Canonical Name (CNAME)
     4.10. Handling of Leap Seconds
   5.  WebRTC Use of RTP: Extensions
     5.1.  Conferencing Extensions and Topologies
       5.1.1.  Full Intra Request (FIR)
       5.1.2.  Picture Loss Indication (PLI)
       5.1.3.  Slice Loss Indication (SLI)
       5.1.4.  Reference Picture Selection Indication (RPSI)
       5.1.5.  Temporal-Spatial Trade-off Request (TSTR)
       5.1.6.  Temporary Maximum Media Stream Bit Rate Request (TMMBR)
     5.2.  Header Extensions
       5.2.1.  Rapid Synchronisation
       5.2.2.  Client-to-Mixer Audio Level
       5.2.3.  Mixer-to-Client Audio Level
   6.  WebRTC Use of RTP: Improving Transport Robustness
     6.1.  Negative Acknowledgements and RTP Retransmission
     6.2.  Forward Error Correction (FEC)
   7.  WebRTC Use of RTP: Rate Control and Media Adaptation
     7.1.  Boundary Conditions and Circuit Breakers
     7.2.  Congestion Control Interoperability and Legacy Systems
   8.  WebRTC Use of RTP: Performance Monitoring
   9.  WebRTC Use of RTP: Future Extensions
   10. Signalling Considerations
   11. WebRTC API Considerations
   12. RTP Implementation Considerations
     12.1.  Configuration and Use of RTP Sessions
       12.1.1.  Use of Multiple Media Sources Within an RTP Session
       12.1.2.  Use of Multiple RTP Sessions
       12.1.3.  Differentiated Treatment of RTP Packet Streams
     12.2.  Media Source, RTP Packet Streams, and Participant Identification
       12.2.1.  Media Source Identification
       12.2.2.  SSRC Collision Detection
       12.2.3.  Media Synchronisation Context
   13. Security Considerations
   14. IANA Considerations
   15. Acknowledgements
   16. References
     16.1.  Normative References
     16.2.  Informative References
   Authors' Addresses

1. Introduction

The Real-time Transport Protocol (RTP) [RFC3550] provides a framework for delivery of audio and video teleconferencing data and other real-time media applications. Previous work has defined the RTP protocol, along with numerous profiles, payload formats, and other extensions. When combined with appropriate signalling, these form the basis for many teleconferencing systems.

The Web Real-Time Communication (WebRTC) framework provides the protocol building blocks to support direct, interactive, real-time communication using audio, video, collaboration, games, etc., between two peers' web-browsers. This memo describes how the RTP framework is to be used in the WebRTC context. It proposes a baseline set of RTP features that are to be implemented by all WebRTC-aware end-points, along with suggested extensions for enhanced functionality.

This memo specifies a protocol intended for use within the WebRTC framework, but is not restricted to that context. An overview of the WebRTC framework is given in [I-D.ietf-rtcweb-overview].

The structure of this memo is as follows. Section 2 outlines our rationale in preparing this memo and choosing these RTP features. Section 3 defines terminology.
Requirements for core RTP protocols are described in Section 4, and suggested RTP extensions are described in Section 5. Section 6 outlines mechanisms that can increase robustness to network problems, while Section 7 describes congestion control and rate adaptation mechanisms. The discussion of mandated RTP mechanisms concludes in Section 8 with a review of performance monitoring and network management tools that can be used in the WebRTC context. Section 9 gives some guidelines for future incorporation of other RTP and RTP Control Protocol (RTCP) extensions into this framework. Section 10 describes requirements placed on the signalling channel. Section 11 discusses the relationship between features of the RTP framework and the WebRTC application programming interface (API), and Section 12 discusses RTP implementation considerations. The memo concludes with security considerations (Section 13) and IANA considerations (Section 14).

2. Rationale

The RTP framework comprises the RTP data transfer protocol, the RTP control protocol, and numerous RTP payload formats, profiles, and extensions. This range of add-ons has allowed RTP to meet various needs that were not envisaged by the original protocol designers, and to support many new media encodings, but raises the question of what extensions are to be supported by new implementations. The development of the WebRTC framework provides an opportunity to review the available RTP features and extensions, and to define a common baseline feature set for all WebRTC implementations of RTP. This builds on the past 20 years of development of RTP to mandate the use of extensions that have shown widespread utility, while still remaining compatible with the wide installed base of RTP implementations where possible.

RTP and RTCP extensions that are not discussed in this document can be implemented by WebRTC end-points if they are beneficial for new use cases. However, they are not necessary to address the WebRTC use cases and requirements identified in [I-D.ietf-rtcweb-use-cases-and-requirements].

While the baseline set of RTP features and extensions defined in this memo is targeted at the requirements of the WebRTC framework, it is expected to be broadly useful for other conferencing-related uses of RTP. In particular, it is likely that this set of RTP features and extensions will be appropriate for other desktop or mobile video conferencing systems, or for room-based high-quality telepresence applications.

3. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. The RFC 2119 interpretation of these key words applies only when written in ALL CAPS. Lower- or mixed-case uses of these key words are not to be interpreted as carrying special significance in this memo.

We define the following additional terms:

WebRTC MediaStream: The MediaStream concept defined by the W3C in the WebRTC API [W3C.WD-mediacapture-streams-20130903].

Transport-layer Flow: A uni-directional flow of transport packets that are identified by having a particular 5-tuple of source IP address, source port, destination IP address, destination port, and transport protocol used.

Bi-directional Transport-layer Flow: A bi-directional transport-layer flow is a transport-layer flow that is symmetric. That is, the transport-layer flow in the reverse direction has a 5-tuple where the source and destination address and ports are swapped compared to the forward path transport-layer flow, and the transport protocol is the same.

This document uses the terminology from [I-D.ietf-avtext-rtp-grouping-taxonomy]. Other terms are used according to their definitions from the RTP Specification [RFC3550]. Especially note the following frequently used terms: RTP Packet Stream, RTP Session, and End-point.

4. WebRTC Use of RTP: Core Protocols

The following sections describe the core features of RTP and RTCP that need to be implemented, along with the mandated RTP profiles. Also described are the core extensions providing essential features that all WebRTC implementations need to implement to function effectively on today's networks.

4.1. RTP and RTCP

The Real-time Transport Protocol (RTP) [RFC3550] is REQUIRED to be implemented as the media transport protocol for WebRTC. RTP itself comprises two parts: the RTP data transfer protocol, and the RTP control protocol (RTCP). RTCP is a fundamental and integral part of RTP, and MUST be implemented in all WebRTC applications.

The following RTP and RTCP features are sometimes omitted in limited functionality implementations of RTP, but are REQUIRED in all WebRTC implementations:

o  Support for use of multiple simultaneous SSRC values in a single RTP session, including support for RTP end-points that send many SSRC values simultaneously, following [RFC3550] and [I-D.ietf-avtcore-rtp-multi-stream]. Support for the RTCP optimisations for multi-SSRC sessions defined in [I-D.ietf-avtcore-rtp-multi-stream-optimisation] is RECOMMENDED.

o  Random choice of SSRC on joining a session; collision detection and resolution for SSRC values (see also Section 4.8).

o  Support for reception of RTP data packets containing CSRC lists, as generated by RTP mixers, and RTCP packets relating to CSRCs.

o  Sending correct synchronisation information in the RTCP Sender Reports, to allow receivers to implement lip-synchronisation; see Section 5.2.1 regarding support for the rapid RTP synchronisation extensions.

o  Support for multiple synchronisation contexts. Participants that send multiple simultaneous RTP packet streams SHOULD do so as part of a single synchronisation context, using a single RTCP CNAME for all streams and allowing receivers to play the streams out in a synchronised manner. For compatibility with potential future versions of this specification, or for interoperability with non-WebRTC devices through a gateway, receivers MUST support multiple synchronisation contexts, indicated by the use of multiple RTCP CNAMEs in an RTP session. This specification requires the usage of a single CNAME when sending RTP packet streams in some circumstances; see Section 4.9.

o  Support for sending and receiving RTCP SR, RR, SDES, and BYE packet types, with OPTIONAL support for other RTCP packet types unless mandated by other parts of this specification. Note that additional RTCP packet types are used by the RTP/SAVPF profile (Section 4.2) and the other RTCP extensions (Section 5).

o  Support for multiple end-points in a single RTP session, and for scaling the RTCP transmission interval according to the number of participants in the session; support for randomised RTCP transmission intervals to avoid synchronisation of RTCP reports; support for RTCP timer reconsideration (Section 6.3.6 of [RFC3550]) and reverse reconsideration (Section 6.3.4 of [RFC3550]).

o  Support for configuring the RTCP bandwidth as a fraction of the media bandwidth, and for configuring the fraction of the RTCP bandwidth allocated to senders, e.g., using the SDP "b=" line [RFC4566][RFC3556].

o  Support for the reduced minimum RTCP reporting interval described in Section 6.2 of [RFC3550]. When using the reduced minimum RTCP reporting interval, the fixed (non-reduced) minimum interval MUST be used when calculating the participant timeout interval (see Sections 6.2 and 6.3.5 of [RFC3550]). The delay before sending the initial compound RTCP packet can be set to zero (see Section 6.2 of [RFC3550] as updated by [I-D.ietf-avtcore-rtp-multi-stream]).

o  Ignoring unknown RTCP packet types and RTP header extensions. This is to ensure robust handling of future extensions, middlebox behaviours, etc., that can result in RTCP packet types or RTP header extensions being received that have not been signalled. If a compound RTCP packet is received that contains a mixture of known and unknown RTCP packet types, the known packet types need to be processed as usual, with only the unknown packet types being discarded.

It is known that a significant number of legacy RTP implementations, especially those targeted at VoIP-only systems, do not support all of the above features, and in some cases do not support RTCP at all. Implementers are advised to consider the requirements for graceful degradation when interoperating with legacy implementations.

Other implementation considerations are discussed in Section 12.

4.2. Choice of the RTP Profile

The complete specification of RTP for a particular application domain requires the choice of an RTP Profile. For WebRTC use, the Extended Secure RTP Profile for RTCP-Based Feedback (RTP/SAVPF) [RFC5124], as extended by [RFC7007], MUST be implemented. The RTP/SAVPF profile is the combination of the basic RTP/AVP profile [RFC3551], the RTP profile for RTCP-based feedback (RTP/AVPF) [RFC4585], and the secure RTP profile (RTP/SAVP) [RFC3711].

The RTCP-based feedback extensions [RFC4585] are needed for the improved RTCP timer model. This allows more flexible transmission of RTCP packets in response to events, rather than strictly according to bandwidth, and is vital for being able to report congestion signals as well as media events. These extensions also allow saving RTCP bandwidth, and an end-point will commonly only use the full RTCP bandwidth allocation if there are many events that require feedback. The timer rules are also needed to make use of the RTP conferencing extensions discussed in Section 5.1.

   Note: The enhanced RTCP timer model defined in the RTP/AVPF profile is backwards compatible with legacy systems that implement only the RTP/AVP or RTP/SAVP profile, given some constraints on parameter configuration such as the RTCP bandwidth value and "trr-int" (the most important factor for interworking with RTP/(S)AVP end-points via a gateway is to set the trr-int parameter to a value representing 4 seconds).

The secure RTP (SRTP) profile extensions [RFC3711] are needed to provide media encryption, integrity protection, replay protection, and a limited form of source authentication. WebRTC implementations MUST NOT send packets using the basic RTP/AVP profile or the RTP/AVPF profile; they MUST employ the full RTP/SAVPF profile to protect all RTP and RTCP packets that are generated (i.e., implementations MUST use SRTP and SRTCP). The RTP/SAVPF profile MUST be configured using the cipher suites, DTLS-SRTP protection profiles, keying mechanisms, and other parameters described in [I-D.ietf-rtcweb-security-arch].

4.3. Choice of RTP Payload Formats

The set of mandatory-to-implement codecs and RTP payload formats for WebRTC is not specified in this memo; instead, they are defined in separate specifications, such as [I-D.ietf-rtcweb-audio]. Implementations can support any codec for which an RTP payload format and associated signalling is defined. Implementations cannot assume that the other participants in an RTP session understand any RTP payload format, no matter how common; the mapping between RTP payload type numbers and specific configurations of particular RTP payload formats MUST be agreed before those payload types/formats can be used. In an SDP context, this can be done using the "a=rtpmap:" and "a=fmtp:" attributes associated with an "m=" line, along with any other SDP attributes needed to configure the RTP payload format (an illustrative fragment is shown after the considerations below).

End-points can signal support for multiple RTP payload formats, or multiple configurations of a single RTP payload format, as long as each unique RTP payload format configuration uses a different RTP payload type number. As outlined in Section 4.8, the RTP payload type number is sometimes used to associate an RTP packet stream with a signalling context. This association is possible provided unique RTP payload type numbers are used in each context. For example, an RTP packet stream can be associated with an SDP "m=" line by comparing the RTP payload type numbers used by the RTP packet stream with payload types signalled in the "a=rtpmap:" lines in the media sections of the SDP. This leads to the following considerations:

   If RTP packet streams are being associated with signalling contexts based on the RTP payload type, then the assignment of RTP payload type numbers MUST be unique across signalling contexts.

   If the same RTP payload format configuration is used in multiple contexts, then a different RTP payload type number has to be assigned in each context to ensure uniqueness.

   If the RTP payload type number is not being used to associate RTP packet streams with a signalling context, then the same RTP payload type number can be used to indicate the exact same RTP payload format configuration in multiple contexts.

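The following SDP fragment is a purely illustrative sketch of such a payload type mapping; the codec names, payload type numbers, ports, and transport tokens shown are examples rather than requirements of this memo. Each payload format configuration is given its own dynamic payload type number, allowing an incoming RTP packet stream to be associated with the correct "m=" line:

   m=audio 54400 UDP/TLS/RTP/SAVPF 96
   a=rtpmap:96 opus/48000/2
   a=fmtp:96 useinbandfec=1
   m=video 54402 UDP/TLS/RTP/SAVPF 97
   a=rtpmap:97 VP8/90000

Payload types 96 and 97 are taken from the dynamic range and are unique across the two media descriptions, which is what makes the payload type based association described above possible.
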
A single RTP payload type number MUST NOT be assigned to different RTP payload formats, or different configurations of the same RTP payload format, within a single RTP session (note that the "m=" lines in an SDP bundle group [I-D.ietf-mmusic-sdp-bundle-negotiation] form a single RTP session).

An end-point that has signalled support for multiple RTP payload formats MUST be able to accept data in any of those payload formats at any time, unless it has previously signalled limitations on its decoding capability. This requirement is constrained if several types of media (e.g., audio and video) are sent in the same RTP session. In such a case, a source (SSRC) is restricted to switching only between the RTP payload formats signalled for the type of media that is being sent by that source; see Section 4.4. To support rapid rate adaptation by changing codec, RTP does not require advance signalling for changes between RTP payload formats used by a single SSRC that were signalled during session set-up.

If performing changes between two RTP payload types that use different RTP clock rates, an RTP sender MUST follow the recommendations in Section 4.1 of [RFC7160]. RTP receivers MUST follow the recommendations in Section 4.3 of [RFC7160] in order to support sources that switch between clock rates in an RTP session (these recommendations for receivers are backwards compatible with the case where senders use only a single clock rate).

4.4. Use of RTP Sessions

An association amongst a set of end-points communicating using RTP is known as an RTP session [RFC3550]. An end-point can be involved in several RTP sessions at the same time. In a multimedia session, each type of media has typically been carried in a separate RTP session (e.g., using one RTP session for the audio, and a separate RTP session using a different transport-layer flow for the video). WebRTC implementations of RTP are REQUIRED to implement support for multimedia sessions in this way, separating each session using different transport-layer flows for compatibility with legacy systems.

In modern-day networks, however, with the widespread use of network address/port translators (NAT/NAPT) and firewalls, it is desirable to reduce the number of transport-layer flows used by RTP applications. This can be done by sending all the RTP packet streams in a single RTP session, which will comprise a single transport-layer flow (this will prevent the use of some quality-of-service mechanisms, as discussed in Section 12.1.3). Implementations are therefore also REQUIRED to support transport of all RTP packet streams, independent of media type, in a single RTP session using a single transport-layer flow, according to [I-D.ietf-avtcore-multi-media-rtp-session]. If multiple types of media are to be used in a single RTP session, all participants in that RTP session MUST agree to this usage. In an SDP context, [I-D.ietf-mmusic-sdp-bundle-negotiation] can be used to signal such a bundle of RTP packet streams forming a single RTP session.

Further discussion about the suitability of different RTP session structures and multiplexing methods to different scenarios can be found in [I-D.ietf-avtcore-multiplex-guidelines].

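As a minimal, hypothetical illustration (the identifiers, ports, and codecs are examples only; the exact negotiation rules are defined by [I-D.ietf-mmusic-sdp-bundle-negotiation]), a session description that bundles audio and video into a single RTP session over one transport-layer flow might contain:

   a=group:BUNDLE audio video
   m=audio 54400 UDP/TLS/RTP/SAVPF 96
   a=mid:audio
   a=rtpmap:96 opus/48000/2
   m=video 54400 UDP/TLS/RTP/SAVPF 97
   a=mid:video
   a=rtpmap:97 VP8/90000

Both "m=" lines share the same port, reflecting the single transport-layer flow, and together they form one RTP session as discussed above.
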
4.5. RTP and RTCP Multiplexing

Historically, RTP and RTCP have been run on separate transport-layer flows (e.g., two UDP ports for each RTP session, one port for RTP and one port for RTCP). With the increased use of Network Address/Port Translation (NAT/NAPT) this has become problematic, since maintaining multiple NAT bindings can be costly. It also complicates firewall administration, since multiple ports need to be opened to allow RTP traffic. To reduce these costs and session set-up times, implementations are REQUIRED to support multiplexing RTP data packets and RTCP control packets on a single transport-layer flow [RFC5761]. Such RTP and RTCP multiplexing MUST be negotiated in the signalling channel before it is used. If SDP is used for signalling, this negotiation MUST use the attributes defined in [RFC5761]. For backwards compatibility, implementations are also REQUIRED to support RTP and RTCP sent on separate transport-layer flows.

Note that the use of RTP and RTCP multiplexed onto a single transport-layer flow ensures that there is occasional traffic sent on that port, even if there is no active media traffic. This can be useful to keep NAT bindings alive [RFC6263].

4.6. Reduced Size RTCP

RTCP packets are usually sent as compound RTCP packets, and [RFC3550] requires that those compound packets start with a Sender Report (SR) or Receiver Report (RR) packet. When using frequent RTCP feedback messages under the RTP/AVPF profile [RFC4585], these statistics are not needed in every packet, and they unnecessarily increase the mean RTCP packet size. This can limit the frequency at which RTCP packets can be sent within the RTCP bandwidth share.

To avoid this problem, [RFC5506] specifies how to reduce the mean RTCP message size and allow for more frequent feedback. Frequent feedback, in turn, is essential to make real-time applications quickly aware of changing network conditions, and to allow them to adapt their transmission and encoding behaviour. Implementations MUST support sending and receiving non-compound RTCP feedback packets [RFC5506]. Use of non-compound RTCP packets MUST be negotiated using the signalling channel. If SDP is used for signalling, this negotiation MUST use the attributes defined in [RFC5506]. For backwards compatibility, implementations are also REQUIRED to support the use of compound RTCP feedback packets if the remote end-point does not agree to the use of non-compound RTCP in the signalling exchange.

4.7. Symmetric RTP/RTCP

To ease traversal of NAT and firewall devices, implementations are REQUIRED to implement and use Symmetric RTP [RFC4961]. The reason for using symmetric RTP is primarily to avoid issues with NATs and firewalls by ensuring that the send and receive RTP packet streams, as well as RTCP, are actually bi-directional transport-layer flows. This will keep alive the NAT and firewall pinholes, and help indicate consent that the receive direction is a transport-layer flow the intended recipient actually wants. In addition, it saves resources, specifically ports at the end-points, but also in the network, as NAT mappings or firewall state is not unnecessarily bloated. The amount of per-flow QoS state kept in the network is also reduced.

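For illustration only (the attribute names are those defined in [RFC5761] and [RFC5506]; the media description itself is hypothetical), an offer requesting both RTP and RTCP multiplexing (Section 4.5) and reduced-size RTCP (Section 4.6) could include:

   m=audio 54400 UDP/TLS/RTP/SAVPF 96
   a=rtpmap:96 opus/48000/2
   a=rtcp-mux
   a=rtcp-rsize

If the answer omits these attributes, the offerer falls back to separate RTP and RTCP transport-layer flows and compound RTCP packets, as required for backwards compatibility above.
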
4.8. Choice of RTP Synchronisation Source (SSRC)

Implementations are REQUIRED to support signalled RTP synchronisation source (SSRC) identifiers. If SDP is used, this MUST be done using the "a=ssrc:" SDP attribute defined in Section 4.1 and Section 5 of [RFC5576] and the "previous-ssrc" source attribute defined in Section 6.2 of [RFC5576]; other per-SSRC attributes defined in [RFC5576] MAY be supported.

While support for signalled SSRC identifiers is mandated, their use in an RTP session is OPTIONAL. Implementations MUST be prepared to accept RTP and RTCP packets using SSRCs that have not been explicitly signalled ahead of time. Implementations MUST support random SSRC assignment, and MUST support SSRC collision detection and resolution, according to [RFC3550]. When using signalled SSRC values, collision detection MUST be performed as described in Section 5 of [RFC5576].

It is often desirable to associate an RTP packet stream with a non-RTP context. For users of the WebRTC API, a mapping between SSRCs and MediaStreamTracks is provided; see Section 11. For gateways or other usages it is possible to associate an RTP packet stream with an "m=" line in a session description formatted using SDP. If SSRCs are signalled, this is straightforward (in SDP the "a=ssrc:" line will be at the media level, allowing a direct association with an "m=" line). If SSRCs are not signalled, the RTP payload type numbers used in an RTP packet stream are often sufficient to associate that packet stream with a signalling context (e.g., if RTP payload type numbers are assigned as described in Section 4.3 of this memo, the RTP payload types used by an RTP packet stream can be compared with values in SDP "a=rtpmap:" lines, which are at the media level in SDP, and so map to an "m=" line).

4.9. Generation of the RTCP Canonical Name (CNAME)

The RTCP Canonical Name (CNAME) provides a persistent transport-level identifier for an RTP end-point. While the Synchronisation Source (SSRC) identifier for an RTP end-point can change if a collision is detected, or when the RTP application is restarted, its RTCP CNAME is meant to stay unchanged for the duration of an RTCPeerConnection [W3C.WD-webrtc-20130910], so that RTP end-points can be uniquely identified and associated with their RTP packet streams within a set of related RTP sessions.

Each RTP end-point MUST have at least one RTCP CNAME, and that RTCP CNAME MUST be unique within the RTCPeerConnection. RTCP CNAMEs identify a particular synchronisation context, i.e., all SSRCs associated with a single RTCP CNAME share a common reference clock. If an end-point has SSRCs that are associated with several unsynchronised reference clocks, and hence different synchronisation contexts, it will need to use multiple RTCP CNAMEs, one for each synchronisation context.

Taking the discussion in Section 11 into account, a WebRTC end-point MUST NOT use more than one RTCP CNAME in the RTP sessions belonging to a single RTCPeerConnection (that is, an RTCPeerConnection forms a synchronisation context). RTP middleboxes MAY generate RTP packet streams associated with more than one RTCP CNAME, to allow them to avoid having to resynchronise media from the multiple different end-points that are part of a multi-party RTP session.

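As a purely illustrative sketch of the per-SSRC signalling mandated in Section 4.8 (the SSRC value and CNAME string are hypothetical), an SDP media section might associate a signalled SSRC with an [RFC7022]-style short-term persistent CNAME as follows:

   m=video 54402 UDP/TLS/RTP/SAVPF 97
   a=rtpmap:97 VP8/90000
   a=ssrc:314159265 cname:k0Yav9jS4nnVbtLQ

Since the "a=ssrc:" line is at the media level, it allows a direct association between the RTP packet stream and this "m=" line, as described above.
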
The RTP specification [RFC3550] includes guidelines for choosing a unique RTCP CNAME, but these are not sufficient in the presence of NAT devices. In addition, long-term persistent identifiers can be problematic from a privacy viewpoint (Section 13). Accordingly, a WebRTC endpoint MUST generate a new, unique, short-term persistent RTCP CNAME for each RTCPeerConnection, following [RFC7022], with a single exception: if explicitly requested at creation, an RTCPeerConnection MAY use the same CNAME as an existing RTCPeerConnection within their common same-origin context.

A WebRTC end-point MUST support reception of any CNAME that matches the syntax limitations specified by the RTP specification [RFC3550] and cannot assume that any CNAME will be chosen according to the form suggested above.

4.10. Handling of Leap Seconds

The guidelines regarding handling of leap seconds to limit their impact on RTP media play-out and synchronisation given in [RFC7164] SHOULD be followed.

5. WebRTC Use of RTP: Extensions

There are a number of RTP extensions that are either needed to obtain full functionality or extremely useful for improving on the baseline performance in the WebRTC application context. One set of these extensions is related to conferencing, while others are more generic in nature. The following subsections describe the various RTP extensions mandated or suggested for use within the WebRTC context.

5.1. Conferencing Extensions and Topologies

RTP is a protocol that inherently supports group communication. Groups can be implemented by having each endpoint send its RTP packet streams to an RTP middlebox that redistributes the traffic, by using a mesh of unicast RTP packet streams between endpoints, or by using an IP multicast group to distribute the RTP packet streams. These topologies can be implemented in a number of ways as discussed in [I-D.ietf-avtcore-rtp-topologies-update].

While the use of IP multicast groups is popular in IPTV systems, the topologies based on RTP middleboxes are dominant in interactive video conferencing environments. Topologies based on a mesh of unicast transport-layer flows to create a common RTP session have not seen widespread deployment to date. Accordingly, WebRTC implementations are not expected to support topologies based on IP multicast groups or to support mesh-based topologies, such as a point-to-multipoint mesh configured as a single RTP session (Topo-Mesh in the terminology of [I-D.ietf-avtcore-rtp-topologies-update]). However, a point-to-multipoint mesh constructed using several RTP sessions, implemented in the WebRTC context using independent RTCPeerConnections [W3C.WD-webrtc-20130910], can be expected to be utilised by WebRTC applications and needs to be supported.

WebRTC implementations of RTP endpoints implemented according to this memo are expected to support all the topologies described in [I-D.ietf-avtcore-rtp-topologies-update] where the RTP endpoints send and receive unicast RTP packet streams to and from some peer device, provided that peer can participate in performing congestion control on the RTP packet streams. The peer device could be another RTP endpoint, or it could be an RTP middlebox that redistributes the RTP packet streams to other RTP endpoints.
This limitation means that some of the RTP middlebox-based topologies are not suitable for use in the WebRTC environment. Specifically:

o  Video switching MCUs (Topo-Video-switch-MCU) SHOULD NOT be used, since they make the use of RTCP for congestion control and quality of service reports problematic (see Section 3.8 of [I-D.ietf-avtcore-rtp-topologies-update]).

o  The Relay-Transport Translator (Topo-PtM-Trn-Translator) topology SHOULD NOT be used, because its safe use requires a congestion control algorithm or RTP circuit breaker that handles point-to-multipoint, which has not yet been standardised.

The following topology can be used; however, it has some issues worth noting:

o  Content modifying MCUs with RTCP termination (Topo-RTCP-terminating-MCU) MAY be used. Note that in this RTP topology, RTP loop detection and identification of active senders is the responsibility of the WebRTC application; since the clients are isolated from each other at the RTP layer, RTP cannot assist with these functions (see Section 3.9 of [I-D.ietf-avtcore-rtp-topologies-update]).

The RTP extensions described in Section 5.1.1 to Section 5.1.6 are designed to be used with centralised conferencing, where an RTP middlebox (e.g., a conference bridge) receives a participant's RTP packet streams and distributes them to the other participants. These extensions are not necessary for interoperability; an RTP end-point that does not implement these extensions will work correctly, but might offer poor performance. Support for the listed extensions will greatly improve the quality of experience and, to provide a reasonable baseline quality, support for some of these extensions is mandatory for WebRTC end-points.

The RTCP conferencing extensions are defined in the Extended RTP Profile for Real-time Transport Control Protocol (RTCP)-Based Feedback (RTP/AVPF) [RFC4585] and the memo on Codec Control Messages (CCM) in RTP/AVPF [RFC5104]; they are fully usable by the secure variant of this profile (RTP/SAVPF) [RFC5124].

5.1.1. Full Intra Request (FIR)

The Full Intra Request message is defined in Sections 3.5.1 and 4.3.1 of the Codec Control Messages [RFC5104]. It is used to make the mixer request a new Intra picture from a participant in the session. This is used when switching between sources to ensure that the receivers can decode the video or other predictive media encoding with long prediction chains. WebRTC senders MUST understand and react to FIR feedback messages they receive, since this greatly improves the user experience when using centralised mixer-based conferencing. Support for sending FIR messages is OPTIONAL.

5.1.2. Picture Loss Indication (PLI)

The Picture Loss Indication message is defined in Section 6.3.1 of the RTP/AVPF profile [RFC4585]. It is used by a receiver to tell the sending encoder that it lost the decoder context and would like to have it repaired somehow. This is semantically different from the Full Intra Request above, as there could be multiple ways to fulfil the request. WebRTC senders MUST understand and react to PLI feedback messages as a loss tolerance mechanism. Receivers MAY send PLI messages.

5.1.3. Slice Loss Indication (SLI)

The Slice Loss Indication message is defined in Section 6.3.2 of the RTP/AVPF profile [RFC4585].
It is used by a receiver to tell the encoder that it has detected the loss or corruption of one or more consecutive macro blocks, and would like to have these repaired somehow. It is RECOMMENDED that receivers generate SLI feedback messages if slices are lost when using a codec that supports the concept of macro blocks. A sender that receives an SLI feedback message SHOULD attempt to repair the lost slice(s).

5.1.4. Reference Picture Selection Indication (RPSI)

Reference Picture Selection Indication (RPSI) messages are defined in Section 6.3.3 of the RTP/AVPF profile [RFC4585]. Some video encoding standards allow the use of older reference pictures than the most recent one for predictive coding. If such a codec is in use, and if the encoder has learnt that encoder-decoder synchronisation has been lost, then a reference picture that is known to be correct can be used as a basis for future coding. The RPSI message allows this to be signalled. Receivers that detect that encoder-decoder synchronisation has been lost SHOULD generate an RPSI feedback message if the codec being used supports reference picture selection. An RTP packet stream sender that receives such an RPSI message SHOULD act on that message to change the reference picture, if it is possible to do so within the available bandwidth constraints, and with the codec being used.

5.1.5. Temporal-Spatial Trade-off Request (TSTR)

The temporal-spatial trade-off request and notification are defined in Sections 3.5.2 and 4.3.2 of [RFC5104]. This request can be used to ask the video encoder to change the trade-off it makes between temporal and spatial resolution, for example to prefer high spatial image quality but low frame rate. Support for TSTR requests and notifications is OPTIONAL.

5.1.6. Temporary Maximum Media Stream Bit Rate Request (TMMBR)

The TMMBR feedback message is defined in Sections 3.5.4 and 4.2.1 of the Codec Control Messages [RFC5104]. This request and its notification message are used by a media receiver to inform the sending party that there is a current limitation on the amount of bandwidth available to this receiver. There can be various reasons for this: for example, an RTP mixer can use this message to limit the media rate of the sender being forwarded by the mixer (without doing media transcoding) to fit the bottlenecks existing towards the other session participants. WebRTC senders are REQUIRED to implement support for TMMBR messages, and MUST follow bandwidth limitations set by a TMMBR message received for their SSRC. The sending of TMMBR requests is OPTIONAL.

5.2. Header Extensions

The RTP specification [RFC3550] provides the capability to include RTP header extensions containing in-band data, but the format and semantics of the extensions are poorly specified. The use of header extensions is OPTIONAL in the WebRTC context, but if they are used, they MUST be formatted and signalled following the general mechanism for RTP header extensions defined in [RFC5285], since this gives well-defined semantics to RTP header extensions.

As noted in [RFC5285], the requirement from the RTP specification that header extensions are "designed so that the header extension may be ignored" [RFC3550] stands.
To be specific, header extensions MUST only be used for data that can safely be ignored by the recipient without affecting interoperability, and MUST NOT be used when the presence of the extension has changed the form or nature of the rest of the packet in a way that is not compatible with the way the stream is signalled (e.g., as defined by the payload type). Valid examples of RTP header extensions might include metadata that is additional to the usual RTP information, but that can safely be ignored without compromising interoperability.

5.2.1. Rapid Synchronisation

Many RTP sessions require synchronisation between audio, video, and other content. This synchronisation is performed by receivers, using information contained in RTCP SR packets, as described in the RTP specification [RFC3550]. This basic mechanism can be slow, however, so it is RECOMMENDED that the rapid RTP synchronisation extensions described in [RFC6051] be implemented in addition to RTCP SR-based synchronisation. The rapid synchronisation extensions use the general RTP header extension mechanism [RFC5285], which requires signalling, but are otherwise backwards compatible.

5.2.2. Client-to-Mixer Audio Level

The Client to Mixer Audio Level extension [RFC6464] is an RTP header extension used by an endpoint to inform a mixer about the level of audio activity in the packet to which the header is attached. This enables an RTP middlebox to make mixing or selection decisions without decoding or detailed inspection of the payload, reducing the complexity in some types of mixers. It can also save decoding resources in receivers, which can choose to decode only the most relevant RTP packet streams based on audio activity levels.

The Client-to-Mixer Audio Level [RFC6464] header extension is RECOMMENDED to be implemented. If this header extension is implemented, it is REQUIRED that implementations are capable of encrypting the header extension according to [RFC6904], since the information contained in these header extensions can be considered sensitive. The use of this encryption is RECOMMENDED; however, usage of the encryption can be explicitly disabled through the API or signalling.

5.2.3. Mixer-to-Client Audio Level

The Mixer to Client Audio Level header extension [RFC6465] provides an endpoint with the audio level of the different sources mixed into a common source stream by an RTP mixer. This enables a user interface to indicate the relative activity level of each session participant, rather than just being included or not based on the CSRC field. This is a pure optimisation of non-critical functions, and is hence OPTIONAL to implement. If this header extension is implemented, it is REQUIRED that implementations are capable of encrypting the header extension according to [RFC6904], since the information contained in these header extensions can be considered sensitive. It is further RECOMMENDED that this encryption is used, unless the encryption has been explicitly disabled through the API or signalling.

6. WebRTC Use of RTP: Improving Transport Robustness

There are tools that can make RTP packet streams robust against packet loss and reduce the impact of loss on media quality. However, they generally add some overhead compared to a non-robust stream.
The overhead needs to be considered, and the aggregate bit-rate MUST be rate controlled to avoid causing network congestion (see Section 7). As a result, improving robustness might require a lower base encoding quality, but has the potential to deliver that quality with fewer errors. The mechanisms described in the following sub-sections can be used to improve tolerance to packet loss.

6.1. Negative Acknowledgements and RTP Retransmission

As a consequence of supporting the RTP/SAVPF profile, implementations can send negative acknowledgements (NACKs) for RTP data packets [RFC4585]. This feedback can be used to inform a sender of the loss of particular RTP packets, subject to the capacity limitations of the RTCP feedback channel. A sender can use this information to optimise the user experience by adapting the media encoding to compensate for known lost packets.

RTP packet stream senders are REQUIRED to understand the Generic NACK message defined in Section 6.2.1 of [RFC4585], but MAY choose to ignore some or all of this feedback (following Section 4.2 of [RFC4585]). Receivers MAY send NACKs for missing RTP packets. Guidelines on when to send NACKs are provided in [RFC4585]. It is not expected that a receiver will send a NACK for every lost RTP packet; rather, it needs to consider the cost of sending NACK feedback, and the importance of the lost packet, to make an informed decision on whether it is worth telling the sender about a packet loss event.

The RTP Retransmission Payload Format [RFC4588] offers the ability to retransmit lost packets based on NACK feedback. Retransmission needs to be used with care in interactive real-time applications to ensure that the retransmitted packet arrives in time to be useful, but can be effective in environments with relatively low network RTT (an RTP sender can estimate the RTT to the receivers using the information in RTCP SR and RR packets, as described at the end of Section 6.4.1 of [RFC3550]). The use of retransmissions can also increase the forward RTP bandwidth, and can potentially cause increased packet loss if the original packet loss was caused by network congestion. Note, however, that retransmission of an important lost packet to repair decoder state can have lower cost than sending a full intra frame. It is not appropriate to blindly retransmit RTP packets in response to a NACK. The importance of lost packets and the likelihood of them arriving in time to be useful need to be considered before RTP retransmission is used.

Receivers are REQUIRED to implement support for RTP retransmission packets [RFC4588]. Senders MAY send RTP retransmission packets in response to NACKs if the RTP retransmission payload format has been negotiated for the session, and if the sender believes it is useful to send a retransmission of the packet(s) referenced in the NACK. An RTP sender does not need to retransmit every NACKed packet.

6.2. Forward Error Correction (FEC)

The use of Forward Error Correction (FEC) can provide an effective protection against some degree of packet loss, at the cost of steady bandwidth overhead. There are several FEC schemes that are defined for use with RTP. Some of these schemes are specific to a particular RTP payload format, while others operate across RTP packets and can be used with any payload format.
It needs to be noted that using redundant encoding or FEC will lead to increased play-out delay, which needs to be considered when choosing the redundancy or FEC formats and their respective parameters.

If an RTP payload format negotiated for use in an RTCPeerConnection supports redundant transmission or FEC as a standard feature of that payload format, then that support MAY be used in the RTCPeerConnection, subject to any appropriate signalling.

There are several block-based FEC schemes that are designed for use with RTP independent of the chosen RTP payload format. At the time of this writing there is no consensus on which, if any, of these FEC schemes is appropriate for use in the WebRTC context. Accordingly, this memo makes no recommendation on the choice of block-based FEC for WebRTC use.

7. WebRTC Use of RTP: Rate Control and Media Adaptation

WebRTC will be used in heterogeneous network environments using a variety of link technologies, including both wired and wireless links, to interconnect potentially large groups of users around the world. As a result, the network paths between users can have widely varying one-way delays, available bit-rates, load levels, and traffic mixtures. Individual end-points can send one or more RTP packet streams to each participant in a WebRTC conference, and there can be several participants. Each of these RTP packet streams can contain different types of media, and the type of media, bit rate, and number of RTP packet streams as well as transport-layer flows can be highly asymmetric. Non-RTP traffic can share the network paths with RTP transport-layer flows. Since the network environment is not predictable or stable, WebRTC end-points MUST ensure that the RTP traffic they generate can adapt to match changes in the available network capacity.

The quality of experience for users of WebRTC implementations is very dependent on effective adaptation of the media to the limitations of the network. End-points have to be designed so they do not transmit significantly more data than the network path can support, except for very short time periods; otherwise, high levels of network packet loss or delay spikes will occur, causing media quality degradation. The limiting factor on the capacity of the network path might be the link bandwidth, or it might be competition with other traffic on the link (this can be non-WebRTC traffic, traffic due to other WebRTC flows, or even competition with other WebRTC flows in the same session).

An effective media congestion control algorithm is therefore an essential part of the WebRTC framework. However, at the time of this writing, there is no standard congestion control algorithm that can be used for interactive media applications such as WebRTC's flows. Some requirements for congestion control algorithms for RTCPeerConnections are discussed in [I-D.ietf-rmcat-cc-requirements]. A future version of this memo will mandate the use of a congestion control algorithm that satisfies these requirements.

7.1. Boundary Conditions and Circuit Breakers

WebRTC implementations MUST implement the RTP circuit breaker algorithm that is described in [I-D.ietf-avtcore-rtp-circuit-breakers]. The RTP circuit breaker is designed to enable applications to recognise and react to situations of extreme network congestion.
However, since the RTP circuit breaker might not be triggered until congestion becomes extreme, it cannot be considered a substitute for congestion control, and applications MUST also implement congestion control to allow them to adapt to changes in network capacity. Any future RTP congestion control algorithms are expected to operate within the envelope allowed by the circuit breaker.

The session establishment signalling will also necessarily establish boundaries to which the media bit-rate will conform. The choice of media codecs provides upper- and lower-bounds on the supported bit-rates that the application can utilise to provide useful quality, and on the packetisation choices that exist. In addition, the signalling channel can establish maximum media bit-rate boundaries using, for example, the SDP "b=AS:" or "b=CT:" lines and the RTP/AVPF Temporary Maximum Media Stream Bit Rate (TMMBR) Requests (see Section 5.1.6 of this memo). Signalled bandwidth limitations, such as SDP "b=AS:" or "b=CT:" lines received from the peer, MUST be followed when sending RTP packet streams. A WebRTC endpoint receiving media SHOULD signal its bandwidth limitations. These limitations have to be based on known bandwidth limitations, for example the capacity of the edge links.

7.2. Congestion Control Interoperability and Legacy Systems

There are legacy RTP implementations that do not implement RTCP, and hence do not provide any congestion feedback. Congestion control cannot be performed with these end-points. WebRTC implementations that need to interwork with such end-points MUST limit their transmission to a low rate, equivalent to a VoIP call using a low-bandwidth codec, that is unlikely to cause any significant congestion.

When interworking with legacy implementations that support RTCP using the RTP/AVP profile [RFC3551], congestion feedback is provided in RTCP RR packets every few seconds. Implementations that have to interwork with such end-points MUST ensure that they keep within the RTP circuit breaker [I-D.ietf-avtcore-rtp-circuit-breakers] constraints to limit the congestion they can cause.

If a legacy end-point supports RTP/AVPF, this enables negotiation of important parameters for frequent reporting, such as the "trr-int" parameter, and the possibility that the end-point supports some useful feedback format for congestion control purposes, such as TMMBR [RFC5104]. Implementations that have to interwork with such end-points MUST ensure that they stay within the RTP circuit breaker [I-D.ietf-avtcore-rtp-circuit-breakers] constraints to limit the congestion they can cause, but might find that they can achieve better congestion response depending on the amount of feedback that is available.

With proprietary congestion control algorithms, issues can arise when different algorithms and implementations interact in a communication session. If the different implementations have made different choices with regard to the type of adaptation, for example one sender-based and one receiver-based, then one could end up in a situation where one direction is dual-controlled while the other direction is not controlled.
This memo cannot mandate behaviour for proprietary 993 congestion control algorithms, but implementations that use such 994 algorithms ought to be aware of this issue, and try to ensure that 995 effective congestion control is negotiated for media flowing in both 996 directions. If the IETF were to standardise both sender- and 997 receiver-based congestion control algorithms for WebRTC traffic in 998 the future, the issues of interoperability, control, and ensuring 999 that both directions of media flow are congestion controlled would 1000 also need to be considered. 1002 8. WebRTC Use of RTP: Performance Monitoring 1004 As described in Section 4.1, implementations are REQUIRED to generate 1005 RTCP Sender Report (SR) and Reception Report (RR) packets relating to 1006 the RTP packet streams they send and receive. These RTCP reports can 1007 be used for performance monitoring purposes, since they include basic 1008 packet loss and jitter statistics. 1010 A large number of additional performance metrics are supported by the 1011 RTCP Extended Reports (XR) framework [RFC3611][RFC6792]. At the time 1012 of this writing, it is not clear what extended metrics are suitable 1013 for use in the WebRTC context, so there is no requirement that 1014 implementations generate RTCP XR packets. However, implementations 1015 that can use detailed performance monitoring data MAY generate RTCP 1016 XR packets as appropriate; the use of such packets SHOULD be 1017 signalled in advance. 1019 9. WebRTC Use of RTP: Future Extensions 1021 It is possible that the core set of RTP protocols and RTP extensions 1022 specified in this memo will prove insufficient for the future needs 1023 of WebRTC applications. In this case, future updates to this memo 1024 MUST be made following the Guidelines for Writers of RTP Payload 1025 Format Specifications [RFC2736], How to Write an RTP Payload Format 1026 [I-D.ietf-payload-rtp-howto] and Guidelines for Extending the RTP 1027 Control Protocol [RFC5968], and SHOULD take into account any future 1028 guidelines for extending RTP and related protocols that have been 1029 developed. 1031 Authors of future extensions are urged to consider the wide range of 1032 environments in which RTP is used when recommending extensions, since 1033 extensions that are applicable in some scenarios can be problematic 1034 in others. Where possible, the WebRTC framework will adopt RTP 1035 extensions that are of general utility, to enable easy implementation 1036 of a gateway to other applications using RTP, rather than adopt 1037 mechanisms that are narrowly targeted at specific WebRTC use cases. 1039 10. Signalling Considerations 1041 RTP is built with the assumption that an external signalling channel 1042 exists, and can be used to configure RTP sessions and their features. 1043 The basic configuration of an RTP session consists of the following 1044 parameters: 1046 RTP Profile: The name of the RTP profile to be used in session. The 1047 RTP/AVP [RFC3551] and RTP/AVPF [RFC4585] profiles can interoperate 1048 on basic level, as can their secure variants RTP/SAVP [RFC3711] 1049 and RTP/SAVPF [RFC5124]. The secure variants of the profiles do 1050 not directly interoperate with the non-secure variants, due to the 1051 presence of additional header fields for authentication in SRTP 1052 packets and cryptographic transformation of the payload. WebRTC 1053 requires the use of the RTP/SAVPF profile, and this MUST be 1054 signalled. 
Interworking functions might transform this into the 1055 RTP/SAVP profile for a legacy use case, by indicating to the 1056 WebRTC end-point that RTP/SAVPF is used and configuring a trr- 1057 int value of 4 seconds.

1059 Transport Information:  Source and destination IP address(es) and 1060 ports for RTP and RTCP MUST be signalled for each RTP session.  In 1061 WebRTC, these transport addresses will be provided by ICE [RFC5245] 1062 that signals candidates and arrives at nominated candidate address 1063 pairs.  If RTP and RTCP multiplexing [RFC5761] is to be used, such 1064 that a single port, i.e., a single transport-layer flow, is used for RTP and 1065 RTCP flows, this MUST be signalled (see Section 4.5).

1067 RTP Payload Types, media formats, and format parameters:  The mapping 1068 between media type names (and hence the RTP payload formats to be 1069 used), and the RTP payload type numbers MUST be signalled.  Each 1070 media type MAY also have a number of media type parameters that 1071 MUST also be signalled to configure the codec and RTP payload 1072 format (the "a=fmtp:" line from SDP).  Section 4.3 of this memo 1073 discusses requirements for uniqueness of payload types.

1075 RTP Extensions:  The use of any additional RTP header extensions and 1076 RTCP packet types, including any necessary parameters, SHOULD be 1077 signalled.  For robustness, and for compatibility with non-WebRTC 1078 systems that might be connected to a WebRTC session via a gateway, 1079 implementations are required to ignore unknown RTCP packets and 1080 RTP header extensions (see Section 4.1).

1082 RTCP Bandwidth:  Support for exchanging RTCP bandwidth values between the 1083 end-points will be necessary.  This SHALL be done as described in 1084 "Session Description Protocol (SDP) Bandwidth Modifiers for RTP 1085 Control Protocol (RTCP) Bandwidth" [RFC3556] if using SDP, or 1086 something semantically equivalent.  This also ensures that the 1087 end-points have a common view of the RTCP bandwidth.  A common view of the 1088 RTCP bandwidth is important, as too different views of the 1089 bandwidth can lead to failure to interoperate.

1091 These parameters are often expressed in SDP messages conveyed within 1092 an offer/answer exchange.  RTP does not depend on SDP or on the offer 1093 /answer model, but does require all the necessary parameters to be 1094 agreed upon, and provided to the RTP implementation.  Note that in 1095 the WebRTC context it will depend on the signalling model and API how 1096 these parameters need to be configured, but they will need to 1097 either be set via the API or explicitly signalled between the peers.

1099 11.  WebRTC API Considerations

1101 The WebRTC API [W3C.WD-webrtc-20130910] and the Media Capture and 1102 Streams API [W3C.WD-mediacapture-streams-20130903] define and use 1103 the concept of a MediaStream that consists of zero or more 1104 MediaStreamTracks.  A MediaStreamTrack is an individual stream of 1105 media from any type of media source, such as a microphone or a camera, 1106 but conceptual sources, like an audio mix or a video composition, 1107 are also possible.  It needs to be possible to play out the 1108 MediaStreamTracks within a MediaStream in a synchronised way.

1110 A MediaStreamTrack's realisation in RTP in the context of an 1111 RTCPeerConnection consists of a source packet stream identified with 1112 an SSRC within an RTP session that is part of the RTCPeerConnection.  The 1113 MediaStreamTrack can also result in additional packet streams, and 1114 thus SSRCs, in the same RTP session.
These can be dependent packet 1115 streams from scalable encoding of the source stream associated with 1116 the MediaStreamTrack, if such a media encoder is used.  They can also 1117 be redundancy packet streams; these are created when applying Forward 1118 Error Correction (Section 6.2) or RTP retransmission (Section 6.1) to 1119 the source packet stream.

1121 It is important to note that the same media source can be feeding 1122 multiple MediaStreamTracks.  As different sets of constraints or 1123 other parameters can be applied to the MediaStreamTrack, each 1124 MediaStreamTrack instance added to a RTCPeerConnection SHALL result 1125 in an independent source packet stream, with its own set of 1126 associated packet streams, and thus different SSRC(s).  It will 1127 depend on the applied constraints and parameters whether the source stream and 1128 the encoding configuration will be identical between different 1129 MediaStreamTracks sharing the same media source.  If the encoding 1130 parameters and constraints are the same, an implementation could 1131 choose to use only one encoded stream to create the different RTP 1132 packet streams.  Note that such optimisations would need to take into 1133 account that the constraints for one of the MediaStreamTracks can at 1134 any moment change, meaning that the encoding configurations might no 1135 longer be identical and two different encoder instances would then be 1136 needed.

1138 The same MediaStreamTrack can also be included in multiple 1139 MediaStreams, thus multiple sets of MediaStreams can implicitly need 1140 to use the same synchronisation base.  To ensure that this works in 1141 all cases, and does not force an end-point to disrupt the media by 1142 changing synchronisation base and CNAME during delivery of any 1143 ongoing packet streams, all MediaStreamTracks and their associated 1144 SSRCs originating from the same end-point need to be sent using the 1145 same CNAME within one RTCPeerConnection.  This motivates the 1146 strong recommendation in Section 4.9 to only use a single CNAME.

1148 The requirement on using the same CNAME for all SSRCs that 1149 originate from the same end-point does not require a middlebox 1150 that forwards traffic from multiple end-points to only use a 1151 single CNAME.

1153 Different CNAMEs normally need to be used for different 1154 RTCPeerConnection instances, as specified in Section 4.9.  Having two 1155 communication sessions with the same CNAME could enable tracking of a 1156 user or device across different services (see Section 4.4.1 of 1157 [I-D.ietf-rtcweb-security] for details).  A web application can 1158 request that the CNAMEs used in different RTCPeerConnections (within 1159 a same-origin context) be the same; this allows for synchronisation of 1160 the end-point's RTP packet streams across the different 1161 RTCPeerConnections.

1163 Note: this doesn't result in a tracking issue, since the creation 1164 of matching CNAMEs depends on existing tracking.

1166 The above will currently force a WebRTC end-point that receives a 1167 MediaStreamTrack on one RTCPeerConnection and adds it as an outgoing 1168 track on another RTCPeerConnection to perform resynchronisation of the stream. 1169 This is because the sending party needs to change the CNAME to the one it 1170 uses, which implies that the sender has to use a local system clock 1171 as the timebase for the synchronisation.  Thus, the relative relation 1172 between the timebase of the incoming stream and the system sending 1173 out needs to be defined.
This relation also needs monitoring for clock 1174 drift and likely adjustments of the synchronisation.  The sending 1175 entity is also responsible for congestion control for its sent 1176 streams.  In cases of packet loss, the loss of incoming data also 1177 needs to be handled.  This leads to the observation that the method 1178 that is least likely to cause issues or interruptions in the outgoing 1179 source packet stream is a model of full decoding, including repair 1180 etc., followed by encoding of the media again into the outgoing 1181 packet stream.  Optimisations of this method are clearly possible and 1182 implementation specific.

1184 A WebRTC end-point MUST support receiving multiple MediaStreamTracks, 1185 where the different MediaStreamTracks (and their sets of 1186 associated packet streams) use different CNAMEs.  However, 1187 MediaStreamTracks that are received with different CNAMEs have no 1188 defined synchronisation.

1190 Note: The motivation for supporting reception of multiple CNAMEs 1191 is to allow for forward compatibility with any future changes that 1192 enable more efficient stream handling when end-points relay/ 1193 forward streams.  It also ensures that end-points can interoperate 1194 with certain types of multi-stream middleboxes or end-points that 1195 are not WebRTC.

1197 The binding between the WebRTC MediaStreams, MediaStreamTracks and 1198 the SSRC is done as specified in "Cross Session Stream Identification 1199 in the Session Description Protocol" [I-D.ietf-mmusic-msid].  This 1200 document [I-D.ietf-mmusic-msid] also defines, in section 4.1, how to 1201 map unknown source packet stream SSRCs to MediaStreamTracks and 1202 MediaStreams.  The latter is relevant to handle some cases of legacy 1203 interoperability.  Commonly, the RTP payload type of any incoming packets will 1204 reveal whether the packet stream is a source stream or a redundancy or 1205 dependent packet stream.  The association to the correct source 1206 packet stream depends on the payload format in use for the packet 1207 stream.

1209 Finally, this specification puts a requirement on the WebRTC API to 1210 realise a method for determining the CSRC list (Section 4.1) as well 1211 as the mixer-to-client audio levels (Section 5.2.3) (when supported); 1212 the basic requirements for this are further discussed in 1213 Section 12.2.1.

1215 12.  RTP Implementation Considerations

1217 The following discussion provides some guidance on the implementation 1218 of the RTP features described in this memo.  The focus is on a WebRTC 1219 end-point implementation perspective, and while some mention is made 1220 of the behaviour of middleboxes, that is not the focus of this memo.

1222 12.1.  Configuration and Use of RTP Sessions

1224 A WebRTC end-point will be a simultaneous participant in one or more 1225 RTP sessions.  Each RTP session can convey multiple media sources, 1226 and can include media data from multiple end-points.  In the 1227 following, some ways in which WebRTC end-points can configure and use 1228 RTP sessions are outlined.

1230 12.1.1.  Use of Multiple Media Sources Within an RTP Session

1232 RTP is a group communication protocol, and every RTP session can 1233 potentially contain multiple RTP packet streams.
There are several 1234 reasons why this might be desirable: 1236 Multiple media types: Outside of WebRTC, it is common to use one RTP 1237 session for each type of media sources (e.g., one RTP session for 1238 audio sources and one for video sources, each sent over different 1239 transport layer flows). However, to reduce the number of UDP 1240 ports used, the default in WebRTC is to send all types of media in 1241 a single RTP session, as described in Section 4.4, using RTP and 1242 RTCP multiplexing (Section 4.5) to further reduce the number of 1243 UDP ports needed. This RTP session then uses only one bi- 1244 directional transport-layer flow, but will contain multiple RTP 1245 packet streams, each containing a different type of media. A 1246 common example might be an end-point with a camera and microphone 1247 that sends two RTP packet streams, one video and one audio, into a 1248 single RTP session. 1250 Multiple Capture Devices: A WebRTC end-point might have multiple 1251 cameras, microphones, or other media capture devices, and so might 1252 want to generate several RTP packet streams of the same media 1253 type. Alternatively, it might want to send media from a single 1254 capture device in several different formats or quality settings at 1255 once. Both can result in a single end-point sending multiple RTP 1256 packet streams of the same media type into a single RTP session at 1257 the same time. 1259 Associated Repair Data: An end-point might send a RTP packet stream 1260 that is somehow associated with another stream. For example, it 1261 might send an RTP packet stream that contains FEC or 1262 retransmission data relating to another stream. Some RTP payload 1263 formats send this sort of associated repair data as part of the 1264 source packet stream, while others send it as a separate packet 1265 stream. 1267 Layered or Multiple Description Coding: An end-point can use a 1268 layered media codec, for example H.264 SVC, or a multiple 1269 description codec, that generates multiple RTP packet streams, 1270 each with a distinct RTP SSRC, within a single RTP session. 1272 RTP Mixers, Translators, and Other Middleboxes: An RTP session, in 1273 the WebRTC context, is a point-to-point association between an 1274 end-point and some other peer device, where those devices share a 1275 common SSRC space. The peer device might be another WebRTC end- 1276 point, or it might be an RTP mixer, translator, or some other form 1277 of media processing middlebox. In the latter cases, the middlebox 1278 might send mixed or relayed RTP streams from several participants, 1279 that the WebRTC end-point will need to render. Thus, even though 1280 a WebRTC end-point might only be a member of a single RTP session, 1281 the peer device might be extending that RTP session to incorporate 1282 other end-points. WebRTC is a group communication environment and 1283 end-points need to be capable of receiving, decoding, and playing 1284 out multiple RTP packet streams at once, even in a single RTP 1285 session. 1287 12.1.2. Use of Multiple RTP Sessions 1289 In addition to sending and receiving multiple RTP packet streams 1290 within a single RTP session, a WebRTC end-point might participate in 1291 multiple RTP sessions. 
There are several reasons why a WebRTC end- 1292 point might choose to do this: 1294 To interoperate with legacy devices: The common practice in the non- 1295 WebRTC world is to send different types of media in separate RTP 1296 sessions, for example using one RTP session for audio and another 1297 RTP session, on a separate transport layer flow, for video. All 1298 WebRTC end-points need to support the option of sending different 1299 types of media on different RTP sessions, so they can interwork 1300 with such legacy devices. This is discussed further in 1301 Section 4.4. 1303 To provide enhanced quality of service: Some network-based quality 1304 of service mechanisms operate on the granularity of transport 1305 layer flows. If it is desired to use these mechanisms to provide 1306 differentiated quality of service for some RTP packet streams, 1307 then those RTP packet streams need to be sent in a separate RTP 1308 session using a different transport-layer flow, and with 1309 appropriate quality of service marking. This is discussed further 1310 in Section 12.1.3. 1312 To separate media with different purposes: An end-point might want 1313 to send RTP packet streams that have different purposes on 1314 different RTP sessions, to make it easy for the peer device to 1315 distinguish them. For example, some centralised multiparty 1316 conferencing systems display the active speaker in high 1317 resolution, but show low resolution "thumbnails" of other 1318 participants. Such systems might configure the end-points to send 1319 simulcast high- and low-resolution versions of their video using 1320 separate RTP sessions, to simplify the operation of the RTP 1321 middlebox. In the WebRTC context this is currently possible by 1322 establishing multiple WebRTC MediaStreamTracks that have the same 1323 media source in one (or more) RTCPeerConnection. Each 1324 MediaStreamTrack is then configured to deliver a particular media 1325 quality and thus media bit-rate, and will produce an independently 1326 encoded version with the codec parameters agreed specifically in 1327 the context of that RTCPeerConnection. The RTP middlebox can 1328 distinguish packets corresponding to the low- and high-resolution 1329 streams by inspecting their SSRC, RTP payload type, or some other 1330 information contained in RTP payload, RTP header extension or RTCP 1331 packets, but it can be easier to distinguish the RTP packet 1332 streams if they arrive on separate RTP sessions on separate 1333 transport-layer flows. 1335 To directly connect with multiple peers: A multi-party conference 1336 does not need to use an RTP middlebox. Rather, a multi-unicast 1337 mesh can be created, comprising several distinct RTP sessions, 1338 with each participant sending RTP traffic over a separate RTP 1339 session (that is, using an independent RTCPeerConnection object) 1340 to every other participant, as shown in Figure 1. This topology 1341 has the benefit of not requiring an RTP middlebox node that is 1342 trusted to access and manipulate the media data. The downside is 1343 that it increases the used bandwidth at each sender by requiring 1344 one copy of the RTP packet streams for each participant that are 1345 part of the same session beyond the sender itself. 
1347                     +---+     +---+
1348                     | A |<--->| B |
1349                     +---+     +---+
1350                       ^         ^
1351                        \       /
1352                         \     /
1353                          v   v
1354                         +---+
1355                         | C |
1356                         +---+

1358          Figure 1: Multi-unicast using several RTP sessions

1360 The multi-unicast topology could also be implemented as a single 1361 RTP session, spanning multiple peer-to-peer transport layer 1362 connections, or as several pairwise RTP sessions, one between each 1363 pair of peers.  To maintain a coherent mapping between the 1364 relationship of RTP sessions and RTCPeerConnection objects, it is 1365 recommended that this be implemented as several individual RTP 1366 sessions.  The only downside is that end-point A will not learn of 1367 the quality of any transmission happening between B and C, since 1368 it will not see RTCP reports for the RTP session between B and C, 1369 whereas it would if all three participants were part of a single 1370 RTP session.  Experience with the Mbone tools (experimental RTP- 1371 based multicast conferencing tools from the late 1990s) has shown 1372 that RTCP reception quality reports for third parties can be 1373 presented to users in a way that helps them understand asymmetric 1374 network problems, and the approach of using separate RTP sessions 1375 prevents this.  However, an advantage of using separate RTP 1376 sessions is that it enables using different media bit-rates and 1377 RTP session configurations between the different peers, thus not 1378 forcing B to endure the same quality reductions as C will if there are 1379 limitations in the transport from A to C.  It is 1380 believed that these advantages outweigh the limitations in 1381 debugging power.

1383 To indirectly connect with multiple peers:  A common scenario in 1384 multi-party conferencing is to create indirect connections to 1385 multiple peers, using an RTP mixer, translator, or some other type 1386 of RTP middlebox.  Figure 2 outlines a simple topology that might 1387 be used in a four-person centralised conference.  The middlebox 1388 acts to optimise the transmission of RTP packet streams from 1389 certain perspectives, either by only sending some of the received 1390 RTP packet streams to any given receiver, or by providing a 1391 combined RTP packet stream out of a set of contributing streams.

1393       +---+      +-------------+      +---+
1394       | A |<---->|             |<---->| B |
1395       +---+      | RTP mixer,  |      +---+
1396                  | translator, |
1397                  | or other    |
1398       +---+      | middlebox   |      +---+
1399       | C |<---->|             |<---->| D |
1400       +---+      +-------------+      +---+

1402          Figure 2: RTP mixer with only unicast paths

1404 There are various methods of implementation for the middlebox.  If 1405 implemented as a standard RTP mixer or translator, a single RTP 1406 session will extend across the middlebox and encompass all the 1407 end-points in one multi-party session.  Other types of middlebox 1408 might use separate RTP sessions between each end-point and the 1409 middlebox.  A common aspect is that these RTP middleboxes can use 1410 a number of tools to control the media encoding provided by a 1411 WebRTC end-point.  This includes functions like requesting the 1412 breaking of the encoding chain and having the encoder produce a so- 1413 called intra frame.  Another is limiting the bit-rate of a given 1414 stream to better suit the mixer's view of the multiple down-streams. 1415 Others are controlling the most suitable frame-rate, picture 1416 resolution, and the trade-off between frame-rate and spatial quality.
1417 The middlebox has the responsibility to correctly perform 1418 congestion control, source identification, and synchronisation 1419 management, while providing the application with suitable media optimisations. 1420 The middlebox also has to be a trusted node when it comes to 1421 security, since it manipulates either the RTP header or the media 1422 itself (or both) received from one end-point, before sending it on 1423 towards the end-point(s); thus it needs to be able to decrypt and 1424 then re-encrypt the RTP packet stream before sending it out.

1426 RTP mixers can create a situation where an end-point experiences 1427 something in-between a session with only two end-points and 1428 multiple RTP sessions.  Mixers are expected to not forward RTCP 1429 reports regarding RTP packet streams across themselves.  This is 1430 due to the difference in the RTP packet streams provided to the 1431 different end-points.  The original media source lacks information 1432 about a mixer's manipulations prior to sending it to the different 1433 receivers.  This scenario also means that an end-point's 1434 feedback or requests go to the mixer.  When the mixer can't act 1435 on this by itself, it is forced to go to the original media source 1436 to fulfil the receiver's request.  This will not necessarily be 1437 explicitly visible in any RTP and RTCP traffic, but the interactions 1438 and the time to complete them will indicate such dependencies.

1440 Providing source authentication in multi-party scenarios is a 1441 challenge.  In the mixer-based topologies, an end-point's source 1442 authentication is based on, firstly, verifying that media comes 1443 from the mixer by cryptographic verification and, secondly, trust 1444 in the mixer to correctly identify any source towards the end- 1445 point.  In RTP sessions where multiple end-points are directly 1446 visible to an end-point, all end-points will have knowledge about 1447 each other's master keys, and can thus inject packets claimed to 1448 come from another end-point in the session.  Any node performing 1449 relay can perform non-cryptographic mitigation by preventing 1450 forwarding of packets that have SSRC fields that previously came from other 1451 end-points.  For cryptographic verification of the source, 1452 SRTP would require additional security mechanisms, for example 1453 TESLA for SRTP [RFC4383], that are not part of the base WebRTC 1454 standards.

1456 To forward media between multiple peers:  It is sometimes desirable 1457 for an end-point that receives an RTP packet stream to be able to 1458 forward that RTP packet stream to a third party.  There are some 1459 obvious security and privacy implications in supporting this, but 1460 also potential uses.  This is supported in the W3C API by taking 1461 the received and decoded media and using it as a media source that 1462 is re-encoded and transmitted as a new stream.

1464 At the RTP layer, media forwarding acts as a back-to-back RTP 1465 receiver and RTP sender.  The receiving side terminates the RTP 1466 session and decodes the media, while the sender side re-encodes 1467 and transmits the media using an entirely separate RTP session. 1468 The original sender will only see a single receiver of the media, 1469 and will not be able to tell that forwarding is happening based on 1470 RTP-layer information since the RTP session that is used to send 1471 the forwarded media is not connected to the RTP session on which 1472 the media was received by the node doing the forwarding.
1474 The end-point that is performing the forwarding is responsible for 1475 producing an RTP packet stream suitable for onwards transmission. 1476 The outgoing RTP session that is used to send the forwarded media 1477 is entirely separate from the RTP session on which the media was 1478 received.  This will require media transcoding for congestion 1479 control purposes to produce a suitable bit-rate for the outgoing 1480 RTP session, reducing media quality and forcing the forwarding 1481 end-point to spend resources on the transcoding.  The media 1482 transcoding does result in a separation of the two different legs, 1483 removing almost all dependencies and allowing the forwarding end- 1484 point to optimise its media transcoding operation.  The cost is 1485 greatly increased computational complexity on the forwarding node. 1486 Receivers of the forwarded stream will see the forwarding device 1487 as the sender of the stream, and will not be able to tell from the 1488 RTP layer that they are receiving a forwarded stream rather than 1489 an entirely new RTP packet stream generated by the forwarding 1490 device.

1492 12.1.3.  Differentiated Treatment of RTP Packet Streams

1494 There are use cases for differentiated treatment of RTP packet 1495 streams.  Such differentiation can happen at several places in the 1496 system.  First of all is the prioritization within the end-point 1497 sending the media, which controls both which RTP packet streams 1498 will be sent and their allocation of bit-rate out of the currently 1499 available aggregate, as determined by the congestion control.

1501 It is expected that the WebRTC API [W3C.WD-webrtc-20130910] will 1502 allow the application to indicate relative priorities for different 1503 MediaStreamTracks.  These priorities can then be used to influence 1504 the local RTP processing, especially when it comes to congestion 1505 control response in how to divide the available bandwidth between the 1506 RTP packet streams.  Any changes in relative priority will also need 1507 to be considered for RTP packet streams that are associated with the 1508 main RTP packet streams, such as redundant streams for RTP 1509 retransmission and FEC.  The importance of such redundant RTP packet 1510 streams is dependent on the media type and codec used, with regard to 1511 how robust that codec is to packet loss.  However, a default policy 1512 might be to use the same priority for a redundant RTP packet stream 1513 as for the source RTP packet stream.

1515 Secondly, the network can prioritize transport-layer flows and sub- 1516 flows, including RTP packet streams.  Typically, differential 1517 treatment includes two steps, the first being identifying whether an 1518 IP packet belongs to a class that has to be treated differently, the 1519 second consisting of the actual mechanism to prioritize packets. 1520 This is done according to three methods:

1522 DiffServ:  The end-point marks a packet with a DiffServ code point to 1523 indicate to the network that the packet belongs to a particular 1524 class.

1526 Flow based:  Packets that need to be given a particular treatment are 1527 identified using a combination of IP addresses and port numbers.

1529 Deep Packet Inspection:  A network classifier (DPI) inspects the 1530 packet and tries to determine if the packet represents a 1531 particular application and type that is to be prioritized.
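As a non-normative illustration of the end-point prioritization discussed above, the following sketch shows how an application could express per-sender priorities using the RTCRtpSender.setParameters() interface from the W3C WebRTC 1.0 API.  That interface postdates this draft; the function name prioritiseAudioOverVideo and the exact parameter values shown are illustrative assumptions rather than requirements of this memo.  Whether a browser maps such hints only onto its local division of the congestion-controlled bit-rate budget, or also onto DSCP markings as discussed in [I-D.ietf-tsvwg-rtcweb-qos], is implementation dependent.

   // Non-normative sketch (TypeScript): hint that audio senders are
   // more important than video senders on a given RTCPeerConnection.
   async function prioritiseAudioOverVideo(
       pc: RTCPeerConnection): Promise<void> {
     for (const sender of pc.getSenders()) {
       if (!sender.track) {
         continue;  // No track attached to this sender at the moment.
       }
       const kind = sender.track.kind;  // "audio" or "video"
       const params = sender.getParameters();
       for (const encoding of params.encodings) {
         // "high" and "low" are RTCPriorityType values; the default
         // priority for an encoding is "low".
         encoding.priority = kind === "audio" ? "high" : "low";
       }
       await sender.setParameters(params);
     }
   }

Such a hint does not by itself place the RTP packet streams on separate transport-layer flows, which is why the network-level mechanisms listed above remain relevant when flow-based differentiation or DiffServ marking is desired.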
1533 Flow-based differentiation will provide the same treatment to all 1534 packets within a transport-layer flow, i.e., relative prioritization 1535 is not possible.  Moreover, if the resources are limited, it might not 1536 be possible to provide differential treatment compared to best-effort 1537 for all the RTP packet streams in a WebRTC application.  When flow- 1538 based differentiation is available, the WebRTC application needs to 1539 know about it so that it can separate the RTP packet 1540 streams onto different UDP flows to enable a more granular usage of 1541 flow-based differentiation.  That way, it is at least possible to provide different 1542 prioritization of audio and video, if desired by the application.

1544 DiffServ assumes that either the end-point or a classifier can mark 1545 the packets with an appropriate DSCP so that the packets are treated 1546 according to that marking.  If the end-point is to mark the traffic, 1547 two requirements arise in the WebRTC context: 1) The WebRTC 1548 application or browser has to know which DSCPs to use and that it can 1549 use them on some set of RTP packet streams.  2) The information needs 1550 to be propagated to the operating system when transmitting the 1551 packet.  Details of this process are outside the scope of this memo 1552 and are further discussed in "DSCP and other packet markings for 1553 RTCWeb QoS" [I-D.ietf-tsvwg-rtcweb-qos].

1555 For packet-based marking schemes, it might be possible to mark 1556 individual RTP packets differently based on the relative priority of 1557 the RTP payload.  For example, video codecs that have I, P, and B 1558 pictures could give lower priority to any payloads carrying only B frames, 1559 as these are less damaging to lose.  However, depending on the QoS 1560 mechanism and what markings are applied, this can result in not 1561 only different packet drop probabilities but also packet reordering; 1562 see [I-D.ietf-tsvwg-rtcweb-qos] for further discussion.  As a default 1563 policy, all RTP packets related to a RTP packet stream ought to be 1564 provided with the same prioritization; per-packet prioritization is 1565 outside the scope of this memo, but might be specified elsewhere in 1566 future.

1568 It is also important to consider how RTCP packets associated with a 1569 particular RTP packet stream need to be marked.  RTCP compound 1570 packets with Sender Reports (SR) ought to be marked with the same 1571 priority as the RTP packet stream itself, so the RTCP-based round- 1572 trip time (RTT) measurements are done using the same transport-layer 1573 flow priority as the RTP packet stream experiences.  RTCP compound 1574 packets containing RR packets ought to be sent with the priority used 1575 by the majority of the RTP packet streams reported on.  RTCP packets 1576 containing time-critical feedback packets can use higher priority to 1577 improve the timeliness and likelihood of delivery of such feedback.

1579 12.2.  Media Source, RTP Packet Streams, and Participant Identification

1581 12.2.1.  Media Source Identification

1583 Each RTP packet stream is identified by a unique synchronisation 1584 source (SSRC) identifier.  The SSRC identifier is carried in each of 1585 the RTP packets comprising a RTP packet stream, and is also used to 1586 identify that stream in the corresponding RTCP reports.  The SSRC is 1587 chosen as discussed in Section 4.8.
The first stage in 1588 demultiplexing RTP and RTCP packets received on a single transport 1589 layer flow at a WebRTC end-point is to separate the RTP packet 1590 streams based on their SSRC value; once that is done, additional 1591 demultiplexing steps can determine how and where to render the media. 1593 RTP allows a mixer, or other RTP-layer middlebox, to combine encoded 1594 streams from multiple media sources to form a new encoded stream from 1595 a new media source (the mixer). The RTP packets in that new RTP 1596 packet stream can include a Contributing Source (CSRC) list, 1597 indicating which original SSRCs contributed to the combined source 1598 stream. As described in Section 4.1, implementations need to support 1599 reception of RTP data packets containing a CSRC list and RTCP packets 1600 that relate to sources present in the CSRC list. The CSRC list can 1601 change on a packet-by-packet basis, depending on the mixing operation 1602 being performed. Knowledge of what media sources contributed to a 1603 particular RTP packet can be important if the user interface 1604 indicates which participants are active in the session. Changes in 1605 the CSRC list included in packets needs to be exposed to the WebRTC 1606 application using some API, if the application is to be able to track 1607 changes in session participation. It is desirable to map CSRC values 1608 back into WebRTC MediaStream identities as they cross this API, to 1609 avoid exposing the SSRC/CSRC name space to JavaScript applications. 1611 If the mixer-to-client audio level extension [RFC6465] is being used 1612 in the session (see Section 5.2.3), the information in the CSRC list 1613 is augmented by audio level information for each contributing source. 1614 It is desirable to expose this information to the WebRTC application 1615 using some API, after mapping the CSRC values to WebRTC MediaStream 1616 identities, so it can be exposed in the user interface. 1618 12.2.2. SSRC Collision Detection 1620 The RTP standard requires RTP implementations to have support for 1621 detecting and handling SSRC collisions, i.e., resolve the conflict 1622 when two different end-points use the same SSRC value (see section 1623 8.2 of [RFC3550]). This requirement also applies to WebRTC end- 1624 points. There are several scenarios where SSRC collisions can occur: 1626 o In a point-to-point session where each SSRC is associated with 1627 either of the two end-points and where the main media carrying 1628 SSRC identifier will be announced in the signalling channel, a 1629 collision is less likely to occur due to the information about 1630 used SSRCs. If SDP is used, this information is provided by 1631 Source-Specific SDP Attributes [RFC5576]. Still, collisions can 1632 occur if both end-points start using a new SSRC identifier prior 1633 to having signalled it to the peer and received acknowledgement on 1634 the signalling message. The Source-Specific SDP Attributes 1635 [RFC5576] contains a mechanism to signal how the end-point 1636 resolved the SSRC collision. 1638 o SSRC values that have not been signalled could also appear in an 1639 RTP session. This is more likely than it appears, since some RTP 1640 functions use extra SSRCs to provide their functionality. For 1641 example, retransmission data might be transmitted using a separate 1642 RTP packet stream that requires its own SSRC, separate to the SSRC 1643 of the source RTP packet stream [RFC4588]. 
In those cases, an 1644 end-point can create a new SSRC that strictly doesn't need to be 1645 announced over the signalling channel to function correctly on 1646 both RTP and RTCPeerConnection level. 1648 o Multiple end-points in a multiparty conference can create new 1649 sources and signal those towards the RTP middlebox. In cases 1650 where the SSRC/CSRC are propagated between the different end- 1651 points from the RTP middlebox collisions can occur. 1653 o An RTP middlebox could connect an end-point's RTCPeerConnection to 1654 another RTCPeerConnection from the same end-point, thus forming a 1655 loop where the end-point will receive its own traffic. While it 1656 is clearly considered a bug, it is important that the end-point is 1657 able to recognise and handle the case when it occurs. This case 1658 becomes even more problematic when media mixers, and so on, are 1659 involved, where the stream received is a different stream but 1660 still contains this client's input. 1662 These SSRC/CSRC collisions can only be handled on RTP level as long 1663 as the same RTP session is extended across multiple 1664 RTCPeerConnections by a RTP middlebox. To resolve the more generic 1665 case where multiple RTCPeerConnections are interconnected, 1666 identification of the media source(s) part of a MediaStreamTrack 1667 being propagated across multiple interconnected RTCPeerConnection 1668 needs to be preserved across these interconnections. 1670 12.2.3. Media Synchronisation Context 1672 When an end-point sends media from more than one media source, it 1673 needs to consider if (and which of) these media sources are to be 1674 synchronized. In RTP/RTCP, synchronisation is provided by having a 1675 set of RTP packet streams be indicated as coming from the same 1676 synchronisation context and logical end-point by using the same RTCP 1677 CNAME identifier. 1679 The next provision is that the internal clocks of all media sources, 1680 i.e., what drives the RTP timestamp, can be correlated to a system 1681 clock that is provided in RTCP Sender Reports encoded in an NTP 1682 format. By correlating all RTP timestamps to a common system clock 1683 for all sources, the timing relation of the different RTP packet 1684 streams, also across multiple RTP sessions can be derived at the 1685 receiver and, if desired, the streams can be synchronized. The 1686 requirement is for the media sender to provide the correlation 1687 information; it is up to the receiver to use it or not. 1689 13. Security Considerations 1691 The overall security architecture for WebRTC is described in 1692 [I-D.ietf-rtcweb-security-arch], and security considerations for the 1693 WebRTC framework are described in [I-D.ietf-rtcweb-security]. These 1694 considerations also apply to this memo. 1696 The security considerations of the RTP specification, the RTP/SAVPF 1697 profile, and the various RTP/RTCP extensions and RTP payload formats 1698 that form the complete protocol suite described in this memo apply. 1699 It is not believed there are any new security considerations 1700 resulting from the combination of these various protocol extensions. 1702 The Extended Secure RTP Profile for Real-time Transport Control 1703 Protocol (RTCP)-Based Feedback [RFC5124] (RTP/SAVPF) provides 1704 handling of fundamental issues by offering confidentiality, integrity 1705 and partial source authentication. 
A mandatory-to-implement media 1706 security solution is created by combining this secured RTP profile and 1707 DTLS-SRTP keying [RFC5764] as defined by Section 5.5 of 1708 [I-D.ietf-rtcweb-security-arch].

1710 RTCP packets convey a Canonical Name (CNAME) identifier that is used 1711 to associate RTP packet streams that need to be synchronised across 1712 related RTP sessions.  Inappropriate choice of CNAME values can be a 1713 privacy concern, since long-term persistent CNAME identifiers can be 1714 used to track users across multiple WebRTC calls.  Section 4.9 of 1715 this memo provides guidelines for generation of untraceable CNAME 1716 values that alleviate this risk.

1718 The guidelines in [RFC6562] apply when using variable bit rate (VBR) 1719 audio codecs such as Opus (see Section 4.3 for discussion of mandated 1720 audio codecs).  The guidelines in [RFC6562] also apply, but are of 1721 lesser importance, when using the client-to-mixer audio level header 1722 extensions (Section 5.2.2) or the mixer-to-client audio level header 1723 extensions (Section 5.2.3).  The use of encryption of the header 1724 extensions is RECOMMENDED, unless there are known reasons, like RTP 1725 middleboxes or third-party monitoring that will greatly benefit from 1726 the information, and this has been expressed using the API or signalling. 1727 If further evidence is produced to show that information leakage 1728 from audio level indications is significant, then the use of encryption 1729 needs to be mandated at that time.

1731 14.  IANA Considerations

1733 This memo makes no request of IANA.

1735 Note to RFC Editor: this section is to be removed on publication as 1736 an RFC.

1738 15.  Acknowledgements

1740 The authors would like to thank Bernard Aboba, Harald Alvestrand, 1741 Cary Bran, Ben Campbell, Charles Eckel, Alex Eleftheriadis, Christian 1742 Groves, Cullen Jennings, Olle Johansson, Suhas Nandakumar, Dan 1743 Romascanu, Jim Spring, Martin Thomson, and the other members of the 1744 IETF RTCWEB working group for their valuable feedback.

1746 16.  References

1748 16.1.  Normative References

1750 [I-D.ietf-avtcore-multi-media-rtp-session] 1751 Westerlund, M., Perkins, C., and J. Lennox, "Sending 1752 Multiple Types of Media in a Single RTP Session", draft- 1753 ietf-avtcore-multi-media-rtp-session-05 (work in 1754 progress), February 2014.

1756 [I-D.ietf-avtcore-rtp-circuit-breakers] 1757 Perkins, C. and V. Singh, "Multimedia Congestion Control: 1758 Circuit Breakers for Unicast RTP Sessions", draft-ietf- 1759 avtcore-rtp-circuit-breakers-05 (work in progress), 1760 February 2014.

1762 [I-D.ietf-avtcore-rtp-multi-stream-optimisation] 1763 Lennox, J., Westerlund, M., Wu, W., and C. Perkins, 1764 "Sending Multiple Media Streams in a Single RTP Session: 1765 Grouping RTCP Reception Statistics and Other Feedback", 1766 draft-ietf-avtcore-rtp-multi-stream-optimisation-02 (work 1767 in progress), February 2014.

1769 [I-D.ietf-avtcore-rtp-multi-stream] 1770 Lennox, J., Westerlund, M., Wu, W., and C. Perkins, 1771 "Sending Multiple Media Streams in a Single RTP Session", 1772 draft-ietf-avtcore-rtp-multi-stream-03 (work in progress), 1773 February 2014.

1775 [I-D.ietf-rtcweb-security-arch] 1776 Rescorla, E., "WebRTC Security Architecture", draft-ietf- 1777 rtcweb-security-arch-09 (work in progress), February 2014.

1779 [I-D.ietf-rtcweb-security] 1780 Rescorla, E., "Security Considerations for WebRTC", draft- 1781 ietf-rtcweb-security-06 (work in progress), January 2014.
1783 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1784 Requirement Levels", BCP 14, RFC 2119, March 1997. 1786 [RFC2736] Handley, M. and C. Perkins, "Guidelines for Writers of RTP 1787 Payload Format Specifications", BCP 36, RFC 2736, December 1788 1999. 1790 [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. 1791 Jacobson, "RTP: A Transport Protocol for Real-Time 1792 Applications", STD 64, RFC 3550, July 2003. 1794 [RFC3551] Schulzrinne, H. and S. Casner, "RTP Profile for Audio and 1795 Video Conferences with Minimal Control", STD 65, RFC 3551, 1796 July 2003. 1798 [RFC3556] Casner, S., "Session Description Protocol (SDP) Bandwidth 1799 Modifiers for RTP Control Protocol (RTCP) Bandwidth", RFC 1800 3556, July 2003. 1802 [RFC3711] Baugher, M., McGrew, D., Naslund, M., Carrara, E., and K. 1803 Norrman, "The Secure Real-time Transport Protocol (SRTP)", 1804 RFC 3711, March 2004. 1806 [RFC4566] Handley, M., Jacobson, V., and C. Perkins, "SDP: Session 1807 Description Protocol", RFC 4566, July 2006. 1809 [RFC4585] Ott, J., Wenger, S., Sato, N., Burmeister, C., and J. Rey, 1810 "Extended RTP Profile for Real-time Transport Control 1811 Protocol (RTCP)-Based Feedback (RTP/AVPF)", RFC 4585, July 1812 2006. 1814 [RFC4588] Rey, J., Leon, D., Miyazaki, A., Varsa, V., and R. 1815 Hakenberg, "RTP Retransmission Payload Format", RFC 4588, 1816 July 2006. 1818 [RFC4961] Wing, D., "Symmetric RTP / RTP Control Protocol (RTCP)", 1819 BCP 131, RFC 4961, July 2007. 1821 [RFC5104] Wenger, S., Chandra, U., Westerlund, M., and B. Burman, 1822 "Codec Control Messages in the RTP Audio-Visual Profile 1823 with Feedback (AVPF)", RFC 5104, February 2008. 1825 [RFC5124] Ott, J. and E. Carrara, "Extended Secure RTP Profile for 1826 Real-time Transport Control Protocol (RTCP)-Based Feedback 1827 (RTP/SAVPF)", RFC 5124, February 2008. 1829 [RFC5285] Singer, D. and H. Desineni, "A General Mechanism for RTP 1830 Header Extensions", RFC 5285, July 2008. 1832 [RFC5506] Johansson, I. and M. Westerlund, "Support for Reduced-Size 1833 Real-Time Transport Control Protocol (RTCP): Opportunities 1834 and Consequences", RFC 5506, April 2009. 1836 [RFC5761] Perkins, C. and M. Westerlund, "Multiplexing RTP Data and 1837 Control Packets on a Single Port", RFC 5761, April 2010. 1839 [RFC5764] McGrew, D. and E. Rescorla, "Datagram Transport Layer 1840 Security (DTLS) Extension to Establish Keys for the Secure 1841 Real-time Transport Protocol (SRTP)", RFC 5764, May 2010. 1843 [RFC6051] Perkins, C. and T. Schierl, "Rapid Synchronisation of RTP 1844 Flows", RFC 6051, November 2010. 1846 [RFC6464] Lennox, J., Ivov, E., and E. Marocco, "A Real-time 1847 Transport Protocol (RTP) Header Extension for Client-to- 1848 Mixer Audio Level Indication", RFC 6464, December 2011. 1850 [RFC6465] Ivov, E., Marocco, E., and J. Lennox, "A Real-time 1851 Transport Protocol (RTP) Header Extension for Mixer-to- 1852 Client Audio Level Indication", RFC 6465, December 2011. 1854 [RFC6562] Perkins, C. and JM. Valin, "Guidelines for the Use of 1855 Variable Bit Rate Audio with Secure RTP", RFC 6562, March 1856 2012. 1858 [RFC6904] Lennox, J., "Encryption of Header Extensions in the Secure 1859 Real-time Transport Protocol (SRTP)", RFC 6904, April 1860 2013. 1862 [RFC7007] Terriberry, T., "Update to Remove DVI4 from the 1863 Recommended Codecs for the RTP Profile for Audio and Video 1864 Conferences with Minimal Control (RTP/AVP)", RFC 7007, 1865 August 2013. 1867 [RFC7022] Begen, A., Perkins, C., Wing, D., and E. 
Rescorla, 1868 "Guidelines for Choosing RTP Control Protocol (RTCP) 1869 Canonical Names (CNAMEs)", RFC 7022, September 2013. 1871 [RFC7160] Petit-Huguenin, M. and G. Zorn, "Support for Multiple 1872 Clock Rates in an RTP Session", RFC 7160, April 2014. 1874 [RFC7164] Gross, K. and R. Brandenburg, "RTP and Leap Seconds", RFC 1875 7164, March 2014. 1877 16.2. Informative References 1879 [I-D.ietf-avtcore-multiplex-guidelines] 1880 Westerlund, M., Perkins, C., and H. Alvestrand, 1881 "Guidelines for using the Multiplexing Features of RTP to 1882 Support Multiple Media Streams", draft-ietf-avtcore- 1883 multiplex-guidelines-02 (work in progress), January 2014. 1885 [I-D.ietf-avtcore-rtp-topologies-update] 1886 Westerlund, M. and S. Wenger, "RTP Topologies", draft- 1887 ietf-avtcore-rtp-topologies-update-01 (work in progress), 1888 October 2013. 1890 [I-D.ietf-avtext-rtp-grouping-taxonomy] 1891 Lennox, J., Gross, K., Nandakumar, S., and G. Salgueiro, 1892 "A Taxonomy of Grouping Semantics and Mechanisms for Real- 1893 Time Transport Protocol (RTP) Sources", draft-ietf-avtext- 1894 rtp-grouping-taxonomy-01 (work in progress), February 1895 2014. 1897 [I-D.ietf-mmusic-msid] 1898 Alvestrand, H., "WebRTC MediaStream Identification in the 1899 Session Description Protocol", draft-ietf-mmusic-msid-05 1900 (work in progress), March 2014. 1902 [I-D.ietf-mmusic-sdp-bundle-negotiation] 1903 Holmberg, C., Alvestrand, H., and C. Jennings, 1904 "Negotiating Media Multiplexing Using the Session 1905 Description Protocol (SDP)", draft-ietf-mmusic-sdp-bundle- 1906 negotiation-07 (work in progress), April 2014. 1908 [I-D.ietf-payload-rtp-howto] 1909 Westerlund, M., "How to Write an RTP Payload Format", 1910 draft-ietf-payload-rtp-howto-13 (work in progress), 1911 January 2014. 1913 [I-D.ietf-rmcat-cc-requirements] 1914 Jesup, R., "Congestion Control Requirements For RMCAT", 1915 draft-ietf-rmcat-cc-requirements-04 (work in progress), 1916 April 2014. 1918 [I-D.ietf-rtcweb-audio] 1919 Valin, J. and C. Bran, "WebRTC Audio Codec and Processing 1920 Requirements", draft-ietf-rtcweb-audio-05 (work in 1921 progress), February 2014. 1923 [I-D.ietf-rtcweb-overview] 1924 Alvestrand, H., "Overview: Real Time Protocols for Brower- 1925 based Applications", draft-ietf-rtcweb-overview-09 (work 1926 in progress), February 2014. 1928 [I-D.ietf-rtcweb-use-cases-and-requirements] 1929 Holmberg, C., Hakansson, S., and G. Eriksson, "Web Real- 1930 Time Communication Use-cases and Requirements", draft- 1931 ietf-rtcweb-use-cases-and-requirements-14 (work in 1932 progress), February 2014. 1934 [I-D.ietf-tsvwg-rtcweb-qos] 1935 Dhesikan, S., Druta, D., Jones, P., and J. Polk, "DSCP and 1936 other packet markings for RTCWeb QoS", draft-ietf-tsvwg- 1937 rtcweb-qos-00 (work in progress), April 2014. 1939 [RFC3611] Friedman, T., Caceres, R., and A. Clark, "RTP Control 1940 Protocol Extended Reports (RTCP XR)", RFC 3611, November 1941 2003. 1943 [RFC4383] Baugher, M. and E. Carrara, "The Use of Timed Efficient 1944 Stream Loss-Tolerant Authentication (TESLA) in the Secure 1945 Real-time Transport Protocol (SRTP)", RFC 4383, February 1946 2006. 1948 [RFC5245] Rosenberg, J., "Interactive Connectivity Establishment 1949 (ICE): A Protocol for Network Address Translator (NAT) 1950 Traversal for Offer/Answer Protocols", RFC 5245, April 1951 2010. 1953 [RFC5576] Lennox, J., Ott, J., and T. Schierl, "Source-Specific 1954 Media Attributes in the Session Description Protocol 1955 (SDP)", RFC 5576, June 2009. 1957 [RFC5968] Ott, J. and C. 
Perkins, "Guidelines for Extending the RTP 1958 Control Protocol (RTCP)", RFC 5968, September 2010. 1960 [RFC6263] Marjou, X. and A. Sollaud, "Application Mechanism for 1961 Keeping Alive the NAT Mappings Associated with RTP / RTP 1962 Control Protocol (RTCP) Flows", RFC 6263, June 2011. 1964 [RFC6792] Wu, Q., Hunt, G., and P. Arden, "Guidelines for Use of the 1965 RTP Monitoring Framework", RFC 6792, November 2012. 1967 [W3C.WD-mediacapture-streams-20130903] 1968 Burnett, D., Bergkvist, A., Jennings, C., and A. 1969 Narayanan, "Media Capture and Streams", World Wide Web 1970 Consortium WD WD-mediacapture-streams-20130903, September 1971 2013, . 1974 [W3C.WD-webrtc-20130910] 1975 Bergkvist, A., Burnett, D., Jennings, C., and A. 1976 Narayanan, "WebRTC 1.0: Real-time Communication Between 1977 Browsers", World Wide Web Consortium WD WD- 1978 webrtc-20130910, September 2013, 1979 . 1981 Authors' Addresses 1983 Colin Perkins 1984 University of Glasgow 1985 School of Computing Science 1986 Glasgow G12 8QQ 1987 United Kingdom 1989 Email: csp@csperkins.org 1990 URI: http://csperkins.org/ 1991 Magnus Westerlund 1992 Ericsson 1993 Farogatan 6 1994 SE-164 80 Kista 1995 Sweden 1997 Phone: +46 10 714 82 87 1998 Email: magnus.westerlund@ericsson.com 2000 Joerg Ott 2001 Aalto University 2002 School of Electrical Engineering 2003 Espoo 02150 2004 Finland 2006 Email: jorg.ott@aalto.fi