MOPS                                                          J. Holland
Internet-Draft                                 Akamai Technologies, Inc.
Intended status: Informational                                  A. Begen
Expires: 2 September 2022                                Networked Media
                                                              S. Dawkins
                                                     Tencent America LLC
                                                            1 March 2022


           Operational Considerations for Streaming Media
                 draft-ietf-mops-streaming-opcons-09

Abstract

This document provides an overview of operational networking issues that pertain to quality of experience when streaming video and other high-bitrate media over the Internet.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 2 September 2022.

Copyright Notice

Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.

Table of Contents

   1.  Introduction
     1.1.  Notes for Contributors and Reviewers
       1.1.1.  Venues for Contribution and Discussion
   2.  Our Focus on Streaming Video
   3.  Bandwidth Provisioning
     3.1.  Scaling Requirements for Media Delivery
       3.1.1.  Video Bitrates
       3.1.2.  Virtual Reality Bitrates
     3.2.  Path Bandwidth Constraints
       3.2.1.  Know Your Network Traffic
     3.3.  Path Requirements
     3.4.  Caching Systems
     3.5.  Predictable Usage Profiles
     3.6.  Unpredictable Usage Profiles
     3.7.  Extremely Unpredictable Usage Profiles
   4.  Latency Considerations
     4.1.  Ultra Low-Latency
     4.2.  Low-Latency Live
     4.3.  Non-Low-Latency Live
     4.4.  On-Demand
   5.  Adaptive Encoding, Adaptive Delivery, and Measurement Collection
     5.1.  Overview
     5.2.  Adaptive Encoding
     5.3.  Adaptive Segmented Delivery
     5.4.  Advertising
     5.5.  Bitrate Detection Challenges
       5.5.1.  Idle Time between Segments
       5.5.2.  Head-of-Line Blocking
       5.5.3.  Wide and Rapid Variation in Path Capacity
     5.6.  Measurement Collection
       5.6.1.  CTA-2066: Streaming Quality of Experience Events, Properties and Metrics
       5.6.2.  CTA-5004: Common Media Client Data (CMCD)
     5.7.  Unreliable Transport
   6.  Evolution of Transport Protocols and Transport Protocol Behaviors
     6.1.  UDP and Its Behavior
     6.2.  TCP and Its Behavior
     6.3.  The QUIC Protocol and Its Behavior
   7.  Streaming Encrypted Media
     7.1.  General Considerations for Media Encryption
     7.2.  Considerations for "Hop-by-Hop" Media Encryption
     7.3.  Considerations for "End-to-End" Media Encryption
   8.  Further Reading and References
     8.1.  Industry Terminology
     8.2.  Surveys and Tutorials
       8.2.1.  Encoding
       8.2.2.  Packaging
       8.2.3.  Content Delivery
       8.2.4.  ABR Algorithms
       8.2.5.  Low-Latency Live Adaptive Streaming
       8.2.6.  Server/Client/Network Collaboration
       8.2.7.  QoE Metrics
       8.2.8.  Point Clouds and Immersive Media
     8.3.  Open-Source Tools
     8.4.  Technical Events
     8.5.  List of Organizations Working on Streaming Media
     8.6.  Topics to Keep an Eye on
       8.6.1.  5G and Media
       8.6.2.  Ad Insertion
       8.6.3.  Contribution and Ingest
       8.6.4.  Synchronized Encoding and Packaging
       8.6.5.  WebRTC-Based Streaming
   9.  IANA Considerations
   10. Security Considerations
   11. Acknowledgments
   12. Informative References
   Authors' Addresses

1.  Introduction

This document examines networking issues as they relate to quality of experience in Internet media delivery. It focuses especially on capturing characteristics of streaming video delivery that have surprised network designers or transport experts without specific video expertise, because streaming media highlights key differences between common assumptions in existing networking practices and the video delivery issues actually observed when streaming media over those existing networks.

This document specifically focuses on streaming applications and defines streaming as follows:

*  Streaming is the transmission of continuous media from a server to a client and its simultaneous consumption by the client.

*  Here, "continuous media" refers to media and associated streams such as video, audio, metadata, etc. In this definition, the critical term is "simultaneous": it is not considered streaming if one downloads a video file and plays it after the download is completed, which would instead be called download-and-play.

This has two implications (illustrated in the sketch after this list).

*  First, the server's transmission rate must (loosely or tightly) match the client's consumption rate in order to provide uninterrupted playback. That is, the client must not run out of data (buffer underrun) or accept more data than it can buffer before playback (buffer overrun), as any excess media that cannot be buffered is simply discarded.

*  Second, the client's consumption rate is limited not only by bandwidth availability, but also by media availability. The client cannot fetch media that is not yet available from a server.
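As a minimal illustration of these two constraints, the following Python sketch (with made-up rates and buffer sizes, not taken from any real player) simulates a client buffer that fills at the server's transmission rate and drains at the client's consumption rate:

   # Minimal playback-buffer sketch: illustrative numbers only.
   # An underrun occurs when the buffer empties during playback;
   # an overrun occurs when data arrives faster than it can be held.

   BUFFER_CAPACITY = 16.0  # seconds of media the client can hold (assumed)
   CONSUME_RATE = 1.0      # playback consumes 1 s of media per second

   def simulate(fill_rate: float, duration: int) -> None:
       """fill_rate: seconds of media received per wall-clock second."""
       buffered = 0.0
       playing = False
       for t in range(duration):
           buffered += fill_rate
           if buffered > BUFFER_CAPACITY:
               # Excess media that cannot be buffered is simply discarded.
               print(f"t={t}s: buffer overrun, discarding "
                     f"{buffered - BUFFER_CAPACITY:.1f}s of media")
               buffered = BUFFER_CAPACITY
           if not playing and buffered >= 4.0:  # assumed startup threshold
               playing = True
           if playing:
               buffered -= CONSUME_RATE
               if buffered < 0:
                   print(f"t={t}s: buffer underrun, playback stalls")
                   buffered, playing = 0.0, False

   simulate(fill_rate=0.8, duration=30)  # server too slow: underrun
   simulate(fill_rate=1.5, duration=30)  # server too fast: overrun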
This document contains:

*  A short description of streaming video characteristics in Section 2, to set the stage for the rest of the document,

*  General guidance on bandwidth provisioning (Section 3) and latency considerations (Section 4) for streaming video delivery,

*  A description of adaptive encoding and adaptive delivery techniques in common use for streaming video, along with a description of the challenges media senders face in detecting the bitrate available between the media sender and media receiver, and the collection of measurements by a third party for use in analytics (Section 5),

*  A description of existing transport protocols used for video streaming and the issues encountered when using those protocols, along with a description of the QUIC transport protocol [RFC9000] that we expect to be used for streaming media (Section 6),

*  A description of implications when streaming encrypted media (Section 7), and

*  A number of useful pointers for further reading on this rapidly changing subject (Section 8).

Making specific recommendations on operational practices aimed at mitigating the issues described in this document is out of scope, though some existing mitigations are mentioned in passing. The intent is to provide a point of reference for future solution proposals to use in describing how new technologies address or avoid existing observed problems.

1.1.  Notes for Contributors and Reviewers

Note to RFC Editor: Please remove this section and its subsections before publication.

This section provides references to make it easier to review the development and discussion of this draft so far.

1.1.1.  Venues for Contribution and Discussion

This document is in the GitHub repository at:

   https://github.com/ietf-wg-mops/draft-ietf-mops-streaming-opcons

Readers are welcome to open issues and send pull requests for this document.

Substantial discussion of this document should take place on the MOPS working group mailing list (mops@ietf.org).

*  Join: https://www.ietf.org/mailman/listinfo/mops

*  Search: https://mailarchive.ietf.org/arch/browse/mops/

2.  Our Focus on Streaming Video

As the Internet has grown, an increasingly large share of the traffic delivered to end users has become video. Estimates put the total share of Internet video traffic at 75% in 2019, expected to grow to 82% by 2022. The same estimate projects that the gross volume of video traffic will more than double during this time, based on a compound annual growth rate continuing at 34% (from Appendix D of [CVNI]).

A substantial part of this growth is due to increased use of streaming video, although the amount of video traffic in real-time communications (for example, online videoconferencing) has also grown significantly. While both streaming video and videoconferencing have real-time delivery and latency requirements, these requirements vary from one application to another. For example, videoconferencing demands an end-to-end (one-way) latency of a few hundred milliseconds, whereas live streaming can tolerate latencies of several seconds.

In many contexts, video traffic can be handled transparently as generic application-level traffic.
However, as the volume of video traffic continues to grow, it is becoming increasingly important to consider the effects of network design decisions on application-level performance, with considerations for the impact on video delivery.

Much of the focus of this document is on reliable media delivery using HTTP over TCP, which is widely used because:

*  support for HTTP is available in a wide range of operating systems,

*  HTTP is also used in a wide variety of other applications,

*  HTTP has been demonstrated to provide acceptable performance over the open Internet,

*  HTTP includes state-of-the-art standardized security mechanisms, and

*  HTTP can make use of already-deployed caching infrastructure.

Unreliable media delivery using RTP and other UDP-based protocols is also discussed in Section 4.1, Section 5.7, Section 6.1, and Section 7.2, but it is difficult to give general guidance for these applications. For instance, when loss occurs, the most appropriate response may depend on the type of codec being used.

3.  Bandwidth Provisioning

3.1.  Scaling Requirements for Media Delivery

3.1.1.  Video Bitrates

Video bitrate selection depends on many variables, including the resolution (height and width), frame rate, color depth, codec, encoding parameters, scene complexity, and amount of motion. Generally speaking, as the resolution, frame rate, color depth, scene complexity, and amount of motion increase, the encoding bitrate increases. As newer codecs with better compression tools are used, the encoding bitrate decreases. Similarly, multi-pass encoding generally produces better quality output than single-pass encoding at the same bitrate, or delivers the same quality at a lower bitrate.

Table 1 lists a few common resolutions used for video content, with typical ranges of bitrates for the two most popular video codecs [Encodings].

   +============+================+============+============+
   | Name       | Width x Height | H.264      | H.265      |
   +============+================+============+============+
   | DVD        | 720 x 480      | 1.0 Mbps   | 0.5 Mbps   |
   +------------+----------------+------------+------------+
   | 720p (1K)  | 1280 x 720     | 3-4.5 Mbps | 2-4 Mbps   |
   +------------+----------------+------------+------------+
   | 1080p (2K) | 1920 x 1080    | 6-8 Mbps   | 4.5-7 Mbps |
   +------------+----------------+------------+------------+
   | 2160p (4K) | 3840 x 2160    | N/A        | 10-20 Mbps |
   +------------+----------------+------------+------------+

                          Table 1
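To make the provisioning arithmetic concrete, the following Python sketch encodes the upper ends of Table 1's H.264 bitrate ranges (illustrative values only; real ladders vary with codec, content, and encoder settings) and computes the worst-case aggregate downstream demand for a number of concurrent viewers, anticipating the per-node example in Section 3.3:

   # Upper ends of the Table 1 H.264 bitrate ranges, in Mbps
   # (illustrative only).
   H264_LADDER_MBPS = {"480p": 1.0, "720p": 4.5, "1080p": 8.0}

   def aggregate_demand_gbps(viewers: int, rendition: str) -> float:
       """Worst-case downstream demand if every viewer streams the
       same rendition simultaneously."""
       return viewers * H264_LADDER_MBPS[rendition] / 1000.0

   # 10,000 concurrent viewers at 1080p (high end of the range):
   print(aggregate_demand_gbps(10_000, "1080p"))
   # -> 80.0 Gbps, matching the example in Section 3.3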
3.1.2.  Virtual Reality Bitrates

The bitrates given in Section 3.1.1 describe video streams that provide the user with a single, fixed point of view, so the user has no "degrees of freedom" and sees all of the video image that is available.

Even basic virtual reality (360-degree) videos that allow users to look around freely (referred to as "three degrees of freedom", or 3DoF) require substantially larger bitrates when they are captured and encoded, as such videos require multiple fields of view of the scene. Yet, thanks to smart delivery methods such as viewport-based or tile-based streaming, it is not necessary to send the whole scene to the user. Instead, the user needs only the portion corresponding to their viewpoint at any given time ([Survey360o]).

In more immersive applications, where limited user movement ("three degrees of freedom plus", or 3DoF+) or full user movement ("six degrees of freedom", or 6DoF) is allowed, the required bitrate grows even further. In this case, the immersive content is typically referred to as volumetric media. One way to represent volumetric media is to use point clouds, where streaming a single object may easily require a bitrate of 30 Mbps or higher. Refer to [MPEGI] and [PCC] for more details.

3.2.  Path Bandwidth Constraints

Even when the bandwidth requirements for video streams along a path are well understood, additional analysis is required to understand the constraints on bandwidth at various points in the network. This analysis is necessary because media servers may react to bandwidth constraints using two independent feedback loops:

*  Media servers often respond to application-level feedback from the media player that indicates a bottleneck link somewhere along the path, by adjusting the amount of media that the media server will send to the media player in a given timeframe. This is described in greater detail in Section 5.

*  Media servers also typically implement transport protocols with capacity-seeking congestion controllers that probe for bandwidth and adjust the sending rate based on transport mechanisms. This is described in greater detail in Section 6.

The result is that these two (potentially competing) "helpful" mechanisms each respond to the same bottleneck with no coordination between themselves, so that each is unaware of actions taken by the other. This can result in a quality of experience for users that is significantly lower than what could have been achieved.

In one example, if a media server overestimates the available bandwidth to the media player,

*  the transport protocol detects loss due to congestion and reduces its sending window size per round trip,

*  the media server adapts to application-level feedback from the media player and reduces its own sending rate,

*  the transport protocol sends media at the new, lower rate and confirms that this new, lower rate is "safe", because no transport-level loss is occurring, but

*  because the media server continues to send at the new, lower rate, the transport protocol's maximum sending rate is now limited by the amount of information the media server queues for transmission, so

*  the transport protocol can't probe for available path bandwidth by sending at a higher rate.

In order to avoid these types of situations, which can potentially affect all the users whose streaming media traverses a bottleneck link, there are several possible mitigations that streaming operators can use, but the first step toward mitigating a problem is knowing when that problem occurs.

3.2.1.  Know Your Network Traffic

There are many reasons why path characteristics might change suddenly, for example:

*  "cross traffic" that traverses part of the path, especially if this traffic is "inelastic" and does not, itself, respond to indications of path congestion, or

*  routing changes, which can happen in normal operation, especially if the new path now includes path segments that are more heavily loaded, offer lower total bandwidth, or simply cover more distance.

Recognizing that a path carrying streaming media is "not behaving the way it normally does" is fundamental. Analytics that aid in that recognition can be more or less sophisticated, and can be as simple as noticing that the apparent round-trip times for media traffic carried over TCP transport on some paths are suddenly and significantly longer than usual. Passive monitors can detect changes in the elapsed time between the acknowledgements for specific TCP segments from a TCP receiver, since TCP octet sequence numbers and acknowledgements for those sequence numbers are "carried in the clear", even if the TCP payload itself is encrypted. See Section 6.2 for more information.
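As a minimal sketch of such a simple analytic (the window length and deviation threshold here are arbitrary illustrations, not recommendations), the following Python code flags round-trip-time samples that sit far outside a rolling baseline:

   from collections import deque
   from statistics import median

   class RttBaseline:
       """Flag RTT samples far above a rolling median baseline.

       Window length and threshold are illustrative, not
       recommendations.
       """
       def __init__(self, window: int = 300, threshold: float = 2.0):
           self.samples = deque(maxlen=window)  # recent RTTs, in ms
           self.threshold = threshold           # multiple of baseline

       def observe(self, rtt_ms: float) -> bool:
           """Return True if this sample looks anomalous."""
           anomalous = (
               len(self.samples) == self.samples.maxlen
               and rtt_ms > self.threshold * median(self.samples)
           )
           self.samples.append(rtt_ms)
           return anomalous

   # Feeding in passively measured RTTs (hypothetical source):
   baseline = RttBaseline()
   for rtt in [32.0] * 300 + [35.0, 90.0]:
       if baseline.observe(rtt):
           print(f"path RTT {rtt} ms is well above the recent baseline")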
As transport protocols evolve to encrypt their transport header fields, one side effect of increasing encryption is that the kind of passive monitoring, or even "performance enhancement" ([RFC3135]), that was possible with the older transport protocols (UDP, described in Section 6.1, and TCP, described in Section 6.2) is no longer possible with newer transport protocols such as QUIC (described in Section 6.3). The IETF has specified a "latency spin bit" mechanism in Section 17.4 of [RFC9000] to allow passive latency monitoring from observation points on the network path throughout the duration of a connection, but currently chartered work in the IETF is focusing on endpoint monitoring and reporting, rather than on passive monitoring.

One example is the "qlog" mechanism [I-D.ietf-quic-qlog-main-schema], a protocol-agnostic mechanism used to provide better visibility for encrypted protocols such as QUIC ([I-D.ietf-quic-qlog-quic-events]) and for HTTP/3 ([I-D.ietf-quic-qlog-h3-events]).

3.3.  Path Requirements

The bitrate requirements in Section 3.1 are per end user actively consuming a media feed, so in the worst case, the bitrate demands can be multiplied by the number of simultaneous users to find the bandwidth requirements for a router on the delivery path with that number of users downstream. For example, at a node with 10,000 downstream users simultaneously consuming video streams, approximately 80 Gbps might be necessary in order for all of them to get typical content at 1080p resolution.

However, when there is some overlap in the feeds being consumed by end users, it is sometimes possible to reduce the bandwidth provisioning requirements for the network by performing some kind of replication within the network. This can be achieved via object caching with delivery of replicated objects over individual connections, and/or by packet-level replication using multicast.

To the extent that replication of popular content can be performed, bandwidth requirements at peering or ingest points can be reduced to as low as a per-feed requirement instead of a per-user requirement.

3.4.  Caching Systems

When demand for content is relatively predictable, and especially when that content is relatively static, caching content close to requesters and pre-loading caches to respond quickly to initial requests is often useful (for example, HTTP/1.1 caching is described in [I-D.ietf-httpbis-cache]).
This is subject to the usual considerations for caching: for example, how much data must be cached to make a significant difference to the requester, and how the benefits of caching and pre-loading caches balance against the costs of tracking "stale" content in caches and refreshing that content.

It is worth noting that not all high-demand content is "live" content. One relevant example is when popular streaming content can be staged close to a significant number of requesters, as can happen when a new episode of a popular show is released. This content may be largely stable, and therefore low-cost to maintain in multiple places throughout the Internet. This can reduce demands for high end-to-end bandwidth without having to use mechanisms like multicast.

Caching and pre-loading can also reduce exposure to peering point congestion, since less traffic crosses the peering point exchanges if the caches are placed in peer networks, especially when the content can be pre-loaded during off-peak hours, and especially if the transfer can make use of "Lower-Effort Per-Hop Behavior (LE PHB) for Differentiated Services" [RFC8622], "Low Extra Delay Background Transport (LEDBAT)" [RFC6817], or similar mechanisms.

All of this depends, of course, on the ability of a content provider to predict usage and provision bandwidth, caching, and other mechanisms to meet the needs of users. In some cases (Section 3.5), this is relatively routine, but in other cases, it is more difficult (Section 3.6, Section 3.7).

And as with other parts of the ecosystem, new technology brings new challenges. For example, with the emergence of ultra-low-latency streaming, responses have to start streaming to the end user while still being transmitted to the cache, and while the cache does not yet know the size of the object. Some of the popular caching systems were designed around cache footprint and had deeply ingrained assumptions about knowing the size of objects being stored, so the change in design requirements in long-established systems caused some errors in production. Incidents occurred in which a transmission error in the connection from the upstream source to the cache could result in the cache holding a truncated segment and transmitting it to the end user's device. In this case, players rendering the stream often had the video freeze until the player was reset. In some cases, the truncated object was even cached that way and served later to other players as well, causing continued stalls at the same spot in the video for all players playing the segment delivered from that cache node.

3.5.  Predictable Usage Profiles

Historical data shows that users consume more videos, and at higher bitrates, than they did in the past on their connected devices. Improvements in codecs that reduce encoding bitrates through better compression have not been able to offset the increase in demand for higher-quality video (higher resolution, higher frame rate, better color gamut, better dynamic range, etc.). In particular, mobile data usage has shown a large jump over the years due to increased consumption of entertainment as well as conversational video.
3.6.  Unpredictable Usage Profiles

Although TCP/IP has been used with a number of widely used applications that have symmetric bandwidth requirements (similar bandwidth requirements in each direction between endpoints), many widely used Internet applications operate in client-server roles, with asymmetric bandwidth requirements. A common example might be an HTTP GET operation, where a client sends a relatively small HTTP GET request for a resource to an HTTP server and often receives a significantly larger response carrying the requested resource. When HTTP is commonly used to stream movie-length video, the ratio between response size and request size can become arbitrarily large.

For this reason, operators may pay more attention to downstream bandwidth utilization when planning and managing capacity. In addition, operators have been able to deploy access networks for end users using underlying technologies that are inherently asymmetric, favoring downstream bandwidth (e.g., ADSL, cellular technologies, most IEEE 802.11 variants), assuming that users will need less upstream bandwidth than downstream bandwidth. This strategy usually works, except when it fails because application bandwidth usage patterns have changed in ways that were not predicted.

One example of this type of change was when peer-to-peer file sharing applications gained popularity in the early 2000s. To take one well-documented case ([RFC5594]), the BitTorrent application created "swarms" of hosts, uploading and downloading files to each other, rather than communicating with a server. BitTorrent favored peers who uploaded as much as they downloaded, so new BitTorrent users had an incentive to significantly increase their upstream bandwidth utilization.

The combination of the large volume of "torrents" and the peer-to-peer characteristic of swarm transfers meant that end-user hosts were suddenly uploading higher volumes of traffic to more destinations than was the case before BitTorrent. This caused at least one large Internet service provider (ISP) to attempt to "throttle" these transfers in order to mitigate the load that these hosts placed on its network. These efforts were met by increased use of encryption in BitTorrent, similar to an arms race, and set off discussions about "Net Neutrality" and calls for regulatory action.

Especially as end users increase use of video-based social networking applications, it will be helpful for access network providers to watch for increasing numbers of end users uploading significant amounts of content.

3.7.  Extremely Unpredictable Usage Profiles

The causes of unpredictable usage described in Section 3.6 were more or less the result of human choices, but we were reminded during a post-IETF 107 meeting that humans are not always in control, and forces of nature can cause enormous fluctuations in traffic patterns.
In his talk, Sanjay Mishra [Mishra] reported that after the COVID-19 pandemic broke out in early 2020:

*  Comcast's streaming and web video consumption rose by 38%, with their reported peak traffic up 32% overall between March 1 and March 30,

*  AT&T reported a 28% jump in core network traffic (single day in April, as compared to the pre-stay-at-home daily average traffic), with video accounting for nearly half of all mobile network traffic, while social networking and web browsing remained the highest percentage (almost a quarter each) of overall mobility traffic, and

*  Verizon reported similar trends, with video traffic up 36% over an average pre-COVID-19 day.

We note that other operators saw similar spikes during this time period. Craig Labovitz [Labovitz] reported:

*  weekday peak traffic increases of 45%-50% over pre-lockdown levels,

*  a 30% increase in upstream traffic over pre-pandemic levels, and

*  a steady increase in the overall volume of DDoS traffic, with amounts exceeding the pre-pandemic levels by 40%. (He attributed this increase to the significant rise in gaming-related DDoS attacks ([LabovitzDDoS]), as gaming usage also increased.)

Subsequently, the Internet Architecture Board (IAB) held a COVID-19 Network Impacts Workshop [IABcovid] in November 2020. Given a larger number of reports and more time to reflect, the following observations from the draft workshop report are worth considering.

*  Participants describing different types of networks reported different kinds of impacts, but all types of networks saw impacts.

*  Mobile networks saw traffic reductions and residential networks saw significant increases.

*  Reported traffic increases from ISPs and Internet exchange points (IXPs) over just a few weeks were as big as the traffic growth over the course of a typical year, representing a 15-20% surge in growth to land at a new normal that was much higher than anticipated.

*  At DE-CIX Frankfurt, the world's largest Internet exchange point in terms of data throughput, the year 2020 saw the largest increase in peak traffic within a single year since the IXP was founded in 1995.

*  The usage pattern changed significantly as work-from-home and videoconferencing usage peaked during normal work hours, which would have typically been off-peak hours with adults at work and children at school. One might expect that the peak would have had more impact on networks if it had happened during typical evening peak hours for video streaming applications.

*  The increase in daytime bandwidth consumption reflected both significant increases in "essential" applications such as videoconferencing and virtual private networks (VPNs), and entertainment applications as people watched videos or played games.

*  At the IXP level, it was observed that port utilization increased. This phenomenon is mostly explained by a higher traffic demand from residential users.

4.  Latency Considerations

Streaming media latency refers to the "glass-to-glass" time duration, which is the delay between the real-life occurrence of an event and the streamed media being appropriately displayed on an end user's device.
Note that this is different from network latency (defined as the time for a packet to cross a network from one end to the other) because it includes video encoding/decoding and buffering time, and, in most cases, ingest to an intermediate service such as a CDN or other video distribution service, rather than a direct connection to an end user.

Streaming media can be usefully categorized according to the application's latency requirements into a few rough categories:

*  ultra low-latency (less than 1 second)

*  low-latency live (less than 10 seconds)

*  non-low-latency live (10 seconds to a few minutes)

*  on-demand (hours or more)

4.1.  Ultra Low-Latency

Ultra low-latency delivery of media is defined here as having a glass-to-glass delay target under one second.

Some media content providers aim to achieve this level of latency for live media events. This introduces new challenges relative to less-restricted levels of latency requirements, because this latency is on the same scale as commonly observed end-to-end network latency variation (for example, due to effects such as bufferbloat ([CoDel]), Wi-Fi error correction, or packet reordering). These effects can make it difficult to achieve this level of latency for the general case, and meeting the target may require accepting relatively frequent user-visible media artifacts as a tradeoff. However, for controlled environments or targeted networks that provide mitigations against such effects, this level of latency is potentially achievable with the right provisioning.

Applications requiring ultra low latency for media delivery are usually tightly constrained in their available choices of media transport technologies and sometimes may need to operate in controlled environments to reliably achieve their latency and quality goals.

Most applications operating over IP networks and requiring latency this low use the Real-time Transport Protocol (RTP) [RFC3550] or WebRTC [RFC8825], which uses RTP for the media transport as well as several other protocols necessary for safe operation in browsers.

It is worth noting that many applications for ultra low-latency delivery do not need to scale to more than a few users at a time, which simplifies many delivery considerations relative to other use cases.

Recommended reading for applications adopting an RTP-based approach also includes [RFC7656]. For increasing the robustness of playback by implementing adaptive playout methods, refer to [RFC4733] and [RFC6843].

Applications with further-specialized latency requirements are out of scope for this document.

4.2.  Low-Latency Live

Low-latency live delivery of media is defined here as having a glass-to-glass delay target under 10 seconds.

This level of latency is targeted to provide a user experience similar to traditional broadcast TV delivery. A frequently cited problem with failing to achieve this level of latency for live sporting events is the user experience failure that results when crowds within earshot of one another react audibly to an important play, or when users learn of an event in the match via some other channel, for example social media, before it has happened on the screen showing the sporting event.

Applications requiring low-latency live media delivery are generally feasible at scale with some restrictions.
This typically requires the use of a premium service dedicated to the delivery of live video, and some tradeoffs may be necessary relative to what is feasible in a higher-latency service. The tradeoffs may include higher costs, delivering lower-quality video, reduced flexibility for adaptive bitrates, or reduced flexibility for available resolutions so that fewer devices can receive an encoding tuned for their display. Low-latency live delivery is also more susceptible to user-visible disruptions due to transient network conditions than higher-latency services.

Implementation of a low-latency live video service can be achieved with the use of the low-latency extensions of HLS (called LL-HLS) [I-D.draft-pantos-hls-rfc8216bis] and of DASH (called LL-DASH) [LL-DASH]. These extensions use the Common Media Application Format (CMAF) standard [MPEG-CMAF], which allows the media to be packaged into, and transmitted in, units smaller than segments, called "chunks" in CMAF language. This way, the latency can be decoupled from the duration of the media segments. Without CMAF-like packaging, lower latencies can only be achieved by using very short segment durations. However, shorter segments mean more frequent intra-coded frames, which is detrimental to video encoding quality. CMAF makes it possible to still use longer segments (improving encoding quality) without penalizing latency.

While an LL-HLS client retrieves each chunk with a separate HTTP GET request, an LL-DASH client uses the chunked transfer encoding feature of HTTP [CMAF-CTE], which allows the LL-DASH client to fetch all the chunks belonging to a segment with a single GET request. An HTTP server can transmit the CMAF chunks to the LL-DASH client as they arrive from the encoder/packager. A detailed comparison of LL-HLS and LL-DASH is given in [MMSP20].
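The following Python sketch illustrates the LL-DASH-style delivery just described; the segment URL is hypothetical, and a real player would parse and buffer the CMAF chunks rather than merely counting bytes:

   import requests  # third-party HTTP library, used here for brevity

   # Hypothetical URL of a CMAF segment still being written as the
   # live event progresses.
   SEGMENT_URL = "https://example.com/live/1080p/segment_1042.cmfv"

   def read_chunks(url: str) -> None:
       """Issue one GET and process response data as it arrives,
       instead of waiting for the full segment; this mirrors how an
       LL-DASH client can start decoding CMAF chunks before the
       segment is complete."""
       with requests.get(url, stream=True, timeout=10) as resp:
           resp.raise_for_status()
           # chunk_size=None yields data as it is received.
           for data in resp.iter_content(chunk_size=None):
               # A real player would append `data` to its source
               # buffer here.
               print(f"received {len(data)} bytes")

   read_chunks(SEGMENT_URL)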
4.3.  Non-Low-Latency Live

Non-low-latency live delivery of media is defined here as a live stream that does not have a latency target shorter than 10 seconds.

This level of latency is the historically common case for segmented video delivery using HLS [RFC8216] and DASH [MPEG-DASH]. It is often considered adequate for content like news or pre-recorded content. This level of latency is also sometimes reached as a fallback state when some part of the delivery system or the client-side players lack the features necessary to support low-latency live streaming.

This level of latency can typically be achieved at scale with commodity CDN services for HTTP(S) delivery. In some cases, the increased time window can allow for production of a wider range of encoding options, relative to the requirements of a lower-latency service, without the need for increasing the hardware footprint, which can allow for wider device interoperability.

4.4.  On-Demand

On-demand media streaming refers to playback of pre-recorded media based on a user's action. In some cases, on-demand media is produced as a by-product of a live media production, using the same segments as the live event but freezing the manifest after the live event has finished. In other cases, on-demand media is constructed out of pre-recorded assets, with no streaming necessarily involved during the production of the on-demand content.

On-demand media generally is not subject to latency concerns, but other timing-related considerations can still be as important or even more important to the user experience than the same considerations with live events. These considerations include the startup time, the stability of the media stream's playback quality, and avoidance of stalls and video artifacts during playback under all but the most severe network conditions.

In some applications, optimizations are available to on-demand video that are not always available to live events, such as pre-loading the first segment for a startup time that doesn't have to wait for a network download to begin.

5.  Adaptive Encoding, Adaptive Delivery, and Measurement Collection

5.1.  Overview

A simple model of video playback can be described as a video stream consumer, a buffer, and a transport mechanism that fills the buffer. The consumption rate is fairly static and is represented by the content bitrate. The buffer is also commonly of a fixed size. The fill process needs to be at least fast enough to ensure that the buffer is never empty; however, it can also have significant complexity when things like personalization or ad workflows are introduced.

The challenges in filling the buffer in a timely way fall into two broad categories: (1) content selection and (2) content variation. Content selection comprises all of the steps needed to determine which content variation to offer the client. Content variation is the number of content options that exist at any given selection point. A common example, easily visualized, is Adaptive BitRate (ABR), described in more detail below. The mechanism used to select the bitrate is part of the content selection, and the content variations are all of the different bitrate renditions.

ABR is a sort of application-level response strategy in which the streaming client attempts to detect the available bandwidth of the network path by observing the successful application-layer download speed, and then chooses a bitrate for each of the video, audio, subtitles, and metadata (among the limited number of available options) that fits within that bandwidth, typically adjusting as changes in available bandwidth occur in the network or as changes in capabilities occur during the playback (such as available memory, CPU, display size, etc.).

5.2.  Adaptive Encoding

Media servers can provide media streams at various bitrates because the media has been encoded at various bitrates. This is a so-called "ladder" of bitrates that can be offered to media players as part of the manifest that describes the media being requested by the media player, so that the media player can select among the available bitrate choices.

The media server may also choose to alter which bitrates are made available to players by adding or removing bitrate options from the ladder delivered to the player in subsequent manifests built and sent to the player. In this way, both the player, through its selection of the bitrate to request from the manifest, and the server, through its construction of the bitrates offered in the manifest, are able to affect network utilization.
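The following Python sketch illustrates this server-side lever; the ladder values and the bitrate cap are hypothetical:

   # Hypothetical bitrate ladder, in kbps, for one piece of content.
   FULL_LADDER_KBPS = [400, 1200, 2500, 4500, 8000]

   def build_manifest_ladder(cap_kbps: int | None) -> list[int]:
       """Return the rungs to advertise in the next manifest.

       A server can shrink the advertised ladder (here, with a simple
       cap) so that no player can select a rendition above the cap,
       regardless of the player's own bandwidth estimate.
       """
       if cap_kbps is None:
           return FULL_LADDER_KBPS
       return [rate for rate in FULL_LADDER_KBPS if rate <= cap_kbps]

   print(build_manifest_ladder(None))  # normal operation: all rungs
   print(build_manifest_ladder(3000))  # constrained: [400, 1200, 2500]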
5.3.  Adaptive Segmented Delivery

ABR playback is commonly implemented by streaming clients using HLS [RFC8216] or DASH [MPEG-DASH] to perform reliable segmented delivery of media over HTTP. Different implementations use different strategies [ABRSurvey], often relying on proprietary algorithms (called rate adaptation or bitrate selection algorithms) to perform available bandwidth estimation/prediction and bitrate selection.

Many server-player systems will do an initial probe or a very simple throughput speed test at the start of a video playback. This is done to get a rough sense of the highest video bitrate in the ABR ladder that the network between the server and player will likely be able to provide under initial network conditions. After the initial testing, clients tend to rely upon passive network observations and will make use of player-side statistics, such as buffer fill rates, to monitor and respond to changing network conditions.

The choice of bitrate occurs within the context of optimizing for some metric monitored by the client, such as the highest achievable video quality or the lowest chance of a rebuffering event (playback stall).
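As a minimal sketch of the client side of this loop (a deliberately naive algorithm with made-up parameters, unlike the proprietary algorithms mentioned above), a player might pick the highest rung of the ladder that fits within a safety margin of its recent throughput observations:

   # Hypothetical ladder, in kbps; the 0.8 safety margin is assumed.
   LADDER_KBPS = [400, 1200, 2500, 4500, 8000]
   SAFETY_MARGIN = 0.8

   def select_bitrate(recent_throughputs_kbps: list[float]) -> int:
       """Naive ABR selection: highest rung under a discounted average
       of recently observed application-layer download throughputs."""
       estimate = (sum(recent_throughputs_kbps)
                   / len(recent_throughputs_kbps))
       budget = SAFETY_MARGIN * estimate
       candidates = [r for r in LADDER_KBPS if r <= budget]
       return candidates[-1] if candidates else LADDER_KBPS[0]

   # Observed download rates (kbps) from the last few segments:
   print(select_bitrate([5200.0, 6100.0, 5800.0]))  # -> 4500
   print(select_bitrate([900.0, 700.0, 450.0]))     # -> 400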
5.4.  Advertising

A variety of business models exist for producers of streaming media. Some content providers derive the majority of the revenue associated with streaming media directly from consumer subscriptions or one-time purchases. Others derive the majority of their streaming-media-associated revenue from advertising. Many content providers derive income from a mix of these and other sources of funding. The inclusion of advertising alongside or interspersed with streaming media content is therefore common in today's media landscape.

Some commonly used forms of advertising can introduce potential user experience issues for a media stream. This section provides a very brief overview of a complex and evolving space, but complete coverage of the potential issues is out of scope for this document.

The same techniques used to allow a media player to switch between renditions of different bitrates at segment or chunk boundaries can also be used to enable the dynamic insertion of advertisements (hereafter referred to as "ads").

Ads may be inserted either with Client-Side Ad Insertion (CSAI) or Server-Side Ad Insertion (SSAI). In CSAI, the ABR manifest will generally include links to an external ad server for some segments of the media stream, while in SSAI the server will remain the same during advertisements, but will include media segments that contain the advertising. In SSAI, the media segments may or may not be sourced from an external ad server as with CSAI.

In general, the more targeted the ad request is, the more requests the ad service needs to be able to handle concurrently. If connectivity to the ad service is poor, this can cause rebuffering even if the underlying video assets (both content and ads) can be accessed quickly. The less targeted the ad request is, the more likely it is that ad requests can be consolidated and can leverage the same caching techniques as the video content.

In some cases, especially with SSAI, advertising space in a stream is reserved for a specific advertiser and can be integrated with the video so that the segments share the same encoding properties, such as bitrate, dynamic range, and resolution. However, in many cases, ad servers integrate with a Supply-Side Platform (SSP) that offers advertising space in real-time auctions via an Ad Exchange, with bids for the advertising space coming from Demand-Side Platforms (DSPs) that collect money from advertisers for delivering the advertisements. Most such Ad Exchanges use application-level protocol specifications published by the Interactive Advertising Bureau [IAB-ADS], an industry trade organization.

This ecosystem balances several competing objectives, and integrating with it naively can produce surprising user experience results. For example, ad server provisioning and/or the bitrate of the ad segments might be different from that of the main video, either of which can sometimes result in video stalls. For another example, since the inserted ads are often produced independently, they might have a different base volume level than the main video, which can make for a jarring user experience.

Additionally, this market has historically had incidents of ad fraud (misreporting of ad delivery to end users for financial gain). As a mitigation for concerns driven by those incidents, some SSPs have required the use of players with features like reporting of ad delivery, or providing information that can be used for user tracking. Some of these and other measures have raised privacy concerns for end users.

In general, this is a rapidly developing space with many considerations, and media streaming operators engaged in advertising may need to research these and other concerns to find solutions that meet their user experience, user privacy, and financial goals. For further reading on mitigations, [BAP] has published some standards and best practices based on user experience research.

5.5.  Bitrate Detection Challenges

This kind of bandwidth-measurement system can experience trouble in several ways that are affected by networking issues. Because adaptive application-level response strategies often use rates as observed by the application layer, there are sometimes inscrutable transport-level protocol behaviors that can produce surprising measurement values when the application-level feedback loop is interacting with a transport-level feedback loop.

A few specific examples of surprising phenomena that affect bitrate detection measurements are described in the following subsections. As these examples will demonstrate, it is common to encounter cases that can deliver application-level measurements that are too low, too high, or (possibly) correct but varying more quickly than a lab-tested selection algorithm might expect.

These effects, and others that cause transport behavior to diverge from lab modeling, can sometimes have a significant impact on bitrate selection and on user quality of experience, especially where players use naive measurement strategies and selection algorithms that don't account for the likelihood of bandwidth measurements that diverge from the true path capacity.

5.5.1.  Idle Time between Segments

When the bitrate selection is chosen substantially below the available capacity of the network path, the response to a segment request will typically complete in much less absolute time than the duration of the requested segment, leaving significant idle time between segment downloads.
This can have a few surprising consequences:

*  TCP slow-start when restarting after idle requires multiple RTTs to re-establish throughput at the network's available capacity. When the active transmission time for segments is substantially shorter than the time between segments, leaving an idle gap between segments that triggers a restart of TCP slow-start, the estimate of the successful download speed coming from the application-visible receive rate on the socket can thus end up much lower than the actual available network capacity. This, in turn, can prevent a shift to the most appropriate bitrate. [RFC7661] provides some mitigations for this effect at the TCP transport layer, for senders who anticipate a high incidence of this problem.

*  Mobile flow-bandwidth spectrum and timing mapping can be impacted by idle time in some networks. The carrier capacity assigned to a link can vary with activity. Depending on the idle-time characteristics, this can result in a lower available bitrate than would be achievable with a steadier transmission in the same network.

Some receiver-side ABR algorithms, such as [ELASTIC], are designed to try to avoid this effect.

Another way to mitigate this effect is with the help of two simultaneous TCP connections, as explained in [MMSys11] for Microsoft Smooth Streaming. In some cases, the system-level TCP slow-start restart can also be disabled, for example as described in [OReilly-HPBN].

5.5.2.  Head-of-Line Blocking

In the event of a lost packet on a TCP connection with Selective Acknowledgement (SACK) support (a common case for segmented delivery in practice), the loss of a packet can provide a confusing bandwidth signal to the receiving application. Because of the sliding window in TCP, many packets may be accepted by the receiver without being available to the application until the missing packet arrives. Upon arrival of the one missing packet after retransmission, the receiver will suddenly get access to a lot of data at the same time.

To a receiver measuring bytes received per unit time at the application layer, and interpreting it as an estimate of the available network bandwidth, this appears as high jitter in the goodput measurement, presenting as a stall followed by a sudden leap that can far exceed the actual capacity of the transport path from the server, when the hole in the received data is filled by a later retransmission.

It is worth noting that more modern transport protocols, such as QUIC, have mitigation of head-of-line blocking as a protocol design goal. See Section 6.3 for more details.
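The following Python sketch illustrates this measurement artifact with fabricated numbers: a steady 1 Mbps arrival rate with one retransmitted packet appears to the application as a stall followed by a burst, even though the wire rate never changed:

   # Fabricated example: packets of 125,000 bytes arriving once per
   # second (1 Mbps), with packet 4 lost and retransmitted 3 seconds
   # later. The application can only read data in order, so packets
   # 5-6 are held back by TCP until packet 4 arrives.
   PACKET_BYTES = 125_000

   delivery_to_app = {   # second -> packets released to the application
       1: 1, 2: 1, 3: 1,  # packets 1-3 readable as they arrive
       4: 0, 5: 0, 6: 0,  # packet 4 lost: packets 5-6 buffered, unread
       7: 4,              # retransmit arrives: 4, 5, 6, 7 all at once
       8: 1, 9: 1, 10: 1,
   }

   for second, packets in delivery_to_app.items():
       goodput_mbps = packets * PACKET_BYTES * 8 / 1_000_000
       print(f"t={second}s app-observed goodput: {goodput_mbps:.1f} Mbps")
   # Output shows 0.0 Mbps during the "stall" and a 4.0 Mbps spike at
   # t=7s, though the path capacity was a constant 1 Mbps throughout.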
5.5.3.  Wide and Rapid Variation in Path Capacity

As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detection have emerged from radio interference and signal-strength effects.

Each of these technologies can experience sudden changes in capacity as the end-user device moves from place to place and encounters new sources of interference. Microwave ovens, for example, can cause a throughput degradation of more than a factor of 2 while active [Micro]. 5G and LTE can likewise easily see rate variation by a factor of 2 or more over a span of seconds as users move around.

These swings in actual transport capacity can result in user experience issues that can be exacerbated by insufficiently responsive ABR algorithms.

5.6.  Measurement Collection

In addition to the measurements media players use to guide their segment-by-segment adaptive streaming requests, streaming media providers may also rely on measurements collected from media players to provide analytics that can be used for decisions such as whether the adaptive encoding bitrates in use are the best ones to provide to media players, or whether current media content caching is providing the best experience for viewers. To that effect, the Consumer Technology Association (CTA), which owns the Web Application Video Ecosystem (WAVE) project, has published two important specifications.

5.6.1.  CTA-2066: Streaming Quality of Experience Events, Properties and Metrics

[CTA-2066] specifies a set of media player events, properties, quality of experience (QoE) metrics, and associated terminology for representing streaming media QoE across systems, media players, and analytics vendors. While all these events, properties, metrics, and associated terminology are used across a number of proprietary analytics and measurement solutions, they were used in slightly (or vastly) different ways that led to interoperability issues. CTA-2066 attempts to address this issue by defining a common terminology, as well as how each metric should be computed, for consistent reporting.

5.6.2.  CTA-5004: Common Media Client Data (CMCD)

Many assume that CDNs have a holistic view into the health and performance of the streaming clients. However, this is not the case. CDNs produce millions of log lines per second across hundreds of thousands of clients, and they have no concept of a "session" as a client would have, so CDNs are decoupled from the metrics the clients generate and report. A CDN cannot tell which request belongs to which playback session, the duration of any media object, the bitrate, or whether any of the clients have stalled and are rebuffering or are about to stall and will rebuffer. The consequence of this decoupling is that a CDN cannot prioritize delivery for when the client needs it most, prefetch content, or trigger alerts when the network itself may be underperforming. One approach to coupling the CDN to the playback sessions is for the clients to communicate standardized media-relevant information to the CDNs while they are fetching data. [CTA-5004] was developed exactly for this purpose.
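As a minimal sketch of the query-parameter form of this coupling (the segment URL and metric values are hypothetical; [CTA-5004] defines the full key registry, a header-based transport alternative, and the exact serialization rules), a client might attach CMCD data to a segment request as follows:

   from urllib.parse import quote

   # A few CMCD keys (see [CTA-5004] for the full registry and
   # formatting rules): br = encoded bitrate (kbps), bl = buffer
   # length (ms), mtp = measured throughput (kbps), sid = session ID.
   def cmcd_query(keys: dict) -> str:
       """Serialize CMCD data as the single URL-encoded 'CMCD' query
       parameter, with keys in alphabetical order."""
       parts = []
       for key in sorted(keys):
           value = keys[key]
           parts.append(f'{key}="{value}"' if isinstance(value, str)
                        else f"{key}={value}")
       return "CMCD=" + quote(",".join(parts), safe="")

   session = {"br": 3200, "bl": 8500, "mtp": 25400,
              "sid": "6e2fb550-c457-11e9-bb97-0800200c9a66"}
   url = ("https://example.com/stream/segment_1042.m4s?"  # hypothetical
          + cmcd_query(session))
   print(url)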
1091 The Secure Reliable Transport protocol [SRT] also uses UDP in an 1092 effort to achieve lower latency for streaming media, although it adds 1093 reliability at the application layer. 1095 Congestion avoidance strategies for deployments using unreliable 1096 transport protocols vary widely in practice, ranging from being 1097 entirely unresponsive to congestion, to using feedback signaling to 1098 change encoder settings (as in [RFC5762]), to using fewer enhancement 1099 layers (as in [RFC6190]), to using proprietary methods to detect 1100 "quality of experience" issues and turn off video in order to allow 1101 less bandwidth-intensive media such as audio to be delivered. 1103 More details about congestion avoidance strategies used with 1104 unreliable transport protocols are included in Section 6.1. 1106 6. Evolution of Transport Protocols and Transport Protocol Behaviors 1108 Because networking resources are shared between users, a good place 1109 to start our discussion is how contention between users, and 1110 mechanisms to resolve that contention in ways that are "fair" between 1111 users, impact streaming media users. These topics are closely tied 1112 to transport protocol behaviors. 1114 As noted in Section 5, ABR response strategies such as HLS [RFC8216] 1115 or DASH [MPEG-DASH] are attempting to respond to changing path 1116 characteristics, and underlying transport protocols are also 1117 attempting to respond to changing path characteristics. 1119 For most of the history of the Internet, these transport protocols, 1120 described in Section 6.1 and Section 6.2, have had relatively 1121 consistent behaviors that have changed slowly, if at all, over time. 1122 Newly standardized transport protocols like QUIC [RFC9000] can behave 1123 differently from existing transport protocols, and these behaviors 1124 may evolve over time more rapidly than currently-used transport 1125 protocols. 1127 For this reason, we have included a description of how the path 1128 characteristics that streaming media providers may see are likely to 1129 evolve over time. 1131 6.1. UDP and Its Behavior 1133 For most of the history of the Internet, we have trusted UDP-based 1134 applications to limit their impact on other users. One of the 1135 strategies used was to use UDP for simple query-response application 1136 protocols, such as DNS, which is often used to send a single-packet 1137 request to look up the IP address for a DNS name, and return a 1138 single-packet response containing the IP address. Although it is 1139 possible to saturate a path between a DNS client and DNS server with 1140 DNS requests, in practice, that was rare enough that DNS included few 1141 mechanisms to resolve contention between DNS users and other users 1142 (whether they are also using DNS, or using other application 1143 protocols). 1145 In recent times, the usage of UDP-based applications that were not 1146 simple query-response protocols has grown substantially, and since 1147 UDP does not provide any feedback mechanism to senders to help limit 1148 impacts on other users, application-level protocols such as RTP 1149 [RFC3550] have been responsible for the decisions that TCP-based 1150 applications have delegated to TCP - what to send, how much to send, 1151 and when to send it. So, the way some UDP-based applications 1152 interact with other users has changed. 
1154 It is also worth pointing out that because UDP has no transport-layer 1155 feedback mechanisms, UDP-based applications that send and receive 1156 substantial amounts of information are expected to provide their own 1157 feedback mechanisms. This expectation is most recently codified in 1158 Best Current Practice [RFC8085]. 1160 RTP relies on RTCP Sender and Receiver Reports [RFC3550] as its own 1161 feedback mechanism, and even includes Circuit Breakers for Unicast 1162 RTP Sessions [RFC8083] for situations when normal RTP congestion 1163 control has not been able to react sufficiently to RTP flows sending 1164 at rates that result in sustained packet loss. 1166 The notion of "Circuit Breakers" has also been applied to other UDP 1167 applications in [RFC8084], such as tunneling packets over UDP that 1168 are potentially not congestion-controlled (for example, 1169 "Encapsulating MPLS in UDP", as described in [RFC7510]). If 1170 streaming media is carried in tunnels encapsulated in UDP, these 1171 media streams may encounter "tripped circuit breakers", with 1172 resulting user-visible impacts. 1174 6.2. TCP and Its Behavior 1176 For most of the history of the Internet, we have trusted TCP to limit 1177 the impact of applications that sent a significant number of packets, 1178 in either or both directions, on other users. Although early 1179 versions of TCP were not particularly good at limiting this impact 1180 [RFC0793], the addition of Slow Start and Congestion Avoidance, as 1181 described in [RFC2001], were critical in allowing TCP-based 1182 applications to "use as much bandwidth as possible, but to avoid 1183 using more bandwidth than was possible". Although dozens of RFCs 1184 have been written refining TCP decisions about what to send, how much 1185 to send, and when to send it, since 1988 [Jacobson-Karels] the 1186 signals available for TCP senders remained unchanged - end-to-end 1187 acknowledgements for packets that were successfully sent and 1188 received, and packet timeouts for packets that were not. 1190 The success of the largely TCP-based Internet is evidence that the 1191 mechanisms TCP used to achieve equilibrium quickly, at a point where 1192 TCP senders do not interfere with other TCP senders for sustained 1193 periods of time, have been largely successful. The Internet 1194 continued to work even when the specific mechanisms used to reach 1195 equilibrium changed over time. Because TCP provides a common tool to 1196 avoid contention, as some TCP-based applications like FTP were 1197 largely replaced by other TCP-based applications like HTTP, the 1198 transport behavior remained consistent. 1200 In recent times, the TCP goal of probing for available bandwidth, and 1201 "backing off" when a network path is saturated, has been supplanted 1202 by the goal of avoiding growing queues along network paths, which 1203 prevent TCP senders from reacting quickly when a network path is 1204 saturated. Congestion control mechanisms such as COPA [COPA18] and 1205 BBR [I-D.cardwell-iccrg-bbr-congestion-control] make these decisions 1206 based on measured path delays, assuming that if the measured path 1207 delay is increasing, the sender is injecting packets onto the network 1208 path faster than the receiver can accept them, so the sender should 1209 adjust its sending rate accordingly. 1211 Although TCP behavior has changed over time, the common practice of 1212 implementing TCP as part of an operating system kernel has acted to 1213 limit how quickly TCP behavior can change. 
Even with the widespread 1214 use of automated operating system update installation on many end- 1215 user systems, streaming media providers could have a reasonable 1216 expectation that they could understand TCP transport protocol 1217 behaviors, and that those behaviors would remain relatively stable in 1218 the short term. 1220 6.3. The QUIC Protocol and Its Behavior 1222 The QUIC protocol, developed from a proprietary protocol into an IETF 1223 standards-track protocol [RFC9000], turns many of the statements made 1224 in Section 6.1 and Section 6.2 on their heads. 1226 Although QUIC provides an alternative to the TCP and UDP transport 1227 protocols, QUIC is itself encapsulated in UDP. As noted elsewhere in 1228 Section 7.1, the QUIC protocol encrypts almost all of its transport 1229 parameters, and all of its payload, so any intermediaries that 1230 network operators may be using to troubleshoot HTTP streaming media 1231 performance issues, perform analytics, or even intercept exchanges in 1232 current applications will not work for QUIC-based applications 1233 without making changes to their networks. Section 7 describes the 1234 implications of media encryption in more detail. 1236 While QUIC is designed as a general-purpose transport protocol, and 1237 can carry different application-layer protocols, the current 1238 standardized mapping is for HTTP/3 [I-D.ietf-quic-http], which 1239 describes how QUIC transport features are used for HTTP. The 1240 convention is for HTTP/3 to run over UDP port 443 [Port443] but this 1241 is not a strict requirement. 1243 When HTTP/3 is encapsulated in QUIC, which is then encapsulated in 1244 UDP, streaming operators (and network operators) might see UDP 1245 traffic patterns that are similar to HTTP(S) over TCP. Since earlier 1246 versions of HTTP(S) rely on TCP, UDP ports may be blocked for any 1247 port numbers that are not commonly used, such as UDP 53 for DNS. 1248 Even when UDP ports are not blocked and HTTP/3 can flow, streaming 1249 operators (and network operators) may severely rate-limit this 1250 traffic because they do not expect to see legitimate high-bandwidth 1251 traffic such as streaming media over the UDP ports that HTTP/3 is 1252 using. 1254 As noted in Section 5.5.2, because TCP provides a reliable, in-order 1255 delivery service for applications, any packet loss for a TCP 1256 connection causes "head-of-line blocking", so that no TCP segments 1257 arriving after a packet is lost will be delivered to the receiving 1258 application until the lost packet is retransmitted, allowing in-order 1259 delivery to the application to continue. As described in [RFC9000], 1260 QUIC connections can carry multiple streams, and when packet losses 1261 do occur, only the streams carried in the lost packet are delayed. 1263 A QUIC extension currently being specified ([I-D.ietf-quic-datagram]) 1264 adds the capability for "unreliable" delivery, similar to the service 1265 provided by UDP, but these datagrams are still subject to the QUIC 1266 connection's congestion controller, providing some transport-level 1267 congestion avoidance measures, which UDP does not. 1269 As noted in Section 6.2, there is an increasing interest in transport 1270 protocol behaviors that respond to delay measurements, instead of 1271 responding to packet loss. 
These behaviors may deliver improved user 1272 experience, but in some cases have not responded to sustained packet 1273 loss, which exhausts available buffers along the end-to-end path that 1274 may affect other users sharing that path. The QUIC protocol provides 1275 a set of congestion control hooks that can be used for algorithm 1276 agility, and [RFC9002] defines a basic algorithm with transport 1277 behavior that is roughly similar to TCP NewReno [RFC6582]. However, 1278 QUIC senders can and do unilaterally choose to use different 1279 algorithms such as loss-based CUBIC [RFC8312], delay-based COPA or 1280 BBR, or even something completely different. 1282 We do have experience with deploying new congestion controllers 1283 without melting the Internet (CUBIC is one example), but the point 1284 mentioned in Section 6.2 about TCP being implemented in operating 1285 system kernels is also different with QUIC. Although QUIC can be 1286 implemented in operating system kernels, one of the design goals when 1287 this work was chartered was "QUIC is expected to support rapid, 1288 distributed development and testing of features", and to meet this 1289 expectation, many implementers have chosen to implement QUIC in user 1290 space, outside the operating system kernel, and to even distribute 1291 QUIC libraries with their own applications. 1293 The decision to deploy a new version of QUIC is relatively 1294 uncontrolled, compared to other widely used transport protocols, and 1295 this can include new transport behaviors that appear without much 1296 notice except to the QUIC endpoints. At IETF 105, Christian Huitema 1297 and Brian Trammell presented a talk on "Congestion Defense in Depth" 1298 [CDiD], that explored potential concerns about new QUIC congestion 1299 controllers being broadly deployed without the testing and 1300 instrumentation that current major content providers routinely 1301 include. The sense of the room at IETF 105 was that the current 1302 major content providers understood what is at stake when they deploy 1303 new congestion controllers, but this presentation, and the related 1304 discussion in TSVAREA minutes from IETF 105 ([tsvarea-105], are still 1305 worth a look for new and rapidly growing content providers. 1307 It is worth considering that if TCP-based HTTP traffic and UDP-based 1308 HTTP/3 traffic are allowed to enter operator networks on roughly 1309 equal terms, questions of fairness and contention will be heavily 1310 dependent on interactions between the congestion controllers in use 1311 for TCP-based HTTP traffic and UDP-based HTTP/3 traffic. 1313 More broadly, [I-D.ietf-quic-manageability] discusses manageability 1314 of the QUIC transport protocol, focusing on the implications of 1315 QUIC's design and wire image on network operations involving QUIC 1316 traffic. It discusses what network operators can consider in some 1317 detail. 1319 7. Streaming Encrypted Media 1321 "Encrypted Media" has at least three meanings: 1323 * Media encrypted at the application layer, typically using some 1324 sort of Digital Rights Management (DRM) system, and typically 1325 remaining encrypted "at rest", when senders and receivers store 1326 it. 1328 * Media encrypted by the sender at the transport layer, and 1329 remaining encrypted until it reaches the ultimate media consumer 1330 (in this document, referred to as "end-to-end media encryption"). 
1332 * Media encrypted by the sender at the transport layer, and 1333 remaining encrypted until it reaches some intermediary that is 1334 _not_ the ultimate media consumer, but has credentials allowing 1335 decryption of the media content. This intermediary may examine 1336 and even transform the media content in some way, before 1337 forwarding re-encrypted media content (in this document referred 1338 to as "hop-by-hop media encryption"). 1340 Both "hop-by-hop" and "end-to-end" encrypted transport may carry 1341 media that is, in addition, encrypted at the application layer. 1343 Each of these encryption strategies is intended to achieve a 1344 different goal. For instance, application-level encryption may be 1345 used for business purposes, such as avoiding piracy or enforcing 1346 geographic restrictions on playback, while transport-layer encryption 1347 may be used to prevent media steam manipulation or to protect 1348 manifests. 1350 This document does not take a position on whether those goals are 1351 "valid" (whatever that might mean). 1353 In this document, we will focus on media encrypted at the transport 1354 layer, whether encrypted "hop-by-hop" or "end-to-end". Because media 1355 encrypted at the application layer will only be processed by 1356 application-level entities, this encryption does not have transport- 1357 layer implications. 1359 Both "End-to-End" and "Hop-by-Hop" media encryption have specific 1360 implications for streaming operators. These are described in 1361 Section 7.2 and Section 7.3. 1363 7.1. General Considerations for Media Encryption 1365 The use of strong encryption does provide confidentiality for 1366 encrypted streaming media, from the sender to either an intermediary 1367 or the ultimate media consumer, and this does prevent Deep Packet 1368 Inspection by any intermediary that does not possess credentials 1369 allowing decryption. However, even encrypted content streams may be 1370 vulnerable to traffic analysis. An intermediary that can identify an 1371 encrypted media stream without decrypting it, may be able to 1372 "fingerprint" the encrypted media stream of known content, and then 1373 match the targeted media stream against the fingerprints of known 1374 content. This protection can be lessened if a media provider is 1375 repeatedly encrypting the same content. [CODASPY17] is an example of 1376 what is possible when identifying HTTPS-protected videos over TCP 1377 transport, based either on the length of entire resources being 1378 transferred, or on characteristic packet patterns at the beginning of 1379 a resource being transferred. 1381 If traffic analysis is successful at identifying encrypted content 1382 and associating it with specific users, this breaks privacy as 1383 certainly as examining decrypted traffic. 1385 Because HTTPS has historically layered HTTP on top of TLS, which is 1386 in turn layered on top of TCP, intermediaries do have access to 1387 unencrypted TCP-level transport information, such as retransmissions, 1388 and some carriers exploited this information in attempts to improve 1389 transport-layer performance [RFC3135]. The most recent standardized 1390 version of HTTPS, HTTP/3 [I-D.ietf-quic-http], uses the QUIC protocol 1391 [RFC9000] as its transport layer. QUIC relies on the TLS 1.3 initial 1392 handshake [RFC8446] only for key exchange [RFC9001], and encrypts 1393 almost all transport parameters itself, with the exception of a few 1394 invariant header fields. 
In the QUIC short header, the only 1395 transport-level parameter which is sent "in the clear" is the 1396 Destination Connection ID [RFC8999], and even in the QUIC long 1397 header, the only transport-level parameters sent "in the clear" are 1398 the Version, Destination Connection ID, and Source Connection ID. 1399 For these reasons, HTTP/3 is significantly more "opaque" than HTTPS 1400 with HTTP/1 or HTTP/2. 1402 7.2. Considerations for "Hop-by-Hop" Media Encryption 1404 Although the IETF has put considerable emphasis on end-to-end 1405 streaming media encryption, there are still important use cases that 1406 require the insertion of intermediaries. 1408 There are a variety of ways to involve intermediaries, and some are 1409 much more intrusive than others. 1411 From a content provider's perspective, a number of considerations are 1412 in play. The first question is likely whether the content provider 1413 intends that intermediaries are explicitly addressed from endpoints, 1414 or whether the content provider is willing to allow intermediaries to 1415 "intercept" streaming content transparently, with no awareness or 1416 permission from either endpoint. 1418 If a content provider does not actively work to avoid interception by 1419 intermediaries, the effect will be indistinguishable from 1420 "impersonation attacks", and endpoints cannot be assumed of any level 1421 of privacy. 1423 Assuming that a content provider does intend to allow intermediaries 1424 to participate in content streaming, and does intend to provide some 1425 level of privacy for endpoints, there are a number of possible tools, 1426 either already available or still being specified. These include 1428 * Server And Network assisted DASH [MPEG-DASH-SAND] - this 1429 specification introduces explicit messaging between DASH clients 1430 and network elements or between various network elements for the 1431 purpose of improving the efficiency of streaming sessions by 1432 providing information about real-time operational characteristics 1433 of networks, servers, proxies, caches, CDNs, as well as DASH 1434 client's performance and status. 1436 * "Double Encryption Procedures for the Secure Real-Time Transport 1437 Protocol (SRTP)" [RFC8723] - this specification provides a 1438 cryptographic transform for the Secure Real-time Transport 1439 Protocol that provides both hop-by-hop and end-to-end security 1440 guarantees. 1442 * Secure Media Frames [SFRAME] - [RFC8723] is closely tied to SRTP, 1443 and this close association impeded widespread deployment, because 1444 it could not be used for the most common media content delivery 1445 mechanisms. A more recent proposal, Secure Media Frames [SFRAME], 1446 also provides both hop-by-hop and end-to-end security guarantees, 1447 but can be used with other transport protocols beyond SRTP. 1449 If a content provider chooses not to involve intermediaries, this 1450 choice should be carefully considered. As an example, if media 1451 manifests are encrypted end-to-end, network providers who had been 1452 able to lower offered quality and reduce on their networks will no 1453 longer be able to do that. Some resources that might inform this 1454 consideration are in [RFC8825] (for WebRTC) and 1455 [I-D.ietf-quic-manageability] (for HTTP/3 and QUIC). 1457 7.3. 
Considerations for "End-to-End" Media Encryption 1459 "End-to-end" media encryption offers the potential of providing 1460 privacy for streaming media consumers, with the idea being that if an 1461 unauthorized intermediary can't decrypt streaming media, the 1462 intermediary can't use Deep Packet Inspection (DPI) to examine HTTP 1463 request and response headers and identify the media content being 1464 streamed. 1466 "End-to-end" media encryption has become much more widespread in the 1467 years since the IETF issued "Pervasive Monitoring Is an Attack" 1468 [RFC7258] as a Best Current Practice, describing pervasive monitoring 1469 as a much greater threat than previously appreciated. After the 1470 Snowden disclosures, many content providers made the decision to use 1471 HTTPS protection - HTTP over TLS - for most or all content being 1472 delivered as a routine practice, rather than in exceptional cases for 1473 content that was considered "sensitive". 1475 Unfortunately, as noted in [RFC7258], there is no way to prevent 1476 pervasive monitoring by an "attacker", while allowing monitoring by a 1477 more benign entity who "only" wants to use DPI to examine HTTP 1478 requests and responses in order to provide a better user experience. 1479 If a modern encrypted transport protocol is used for end-to-end media 1480 encryption, intermediary streaming operators are unable to examine 1481 transport and application protocol behavior. As described in 1482 Section 7.2, only an intermediary streaming operator who is 1483 explicitly authorized to examine packet payloads, rather than 1484 intercepting packets and examining them without authorization, can 1485 continue these practices. 1487 [RFC7258] said that "The IETF will strive to produce specifications 1488 that mitigate pervasive monitoring attacks", so streaming operators 1489 should expect the IETF's direction toward preventing unauthorized 1490 monitoring of IETF protocols to continue for the forseeable future. 1492 8. Further Reading and References 1494 Editor's note: This section is to be kept in a living document where 1495 future references, links and/or updates to the existing references 1496 will be reflected. That living document is likely to be an IETF- 1497 owned Wiki: https://tinyurl.com/streaming-opcons-reading 1499 8.1. Industry Terminology 1501 * SVA Glossary: https://glossary.streamingvideoalliance.org/ 1503 * Datazoom Video Player Data Dictionary: 1504 https://help.datazoom.io/hc/en-us/articles/360031323311 1506 * Datazoom Video Metrics Encyclopedia: https://help.datazoom.io/hc/ 1507 en-us/articles/360046177191 1509 8.2. Surveys and Tutorials 1511 8.2.1. Encoding 1513 The following papers describe how video is encoded, different video 1514 encoding standards and tradeoffs in selecting encoding parameters. 1516 * Overview of the Versatile Video Coding (VVC) Standard and its 1517 Applications (https://ieeexplore.ieee.org/document/9503377) 1519 * Video Compression - From Concepts to the H.264/AVC Standard 1520 (https://ieeexplore.ieee.org/document/1369695) 1522 * Developments in International Video Coding Standardization After 1523 AVC, With an Overview of Versatile Video Coding (VVC) 1524 (https://ieeexplore.ieee.org/document/9328514) 1526 * A Technical Overview of AV1 (https://ieeexplore.ieee.org/ 1527 document/9363937) 1529 * CTU Depth Decision Algorithms for HEVC: A Survey 1530 (https://arxiv.org/abs/2104.08328) 1532 8.2.2. 
Packaging 1534 The following papers summarize the methods for selecting packaging 1535 configurations such as the resolution-bitrate pairs, segment 1536 durations, use of constant vs. variable-duration segments, etc. 1538 * Deep Reinforced Bitrate Ladders for Adaptive Video Streaming 1539 (https://dl.acm.org/doi/10.1145/3458306.3458873) 1541 * Comparing Fixed and Variable Segment Durations for Adaptive Video 1542 Streaming: a Holistic Analysis (https://dl.acm.org/ 1543 doi/10.1145/3339825.3391858) 1545 8.2.3. Content Delivery 1547 The following links describe some of the issues and solutions 1548 regarding the interconnecting of the content delivery networks. 1550 * Open Caching: Open standards for Caching in ISP Networks: 1551 https://www.streamingvideoalliance.org/working-group/open-caching/ 1553 * Netflix Open Connect: https://openconnect.netflix.com 1555 8.2.4. ABR Algorithms 1557 The two surveys describe and compare different rate-adaptation 1558 algorithms in terms of different metrics like achieved bitrate/ 1559 quality, stall rate/duration, bitrate switching frequency, fairness, 1560 network utilization, etc. 1562 * A Survey on Bitrate Adaptation Schemes for Streaming Media Over 1563 HTTP (https://ieeexplore.ieee.org/document/8424813) 1565 * A Survey of Rate Adaptation Techniques for Dynamic Adaptive 1566 Streaming Over HTTP (https://ieeexplore.ieee.org/document/7884970) 1568 8.2.5. Low-Latency Live Adaptive Streaming 1570 The following papers describe the peculiarities of adaptive streaming 1571 in low-latency live streaming scenarios. 1573 * Catching the Moment with LoL+ in Twitch-like Low-latency Live 1574 Streaming Platforms (https://ieeexplore.ieee.org/document/9429986) 1576 * Data-driven Bandwidth Prediction Models and Automated Model 1577 Selection for Low Latency (https://ieeexplore.ieee.org/ 1578 document/9154522) 1580 * Performance Analysis of ACTE: A Bandwidth Prediction Method for 1581 Low-latency Chunked Streaming (https://dl.acm.org/ 1582 doi/10.1145/3387921) 1584 * Online Learning for Low-latency Adaptive Streaming 1585 (https://dl.acm.org/doi/10.1145/3339825.3397042) 1587 * Tightrope Walking in Low-latency Live Streaming: Optimal Joint 1588 Adaptation of Video Rate and Playback Speed (https://dl.acm.org/ 1589 doi/10.1145/3458305.3463382) 1591 * Content-aware Playback Speed Control for Low-latency Live 1592 Streaming of Sports (https://dl.acm.org/ 1593 doi/10.1145/3458305.3478437) 1595 8.2.6. Server/Client/Network Collaboration 1597 The following papers explain the benefits of server and network 1598 assistance in client-driven streaming systems. There is also a good 1599 reference about how congestion affects video quality and how rate 1600 control works in streaming applications. 1602 * Manus Manum Lavat: Media Clients and Servers Cooperating with 1603 Common Media Client/Server Data (https://dl.acm.org/ 1604 doi/10.1145/3472305.3472886) 1606 * Common media client data (CMCD): initial findings 1607 (https://dl.acm.org/doi/10.1145/3458306.3461444) 1609 * SDNDASH: Improving QoE of HTTP Adaptive Streaming Using Software 1610 Defined Networking (https://dl.acm.org/ 1611 doi/10.1145/2964284.2964332) 1613 * Caching in HTTP Adaptive Streaming: Friend or Foe? 
1614 (https://dl.acm.org/doi/10.1145/2578260.2578270) 1616 * A Survey on Multi-Access Edge Computing Applied to Video 1617 Streaming: Some Research Issues and Challenges 1618 (https://ieeexplore.ieee.org/document/9374553) 1620 * The Ultimate Guide to Internet Congestion Control 1621 (https://www.compiralabs.com/ultimate-guide-congestion-control) 1623 8.2.7. QoE Metrics 1625 The following papers describe various QoE metrics one can use in 1626 streaming applications. 1628 * QoE Management of Multimedia Streaming Services in Future 1629 Networks: a Tutorial and Survey (https://ieeexplore.ieee.org/ 1630 document/8930519) 1632 * A Survey on Quality of Experience of HTTP Adaptive Streaming 1633 (https://ieeexplore.ieee.org/document/6913491) 1635 * QoE Modeling for HTTP Adaptive Video Streaming-A Survey and Open 1636 Challenges (https://ieeexplore.ieee.org/document/8666971) 1638 8.2.8. Point Clouds and Immersive Media 1640 The following papers explain the latest developments in the immersive 1641 media domain (for video and audio) and the developing standards for 1642 such media. 1644 * A Survey on Adaptive 360o Video Streaming: Solutions, Challenges 1645 and Opportunities (https://ieeexplore.ieee.org/document/9133103) 1647 * MPEG Immersive Video Coding Standard (https://ieeexplore.ieee.org/ 1648 document/9374648) 1650 * Emerging MPEG Standards for Point Cloud Compression 1651 (https://ieeexplore.ieee.org/document/8571288) 1653 * Compression of Sparse and Dense Dynamic Point Clouds--Methods and 1654 Standards (https://ieeexplore.ieee.org/document/9457097) 1656 * MPEG Standards for Compressed Representation of Immersive Audio 1657 (https://ieeexplore.ieee.org/document/9444109) 1659 * An Overview of Omnidirectional MediA Format (OMAF) 1660 (https://ieeexplore.ieee.org/document/9380215) 1662 * From Capturing to Rendering: Volumetric Media Delivery with Six 1663 Degrees of Freedom (https://ieeexplore.ieee.org/document/9247522) 1665 8.3. Open-Source Tools 1667 * 5G-MA: https://www.5g-mag.com/reference-tools 1669 * dash.js: http://reference.dashif.org/dash.js/latest/samples/ 1671 * DASH-IF Conformance: https://conformance.dashif.org 1673 * ExoPlayer: https://github.com/google/ExoPlayer 1675 * FFmpeg: https://www.ffmpeg.org/ 1677 * GPAC: https://gpac.wp.imt.fr/ 1679 * hls.js: https://github.com/video-dev/hls.js 1681 * OBS Studio: https://obsproject.com/ 1682 * Shaka Player: https://github.com/google/shaka-player 1684 * Shaka Packager: https://github.com/google/shaka-packager 1686 * Traffic Control CDN: https://trafficcontrol.apache.org/ 1688 * VideoLAN: https://www.videolan.org/projects/ 1690 * video.js: https://github.com/videojs/video.js 1692 8.4. Technical Events 1694 * ACM Mile High Video (MHV): https://mile-high.video/ 1696 * ACM Multimedia Systems (MMSys): https://acmmmsys.org 1698 * ACM Multimedia (MM): https://acmmm.org 1700 * ACM NOSSDAV: https://www.nossdav.org/ 1702 * ACM Packet Video: https://packet.video/ 1704 * Demuxed and meetups: https://demuxed.com/ and https://demuxed.com/ 1705 events/ 1707 * DVB World: https://www.dvbworld.org 1709 * EBU BroadThinking: https://tech.ebu.ch/events/broadthinking2021 1711 * IBC Conference: https://show.ibc.org/conference/ibc-conference 1713 * IEEE Int. Conf. on Multimedia and Expo (ICME) 1715 * Media Web Symposium: https://www.fokus.fraunhofer.de/de/go/mws 1717 * Live Video Stack: https://sh2021.livevideostack.com 1719 * Picture Coding Symp. (PCS) 1721 * SCTE Expo: https://expo.scte.org/ 1723 8.5. 
List of Organizations Working on Streaming Media 1725 * 3GPP SA4: https://www.3gpp.org/specifications-groups/sa-plenary/ 1726 sa4-codec 1728 * 5G-MAG: https://www.5g-mag.com/ 1729 * AOM: http://aomedia.org/ 1731 * ATSC: https://www.atsc.org/ 1733 * CTA WAVE: https://cta.tech/Resources/Standards/WAVE-Project 1735 * DASH Industry Forum: https://dashif.org/ 1737 * DVB: https://dvb.org/ 1739 * HbbTV: https://www.hbbtv.org/ 1741 * HESP Alliance: https://www.hespalliance.org/ 1743 * IAB: https://www.iab.com/ 1745 * MPEG: https://www.mpegstandards.org/ 1747 * Streaming Video Alliance: https://www.streamingvideoalliance.org/ 1749 * SCTE: https://www.scte.org/ 1751 * SMPTE: https://www.smpte.org/ 1753 * SRT Alliance: https://www.srtalliance.org/ 1755 * Video Services Forum: https://vsf.tv/ 1757 * VQEG: https://www.its.bldrdoc.gov/vqeg/vqeg-home.aspx 1759 * W3C: https://www.w3.org/ 1761 8.6. Topics to Keep an Eye on 1763 8.6.1. 5G and Media 1765 5G new radio and systems technologies provide new functionalities for 1766 video distribution. 5G targets not only smartphones, but also new 1767 devices such as augmented reality glasses or automotive receivers. 1768 Higher bandwidth, lower latencies, edge and cloud computing 1769 functionalities, service-based architectures, low power consumption, 1770 broadcast/multicast functionalities and other network functions come 1771 hand in hand with new media formats and processing capabilities 1772 promising better and more consistent quality for traditional video 1773 streaming services as well as enabling new experiences such as 1774 immersive media and augmented realities. 1776 * 5G Multimedia Standardization (https://www.riverpublishers.com/ 1777 journal_read_html_article.php?j=JICTS/6/1/8) 1779 8.6.2. Ad Insertion 1781 Ads can be inserted at different stages in the streaming workflow, on 1782 the server side or client side. The DASH-IF guidelines detail 1783 server-side ad-insertion with period replacements based on 1784 manipulating the manifest. HLS interstitials provide a similar 1785 approach. The idea is that the manifest can be changed and point to 1786 a sub-playlist of segments, possibly located on a different location. 1787 This approach results in efficient resource usage in the network, as 1788 duplicate caching is avoided, but some intelligence at the player is 1789 needed to deal with content transitions (e.g., codec changes, 1790 timeline gaps, etc.). Player support for such content is gradually 1791 maturing. Other important technologies for ad insertion include 1792 signalling of ads and breaks that is still typically based on SCTE-35 1793 for HLS and SCTE-214 for DASH. Such signals provide useful 1794 information for scheduling the ads and contacting ad servers. The 1795 usage of SCTE-35 for ad insertion is popular in the broadcast 1796 industry, while the exact usage in the OTT space is still being 1797 discussed in SCTE. Another important technology is identification of 1798 ads, such as based on ad-id or other commercial entities that provide 1799 such services. The identification of the ad in a manifest or stream 1800 is usually standardized by SMPTE. Other key technologies for ad 1801 insertion include tracking of viewer impressions, usually based on 1802 Video Ad Serving Template (VAST) defined by IAB. 
1804 * DASH-IF Ad Insertion Guidelines: https://dashif.org/docs/CR-Ad- 1805 Insertion-r7.pdf 1807 * SCTE-214-1: https://www.scte.org/standards-development/library/ 1808 standards-catalog/ansiscte-214-1-2016/ 1810 * RP 2092-1:2015 - SMPTE Recommended Practice - Advertising Digital 1811 Identifier (Ad-ID) Representations: https://ieeexplore.ieee.org/ 1812 document/7291518 1814 * IAB Tech Lab Digital Video Studio: https://iabtechlab.com/audio- 1815 video/tech-lab-digital-video-suite/ 1817 8.6.3. Contribution and Ingest 1819 There are different contribution and ingest specifications dealing 1820 with different use cases. A common case is contribution that 1821 previously happened over satellite to a broadcast or streaming 1822 headend. RIST and SRT are examples of such contribution protocols. 1823 Within a streaming headend the encoder and packager/CDN may have an 1824 ingest/contribution interface as well. This is specified by the 1825 DASH-IF Ingest. 1827 * DASH-IF Ingest: https://github.com/Dash-Industry-Forum/Ingest 1829 * RIST: https://www.rist.tv/ 1831 * SRT: https://github.com/Haivision/srt 1833 8.6.4. Synchronized Encoding and Packaging 1835 Practical streaming headends need redundant encoders and packagers to 1836 operate without glitches and blackouts. The redundant operation 1837 requires synchronization between two or more encoders and also 1838 between two or more packagers that possibly handle different inputs 1839 and outputs, generating compatible inter-changeable output 1840 representations. This problem is important for anyone developing a 1841 streaming headend at scale, and the synchronization problem is 1842 currently under discussion in the wider community. Follow the 1843 developments at: https://sites.google.com/view/encodersyncworkshop/ 1844 home 1846 8.6.5. WebRTC-Based Streaming 1848 WebRTC is increasingly being used for streaming of time-sensitive 1849 content such as live sporting events. Innovations in cloud computing 1850 allow implementers to efficiently scale delivery of content using 1851 WebRTC. Support for WebRTC communication is available on all modern 1852 web browsers and is available on native clients for all major 1853 platforms. 1855 * DASH-IF WebRTC Discussions: https://dashif.org/webRTC/ 1857 * Overview of WebRTC: https://webrtc.org/ 1859 9. IANA Considerations 1861 This document requires no actions from IANA. 1863 10. Security Considerations 1865 Security is an important matter for streaming media applications and 1866 it was briefly touched on in Section 7.1. This document itself 1867 introduces no new security issues. 1869 11. Acknowledgments 1871 Thanks to Alexandre Gouaillard, Aaron Falk, Chris Lemmons, Dave Oran, 1872 Glenn Deen, Kyle Rose, Leslie Daigle, Lucas Pardue, Mark Nottingham, 1873 Matt Stock, Mike English, Renan Krishna, Roni Even, Sanjay Mishra, 1874 and Will Law for very helpful suggestions, reviews and comments. 1876 12. Informative References 1878 [ABRSurvey] 1879 Taani, B., Begen, A. C., Timmerer, C., Zimmermann, R., and 1880 A. Bentaleb et al, "A Survey on Bitrate Adaptation Schemes 1881 for Streaming Media Over HTTP", IEEE Communications 1882 Surveys & Tutorials , 2019, 1883 . 1885 [BAP] "The Coalition for Better Ads", n.d., 1886 . 1888 [CDiD] Huitema, C. and B. Trammell, "(A call for) Congestion 1889 Defense in Depth", July 2019, 1890 . 1893 [CMAF-CTE] Law, W., "Ultra-Low-Latency Streaming Using Chunked- 1894 Encoded and Chunked Transferred CMAF", October 2018, 1895 . 1898 [CODASPY17] 1899 Reed, A. and M. 
Kranch, "Identifying HTTPS-Protected 1900 Netflix Videos in Real-Time", ACM CODASPY , March 2017, 1901 . 1903 [CoDel] Nichols, K. and V. Jacobson, "Controlling Queue Delay", 1904 Communications of the ACM, Volume 55, Issue 7, pp. 42-50 , 1905 July 2012. 1907 [COPA18] Arun, V. and H. Balakrishnan, "Copa: Practical Delay-Based 1908 Congestion Control for the Internet", USENIX NSDI , April 1909 2018, . 1911 [CTA-2066] Consumer Technology Association, "Streaming Quality of 1912 Experience Events, Properties and Metrics", March 2020, 1913 . 1916 [CTA-5004] CTA, "Common Media Client Data (CMCD)", September 2020, 1917 . 1920 [CVNI] "Cisco Visual Networking Index: Forecast and Trends, 1921 2017-2022 White Paper", 27 February 2019, 1922 . 1926 [ELASTIC] De Cicco, L., Caldaralo, V., Palmisano, V., and S. 1927 Mascolo, "ELASTIC: A client-side controller for dynamic 1928 adaptive streaming over HTTP (DASH)", Packet Video 1929 Workshop , December 2013, 1930 . 1932 [Encodings] 1933 Apple, Inc, "HLS Authoring Specification for Apple 1934 Devices", June 2020, 1935 . 1939 [I-D.cardwell-iccrg-bbr-congestion-control] 1940 Cardwell, N., Cheng, Y., Yeganeh, S. H., Swett, I., and V. 1941 Jacobson, "BBR Congestion Control", Work in Progress, 1942 Internet-Draft, draft-cardwell-iccrg-bbr-congestion- 1943 control-01, 7 November 2021, 1944 . 1947 [I-D.draft-pantos-hls-rfc8216bis] 1948 Pantos, R., "HTTP Live Streaming 2nd Edition", Work in 1949 Progress, Internet-Draft, draft-pantos-hls-rfc8216bis-10, 1950 8 November 2021, . 1953 [I-D.ietf-httpbis-cache] 1954 Fielding, R. T., Nottingham, M., and J. Reschke, "HTTP 1955 Caching", Work in Progress, Internet-Draft, draft-ietf- 1956 httpbis-cache-19, 12 September 2021, 1957 . 1960 [I-D.ietf-quic-datagram] 1961 Pauly, T., Kinnear, E., and D. Schinazi, "An Unreliable 1962 Datagram Extension to QUIC", Work in Progress, Internet- 1963 Draft, draft-ietf-quic-datagram-10, 4 February 2022, 1964 . 1967 [I-D.ietf-quic-http] 1968 Bishop, M., "Hypertext Transfer Protocol Version 3 1969 (HTTP/3)", Work in Progress, Internet-Draft, draft-ietf- 1970 quic-http-34, 2 February 2021, 1971 . 1974 [I-D.ietf-quic-manageability] 1975 Kuehlewind, M. and B. Trammell, "Manageability of the QUIC 1976 Transport Protocol", Work in Progress, Internet-Draft, 1977 draft-ietf-quic-manageability-14, 21 January 2022, 1978 . 1981 [I-D.ietf-quic-qlog-h3-events] 1982 Marx, R., Niccolini, L., and M. Seemann, "HTTP/3 and QPACK 1983 event definitions for qlog", Work in Progress, Internet- 1984 Draft, draft-ietf-quic-qlog-h3-events-00, 10 June 2021, 1985 . 1988 [I-D.ietf-quic-qlog-main-schema] 1989 Marx, R., Niccolini, L., and M. Seemann, "Main logging 1990 schema for qlog", Work in Progress, Internet-Draft, draft- 1991 ietf-quic-qlog-main-schema-01, 25 October 2021, 1992 . 1995 [I-D.ietf-quic-qlog-quic-events] 1996 Marx, R., Niccolini, L., and M. Seemann, "QUIC event 1997 definitions for qlog", Work in Progress, Internet-Draft, 1998 draft-ietf-quic-qlog-quic-events-00, 10 June 2021, 1999 . 2002 [IAB-ADS] "IAB", n.d., . 2004 [IABcovid] Arkko, J., Farrel, S., Kühlewind, M., and C. Perkins, 2005 "Report from the IAB COVID-19 Network Impacts Workshop 2006 2020", November 2020, . 2009 [Jacobson-Karels] 2010 Jacobson, V. and M. Karels, "Congestion Avoidance and 2011 Control", November 1988, 2012 . 2014 [Labovitz] Labovitz, C., "Network traffic insights in the time of 2015 COVID-19: April 9 update", April 2020, 2016 . 
2019 [LabovitzDDoS] 2020 Takahashi, D., "Why the game industry is still vulnerable 2021 to DDoS attacks", May 2018, 2022 . 2026 [LL-DASH] DASH-IF, "Low-latency Modes for DASH", March 2020, 2027 . 2029 [Micro] Taher, T. M., Misurac, M. J., LoCicero, J. L., and D. R. 2030 Ucci, "Microwave Oven Signal Interference Mitigation For 2031 Wi-Fi Communication Systems", 2008 5th IEEE Consumer 2032 Communications and Networking Conference 5th IEEE, pp. 2033 67-68 , 2008. 2035 [Mishra] Mishra, S. and J. Thibeault, "An update on Streaming Video 2036 Alliance", April 2020, 2037 . 2042 [MMSP20] Durak, K. and et al, "Evaluating the performance of 2043 Apple's low-latency HLS", IEEE MMSP , September 2020, 2044 . 2046 [MMSys11] Akhshabi, S., Begen, A. C., and C. Dovrolis, "An 2047 experimental evaluation of rate-adaptation algorithms in 2048 adaptive streaming over HTTP", ACM MMSys , February 2011, 2049 . 2051 [MPEG-CMAF] 2052 "ISO/IEC 23000-19:2020 Multimedia application format 2053 (MPEG-A) - Part 19: Common media application format (CMAF) 2054 for segmented media", March 2020, 2055 . 2057 [MPEG-DASH] 2058 "ISO/IEC 23009-1:2019 Dynamic adaptive streaming over HTTP 2059 (DASH) - Part 1: Media presentation description and 2060 segment formats", December 2019, 2061 . 2063 [MPEG-DASH-SAND] 2064 "ISO/IEC 23009-5:2017 Dynamic adaptive streaming over HTTP 2065 (DASH) - Part 5: Server and network assisted DASH (SAND)", 2066 February 2017, . 2068 [MPEG-TS] "H.222.0 : Information technology - Generic coding of 2069 moving pictures and associated audio information: 2070 Systems", 29 August 2018, 2071 . 2073 [MPEGI] Boyce, J. M. and et al, "MPEG Immersive Video Coding 2074 Standard", Proceedings of the IEEE , n.d., 2075 . 2077 [OReilly-HPBN] 2078 "High Performance Browser Networking (Chapter 2: Building 2079 Blocks of TCP)", May 2021, 2080 . 2082 [PCC] Schwarz, S. and et al, "Emerging MPEG Standards for Point 2083 Cloud Compression", IEEE Journal on Emerging and Selected 2084 Topics in Circuits and Systems , March 2019, 2085 . 2087 [Port443] "Service Name and Transport Protocol Port Number 2088 Registry", April 2021, . 2092 [RFC0793] Postel, J., "Transmission Control Protocol", STD 7, 2093 RFC 793, DOI 10.17487/RFC0793, September 1981, 2094 . 2096 [RFC2001] Stevens, W., "TCP Slow Start, Congestion Avoidance, Fast 2097 Retransmit, and Fast Recovery Algorithms", RFC 2001, 2098 DOI 10.17487/RFC2001, January 1997, 2099 . 2101 [RFC3135] Border, J., Kojo, M., Griner, J., Montenegro, G., and Z. 2102 Shelby, "Performance Enhancing Proxies Intended to 2103 Mitigate Link-Related Degradations", RFC 3135, 2104 DOI 10.17487/RFC3135, June 2001, 2105 . 2107 [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. 2108 Jacobson, "RTP: A Transport Protocol for Real-Time 2109 Applications", STD 64, RFC 3550, DOI 10.17487/RFC3550, 2110 July 2003, . 2112 [RFC3758] Stewart, R., Ramalho, M., Xie, Q., Tuexen, M., and P. 2113 Conrad, "Stream Control Transmission Protocol (SCTP) 2114 Partial Reliability Extension", RFC 3758, 2115 DOI 10.17487/RFC3758, May 2004, 2116 . 2118 [RFC4733] Schulzrinne, H. and T. Taylor, "RTP Payload for DTMF 2119 Digits, Telephony Tones, and Telephony Signals", RFC 4733, 2120 DOI 10.17487/RFC4733, December 2006, 2121 . 2123 [RFC5594] Peterson, J. and A. Cooper, "Report from the IETF Workshop 2124 on Peer-to-Peer (P2P) Infrastructure, May 28, 2008", 2125 RFC 5594, DOI 10.17487/RFC5594, July 2009, 2126 . 
2128 [RFC5762] Perkins, C., "RTP and the Datagram Congestion Control 2129 Protocol (DCCP)", RFC 5762, DOI 10.17487/RFC5762, April 2130 2010, . 2132 [RFC6190] Wenger, S., Wang, Y.-K., Schierl, T., and A. 2133 Eleftheriadis, "RTP Payload Format for Scalable Video 2134 Coding", RFC 6190, DOI 10.17487/RFC6190, May 2011, 2135 . 2137 [RFC6582] Henderson, T., Floyd, S., Gurtov, A., and Y. Nishida, "The 2138 NewReno Modification to TCP's Fast Recovery Algorithm", 2139 RFC 6582, DOI 10.17487/RFC6582, April 2012, 2140 . 2142 [RFC6817] Shalunov, S., Hazel, G., Iyengar, J., and M. Kuehlewind, 2143 "Low Extra Delay Background Transport (LEDBAT)", RFC 6817, 2144 DOI 10.17487/RFC6817, December 2012, 2145 . 2147 [RFC6843] Clark, A., Gross, K., and Q. Wu, "RTP Control Protocol 2148 (RTCP) Extended Report (XR) Block for Delay Metric 2149 Reporting", RFC 6843, DOI 10.17487/RFC6843, January 2013, 2150 . 2152 [RFC7258] Farrell, S. and H. Tschofenig, "Pervasive Monitoring Is an 2153 Attack", BCP 188, RFC 7258, DOI 10.17487/RFC7258, May 2154 2014, . 2156 [RFC7510] Xu, X., Sheth, N., Yong, L., Callon, R., and D. Black, 2157 "Encapsulating MPLS in UDP", RFC 7510, 2158 DOI 10.17487/RFC7510, April 2015, 2159 . 2161 [RFC7656] Lennox, J., Gross, K., Nandakumar, S., Salgueiro, G., and 2162 B. Burman, Ed., "A Taxonomy of Semantics and Mechanisms 2163 for Real-Time Transport Protocol (RTP) Sources", RFC 7656, 2164 DOI 10.17487/RFC7656, November 2015, 2165 . 2167 [RFC7661] Fairhurst, G., Sathiaseelan, A., and R. Secchi, "Updating 2168 TCP to Support Rate-Limited Traffic", RFC 7661, 2169 DOI 10.17487/RFC7661, October 2015, 2170 . 2172 [RFC8083] Perkins, C. and V. Singh, "Multimedia Congestion Control: 2173 Circuit Breakers for Unicast RTP Sessions", RFC 8083, 2174 DOI 10.17487/RFC8083, March 2017, 2175 . 2177 [RFC8084] Fairhurst, G., "Network Transport Circuit Breakers", 2178 BCP 208, RFC 8084, DOI 10.17487/RFC8084, March 2017, 2179 . 2181 [RFC8085] Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage 2182 Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085, 2183 March 2017, . 2185 [RFC8216] Pantos, R., Ed. and W. May, "HTTP Live Streaming", 2186 RFC 8216, DOI 10.17487/RFC8216, August 2017, 2187 . 2189 [RFC8312] Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and 2190 R. Scheffenegger, "CUBIC for Fast Long-Distance Networks", 2191 RFC 8312, DOI 10.17487/RFC8312, February 2018, 2192 . 2194 [RFC8446] Rescorla, E., "The Transport Layer Security (TLS) Protocol 2195 Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018, 2196 . 2198 [RFC8622] Bless, R., "A Lower-Effort Per-Hop Behavior (LE PHB) for 2199 Differentiated Services", RFC 8622, DOI 10.17487/RFC8622, 2200 June 2019, . 2202 [RFC8723] Jennings, C., Jones, P., Barnes, R., and A.B. Roach, 2203 "Double Encryption Procedures for the Secure Real-Time 2204 Transport Protocol (SRTP)", RFC 8723, 2205 DOI 10.17487/RFC8723, April 2020, 2206 . 2208 [RFC8825] Alvestrand, H., "Overview: Real-Time Protocols for 2209 Browser-Based Applications", RFC 8825, 2210 DOI 10.17487/RFC8825, January 2021, 2211 . 2213 [RFC8999] Thomson, M., "Version-Independent Properties of QUIC", 2214 RFC 8999, DOI 10.17487/RFC8999, May 2021, 2215 . 2217 [RFC9000] Iyengar, J., Ed. and M. Thomson, Ed., "QUIC: A UDP-Based 2218 Multiplexed and Secure Transport", RFC 9000, 2219 DOI 10.17487/RFC9000, May 2021, 2220 . 2222 [RFC9001] Thomson, M., Ed. and S. Turner, Ed., "Using TLS to Secure 2223 QUIC", RFC 9001, DOI 10.17487/RFC9001, May 2021, 2224 . 2226 [RFC9002] Iyengar, J., Ed. and I. 
Swett, Ed., "QUIC Loss Detection 2227 and Congestion Control", RFC 9002, DOI 10.17487/RFC9002, 2228 May 2021, . 2230 [SFRAME] "Secure Media Frames Working Group (Home Page)", n.d., 2231 . 2233 [SRT] Sharabayko, M., "Secure Reliable Transport (SRT) Protocol 2234 Overview", 15 April 2020, 2235 . 2240 [Survey360o] 2241 Yaqoob, A., Bi, T., and G. Muntean, "A Survey on Adaptive 2242 360° Video Streaming: Solutions, Challenges and 2243 Opportunities", IEEE Communications Surveys & Tutorials , 2244 July 2020, . 2246 [tsvarea-105] 2247 "TSVAREA Minutes - IETF 105", July 2019, 2248 . 2251 Authors' Addresses 2253 Jake Holland 2254 Akamai Technologies, Inc. 2255 150 Broadway 2256 Cambridge, MA 02144, 2257 United States of America 2258 Email: jakeholland.net@gmail.com 2260 Ali Begen 2261 Networked Media 2262 Turkey 2263 Email: ali.begen@networked.media 2265 Spencer Dawkins 2266 Tencent America LLC 2267 United States of America 2268 Email: spencerdawkins.ietf@gmail.com