Workgroup: Independent Submission
Internet-Draft: draft-lcurley-warp-01
Published: July 2022
Intended Status: Informational
Expires: 10 January 2023
Author: L. Curley, Twitch

Warp - Segmented Live Media Transport

Abstract

This document defines the core behavior for Warp, a segmented live media transport protocol. Warp maps live media to QUIC streams based on the underlying media encoding. Media is prioritized to reduce latency when encountering congestion.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 10 January 2023.

Table of Contents

1.  Overview
  1.1.  Terms and Definitions
2.  Connection
  2.1.  Establishment
  2.2.  Streams
    2.2.1.  Contents
    2.2.2.  Prioritization
    2.2.3.  Cancellation
  2.3.  Middleware
  2.4.  Termination
3.  Segments
  3.1.  Initialization
  3.2.  Media
    3.2.1.  Segmentation
    3.2.2.  Fragmentation
4.  Messages
  4.1.  init
  4.2.  segment
  4.3.  priority
  4.4.  Extensions
5.  Configuration
  5.1.  Playback Buffer
  5.2.  Congestion Control
    5.2.1.  Transmission Delays
      5.2.1.1.  Application Limited
    5.2.2.  Constant Delivery
  5.3.  Prioritization
    5.3.1.  Live Content
    5.3.2.  Recorded Content
  5.4.  Bitrate Selection
  5.5.  Rendition Selection
    5.5.1.  Push versus Pull
  5.6.  Fragmentation
  5.7.  Encoding
6.  Security Considerations
  6.1.  Resource Exhaustion
7.  IANA Considerations
8.  References
  8.1.  Normative References
  8.2.  Informative References
Contributors
Author's Address

1. Overview

Warp is a live media transport protocol that utilizes the QUIC network protocol [QUIC].

Section 2 covers how QUIC is used to transfer media. QUIC streams are created for each segment and prioritized such that the most important media is delivered during congestion.

Section 3 covers how media is packaged into fragmented MP4 containers. Initialization segments contain track metadata while media segments contain audio and/or video samples.

Section 4 covers how control messages are encoded. These are sent alongside segments to carry necessary metadata and to control playback.

Section 5 covers how to build an optimal live media stack. The application can configure Warp based on the desired user experience.

1.1. Terms and Definitions

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

Commonly used terms in this document are described below.

Congestion:
Packet loss and queuing caused by degraded or overloaded networks.
Frame:
A video image or a group of audio samples to be rendered at a specific point in time.
I-frame:
A frame that does not depend on the contents of other frames.
Group of pictures (GOP):
An I-frame followed by a sequential series of dependent frames.
Group of samples:
A sequential series of audio samples starting at a given timestamp.
Segment:
A sequence of video frames and/or audio samples serialized into a container.
Media player:
A component responsible for presenting frames to a viewer based on the presentation timestamp.
Media encoder:
A component responsible for creating a compressed media stream.
Media producer:
A QUIC endpoint sending media over the network. This could be the media encoder or middleware.
Media consumer:
A QUIC endpoint receiving media over the network. This could be the media player or middleware.
Middleware:
A media consumer that forwards streams to one or more downstream media consumers.

2. Connection

Warp uses the QUIC stream API to transfer media.

2.1. Establishment

A connection is established using WebTransport over HTTP/3 [WebTransport]. This involves establishing an HTTP/3 connection, issuing a CONNECT request to establish the session, and exposing the underlying QUIC stream API while the session is active.

The application is responsible for authentication based on the CONNECT request.

The application is responsible for determining if an endpoint is a media producer, consumer, or both.

2.2. Streams

Endpoints communicate over unidirectional QUIC streams. The application MAY use bidirectional QUIC streams for other purposes.

Both endpoints can create a new stream at any time. Each stream consists of byte data with an eventual final size. A stream is reliably delivered in order unless canceled early with an error code.

The delivery of each stream is independent. The sender MAY prioritize their delivery (Section 2.2.2), intentionally starving some streams in favor of more important ones.

2.2.1. Contents

Each stream consists of MP4 top-level boxes [ISOBMFF] concatenated together.

  • Segments (Section 3) contain media samples and additional metadata. These are ftyp, moov, styp, moof, and mdat boxes.
  • Messages (Section 4) control playback or carry metadata about segments. These are warp boxes.

Each ftyp box MUST be preceded by a warp box indicating that it is an initialization segment (Section 4.1). Each styp box MUST be preceded by a warp box indicating that it is a media segment (Section 4.2).

A stream MUST start with a message and MAY contain multiple messages. A stream MUST NOT contain multiple segments.
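
The following non-normative Python sketch parses these top-level boxes; it assumes 32-bit box sizes and omits the 64-bit largesize and size-zero cases defined by [ISOBMFF]:

  import struct

  def iter_boxes(data: bytes):
      # Yield (fourcc, payload) for each top-level MP4 box in a stream.
      offset = 0
      while offset + 8 <= len(data):
          (size,) = struct.unpack_from(">I", data, offset)
          if size < 8:
              raise ValueError("largesize/size-zero boxes not handled here")
          fourcc = data[offset + 4:offset + 8].decode("ascii")
          yield fourcc, data[offset + 8:offset + size]
          offset += size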

2.2.2. Prioritization

Warp utilizes precedence to deliver the most important content during congestion.

The media producer assigns a numeric precedence to each stream. This is a strict prioritization scheme, such that any available bandwidth is allocated to streams in descending order. QUIC supports stream prioritization but does not standardize any mechanisms; see Section 2.3 in [QUIC].

The media producer MUST support sending prioritized streams using precedence. The media producer MAY choose to delay retransmitting lower priority streams when possible within QUIC flow control limits.
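
As a non-normative illustration, the following Python sketch shows strict precedence: all available bytes go to the highest-precedence stream first, and whatever remains is starved (the Stream fields are illustrative):

  from dataclasses import dataclass

  @dataclass
  class Stream:
      stream_id: int
      precedence: int
      pending: int  # bytes queued for delivery

  def allocate(streams: list[Stream], budget: int) -> dict[int, int]:
      # Strict prioritization: hand all available bytes to the
      # highest-precedence stream first; the rest are starved.
      sent: dict[int, int] = {}
      for s in sorted(streams, key=lambda s: s.precedence, reverse=True):
          take = min(budget, s.pending)
          sent[s.stream_id] = take
          budget -= take
      return sent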

See Section 5.3 for suggestions on how to prioritize streams based on the contents.

2.2.3. Cancellation

During congestion, prioritization intentionally causes stream starvation for the lowest priority streams. Some form of starvation will last until the network fully recovers, which may be indefinite.

The media consumer SHOULD cancel a stream (via a QUIC STOP_SENDING frame) with application error code 0 when the segment is no longer desired. This can happen when the consumer decides to skip the remainder of a segment after some duration has elapsed. The media producer MUST NOT treat this as a fatal error.

The media producer SHOULD cancel the lowest priority stream (via a QUIC RESET_STREAM frame) with application error code 0 when nearing resource limits. This can happen after sustained starvation and indicates that the consumer must skip over the remainder of a segment. The media consumer MUST NOT treat this as a fatal error.

Both of these actions will effectively drop the tail of the segment. The segment fragment size SHOULD be small to reduce data loss, ideally one fragment per frame.
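
The following non-normative Python sketch shows consumer-side cancellation; stream.stop_sending is a hypothetical wrapper around the QUIC STOP_SENDING frame, as actual APIs vary by implementation:

  def skip_if_stale(stream, segment_timestamp_ms: int,
                    playhead_ms: int, buffer_ms: int) -> None:
      # Cancel a segment once it falls behind the playback buffer.
      if segment_timestamp_ms + buffer_ms < playhead_ms:
          stream.stop_sending(error_code=0)  # hypothetical; not a fatal error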

2.3. Middleware

Media may go through multiple hops and processing steps on the path from the broadcaster to the player. The full effectiveness of Warp as an end-to-end protocol depends on middleware support.

  • Middleware SHOULD maintain stream independence to avoid introducing head-of-line blocking.
  • Middleware SHOULD maintain stream prioritization when traversing networks susceptible to congestion.
  • Middleware MUST forward the priority message (Section 4.3) to downstream servers.

2.4. Termination

The QUIC connection can be terminated at any point with an error code.

The media producer MAY terminate the QUIC connection with an error code of 0 to indicate the end of the media stream. Either endpoint MAY use any other error code to indicate a fatal error.

3. Segments

The live stream is split into segments before being transferred over the network. Segments are fragmented MP4 files as defined by [ISOBMFF].

There are two types of segments: initialization and media.

3.1. Initialization

Initialization segments contain track metadata but no sample data.

Initialization segments MUST consist of a File Type Box (ftyp) followed by a Movie Box (moov). This Movie Box consists of Movie Header Boxes (mvhd), Track Header Boxes (tkhd), and Track Boxes (trak), followed by a final Movie Extends Box (mvex). These boxes MUST NOT contain any samples and MUST have a duration of zero.

Note that a Common Media Application Format Header [CMAF] meets all these requirements.

3.2. Media

Media segments contain media samples for a single track.

Media segments MUST consist of a Segment Type Box (styp) followed by at least one media fragment. Each media fragment consists of a Movie Fragment Box (moof) followed by a Media Data Box (mdat). The Movie Fragment Box MUST contain a Movie Fragment Header Box (mfhd) and a Track Fragment Box (traf) with a Track ID (track_ID) matching a Track Box (trak) in the initialization segment.

Note that a Common Media Application Format Segment [CMAF] meets all these requirements.
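
The following non-normative Python sketch checks this structure against the sequence of top-level box types:

  def is_valid_media_segment(box_types: list[str]) -> bool:
      # An styp followed by one or more (moof, mdat) media fragments.
      if not box_types or box_types[0] != "styp":
          return False
      rest = box_types[1:]
      if not rest or len(rest) % 2 != 0:
          return False
      return all(rest[i] == "moof" and rest[i + 1] == "mdat"
                 for i in range(0, len(rest), 2))

  # e.g. is_valid_media_segment(["styp", "moof", "mdat", "moof", "mdat"])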

3.2.1. Segmentation

Media is broken into segments at configurable boundaries. Each media segment MUST start with an I-frame so it can be decoded independently of other media segments. Each media segment SHOULD contain a single group of pictures (GOP).

3.2.2. Fragmentation

Media segments are further broken into media fragments at configurable boundaries. See Section 5.6 for advice on when to fragment.

4. Messages

Warp endpoints communicate via messages contained in a custom top-level [ISOBMFF] Box.

This Warp Box (warp) contains a single JSON object. Each key identifies the message type and the corresponding value carries the message contents. Unknown messages MUST be ignored.

Multiple messages with different types MAY be encoded in the same JSON object. Messages SHOULD be sent in separate boxes on the same stream when ordering is important.
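
The following non-normative Python sketch writes one such box; it assumes the JSON object is serialized as UTF-8 text directly in the box payload:

  import json
  import struct

  def encode_warp_box(message: dict) -> bytes:
      # A warp box is a standard MP4 box: 32-bit size, fourcc, payload.
      payload = json.dumps(message).encode("utf-8")
      return struct.pack(">I4s", 8 + len(payload), b"warp") + payload

  # Two messages with different types encoded in the same JSON object:
  box = encode_warp_box({"init": {"id": 1}, "priority": {"precedence": 10}})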

4.1. init

The init message indicates that the remainder of the stream contains an initialization segment.

{
  init: {
    id: int
  }
}
id:
Incremented by 1 for each unique initialization segment.

4.2. segment

The segment message contains metadata about the next media segment in the stream.

{
  segment: {
    init: int,
    timestamp: int,
    timescale: int, (optional)
  }
}
init:
The id of the corresponding initialization segment. A decoder MUST block until the corresponding initialization segment has been fully processed.
timestamp:
The presentation timestamp in timescale units for the first frame/sample in the next segment. This timestamp takes precedence over any timestamp in the media container to support stream stitching.
timescale (optional):
The number of timescale units in one second. This defaults to 1000, signifying milliseconds.
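
For example, a segment message for a media segment starting 4.5 seconds into the stream, using the default millisecond timescale (values are illustrative):

{
  segment: {
    init: 1,
    timestamp: 4500
  }
}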

4.3. priority

The priority message informs middleware about the intended priority of the current stream. Middleware MUST forward this message but obeying it is OPTIONAL.

{
  priority: {
    precedence: int,
  }
}
precedence:
An integer value; any available bandwidth SHOULD be allocated to streams in descending precedence order.

4.4. Extensions

Custom messages MUST start with "x-": Unicode LATIN SMALL LETTER X (U+0078) followed by HYPHEN-MINUS (U+002D).

Custom messages could control playback. For example, x-pause could halt the transfer of segments until followed by an x-play.

Custom messages SHOULD use a unique prefix to reduce collisions. For example: x-twitch-load would contain identification required to start playback of a Twitch stream.

5. Configuration

Achieving both a high quality and low latency broadcast is difficult. Warp is a generic media transport and it is ultimately up to the application to choose the desired user experience.

5.1. Playback Buffer

It is RECOMMENDED that a media player use a playback buffer to ensure smooth playback at the cost of higher latency. The buffer SHOULD be at least large enough to synchronize audio/video and to account for network/encoding jitter.

The size of the playback buffer MAY be increased by temporarily pausing playback or reducing playback speed. The playback buffer MAY be fragmented such that unreceived media can be skipped.

A larger playback buffer gives the application more time to recover from starvation without user impact. A media player MAY increase the size of the playback buffer when future starvation events are anticipated.

Middleware SHOULD NOT use a buffer, as it will increase latency for each hop.

5.2. Congestion Control

Warp uses the underlying QUIC congestion control [QUIC-RECOVERY]. The default congestion control algorithm [NewReno] will work in many situations but can be improved.

This section outlines how a live media congestion control algorithm should perform, but does not recommend a specific algorithm.

5.2.1. Transmission Delays

Live media is generated in real-time and played back at a constant rate. Transmission delays cause frame delays, necessitating a larger playback buffer. Additionally, the effectiveness of prioritizing streams is reduced by high transmission delays.

A live media congestion control algorithm SHOULD aim to minimize delay, possibly at the expense of throughput.

The default QUIC congestion controller is loss-based and suffers from bufferbloat. Large queues on intermediate routers cause high transmission delays prior to any packet loss.

5.2.1.1. Application Limited

Live media is often application-limited, as the encoder limits the amount of data available to be sent. This occurs more frequently with a smaller fragment duration, as individual frames might not be large enough to saturate the congestion window.

A live media congestion control algorithm SHOULD have some way of determining the network capabilities even when application-limited. Alternatively, the media producer MAY pad the network with QUIC PING frames to avoid being application-limited, at the expense of higher bandwidth usage.

The default QUIC congestion controller does not increase the congestion window when application-limited. See section 7.8 of [QUIC-RECOVERY].

5.2.2. Constant Delivery

Live media generates frames at regular intervals. Delaying the delivery of a frame relative to others necessitates a larger playback buffer.

A live media congestion control algorithm SHOULD NOT introduce artificial starvation.

A counter-example is BBR [BBR], as the PROBE_RTT state effectively prohibits sending packets for a short period of time for the sake of remeasuring min_rtt. The impact is reduced in future versions of BBR.

5.3. Prioritization

Media segments might be delivered out of order during starvation.

The media player determines how long to wait for a given segment (buffer size) before skipping ahead. The media consumer MAY cancel a skipped segment to save bandwidth, or leave it downloading in the background (ex. to support rewind).

Prioritization allows a single media producer to support multiple media consumers with different latency targets. For example, one consumer could have a 1s buffer to minimize latency, while another consumer could have a 5s buffer to improve quality, while yet another consumer could have a 30s buffer to receive all media (ex. a VOD recorder).

5.3.1. Live Content

Live content is encoded and delivered in real-time. Media delivery is limited by encoder throughput, except during congestion, when network throughput becomes the bottleneck. To best deliver live content:

  • Audio streams SHOULD be prioritized over video streams. This allows the media consumer to skip video while audio continues uninterrupted during congestion.
  • Newer video streams SHOULD be prioritized over older video streams. This allows the media consumer to skip older video content during congestion.

For example, this formula will prioritize audio segments, but only up to 3s in the future:

  if is_audio:
    precedence = timestamp + 3s
  else:
    precedence = timestamp

5.3.2. Recorded Content

Recorded content has already been encoded. Media delivery is blocked exclusively on network throughput.

Warp is primarily designed for live content, but can switch to head-of-line blocking by changing stream prioritization. This is also useful for content that should not be skipped over, such as advertisements. To enable head-of-line blocking:

  • Audio and video streams SHOULD be equally prioritized.
  • Older streams SHOULD be prioritized over newer streams.

For example, this formula will prioritize older segments:

  precedence = -timestamp

5.4. Bitrate Selection

Live media is encoded in real-time and the bitrate can be adjusted on the fly. This is common in 1:1 media delivery.

A media producer MAY reduce the media bitrate in response to starvation. This can be detected via the estimated bitrate as reported by the congestion control algorithm. A less accurate indication of starvation is when the QUIC sender is actively prioritizing streams, as it means the congestion window is full.

5.5. Rendition Selection

Live media can be encoded into multiple renditions, such that media consumers can receive different renditions based on network conditions. This is common in 1:n media delivery.

A media producer MAY switch between renditions at segment boundaries. Renditions SHOULD be fragmented at the same timestamps to avoid introducing gaps or redundant media.

5.5.1. Push versus Pull

Protocols like HLS and DASH rely on the media player to determine the rendition. However, it becomes increasingly difficult to determine the network capabilities on the receiver side as media fragments become smaller and smaller. It also introduces split-brain, as the sender's congestion control may disagree with the receiver's requested rendition.

It is RECOMMENDED that the media producer choose the rendition based on the estimated bitrate as reported by the congestion control algorithm. Alternatively, the media producer MAY expose the estimated bitrate if the player must be in charge.
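
The following non-normative Python sketch selects a rendition on the sender side; the bitrate list and headroom factor are illustrative assumptions:

  def choose_rendition(bitrates: list[int], estimated_bps: int,
                       headroom: float = 0.8) -> int:
      # Pick the highest bitrate that fits under the congestion
      # controller's estimate with headroom; fall back to the lowest.
      fitting = [b for b in bitrates if b <= estimated_bps * headroom]
      return max(fitting) if fitting else min(bitrates)

  # e.g. choose_rendition([800_000, 2_500_000, 6_000_000], 4_000_000)
  # selects the 2.5 Mbps rendition.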

5.6. Fragmentation

Segments are encoded as fragmented MP4. Each fragment is a moof and mdat pair containing data for a number of samples. Using more fragments introduces more container overhead (higher bitrate), so it is up to the application to determine the fragment frequency.

For the highest latency: one fragment per segment. This means the entire segment must be received before any of the samples can be processed. This is optimal for content that is not intended to be decoded in real-time.

For the lowest latency: one fragment per frame. This means that each frame can be decoded when fully received. This is optimal for real-time decoding, however it introduces the largest overhead.

Fragments can be created with variable durations. However, the fragment duration SHOULD be relatively consistent to avoid introducing additional playback starvation. Likewise audio and video SHOULD be encoded using similar fragment durations.
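
The following non-normative Python sketch groups frames into fragments of a configurable target duration; a target of zero yields one fragment per frame:

  def fragment(frame_times_ms: list[int], target_ms: int) -> list[list[int]]:
      # Start a new fragment whenever the current one reaches the
      # target duration; target_ms = 0 yields one fragment per frame.
      fragments: list[list[int]] = []
      for t in frame_times_ms:
          if not fragments or t - fragments[-1][0] >= target_ms:
              fragments.append([t])
          else:
              fragments[-1].append(t)
      return fragments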

5.7. Encoding

Warp is primarily a network protocol and does not enforce any encoding requirements. However, encoding has a significant impact on the user experience and should be taken into account.

B-frames MAY be used to improve compression efficiency, but they introduce jitter. This necessitates a larger playback buffer, increasing latency.

Audio and video MAY be encoded and transmitted independently. However, unlike video, audio can be encoded without delay. Media players SHOULD be prepared to receive audio before video even without congestion.

6. Security Considerations

6.1. Resource Exhaustion

Live media requires significant bandwidth and resources. Failure to set limits will quickly cause resource exhaustion.

Warp uses QUIC flow control to impose resource limits at the network layer. Endpoints SHOULD set flow control limits based on the anticipated media bitrate.
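
As a non-normative example, one plausible sizing rule grants enough flow-control credit to cover the anticipated bitrate over the playback buffer duration:

  def flow_control_limit(bitrate_bps: int, buffer_s: float) -> int:
      # Bytes of flow-control credit needed to sustain the bitrate
      # across the playback buffer duration.
      return int(bitrate_bps * buffer_s / 8)

  # e.g. a 6 Mbps stream with a 2 second buffer needs 1.5 MB of credit:
  assert flow_control_limit(6_000_000, 2.0) == 1_500_000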

The media producer prioritizes and transmits streams out of order. Streams might be starved indefinitely during congestion and SHOULD be canceled (Section 2.2.3) after hitting some timeout or resource limit.

The media consumer might receive streams out of order. If stream data is buffered, for example to decode segments in order, then the media consumer SHOULD cancel a stream (Section 2.2.3) after hitting some timeout or resource limit.

7. IANA Considerations

This document has no IANA actions.

8. References

8.1. Normative References

[ISOBMFF]
ISO/IEC 14496-12, "Information technology — Coding of audio-visual objects — Part 12: ISO Base Media File Format".
[QUIC]
Iyengar, J., Ed. and M. Thomson, Ed., "QUIC: A UDP-Based Multiplexed and Secure Transport", RFC 9000, DOI 10.17487/RFC9000, May 2021, <https://www.rfc-editor.org/info/rfc9000>.
[QUIC-RECOVERY]
Iyengar, J., Ed. and I. Swett, Ed., "QUIC Loss Detection and Congestion Control", RFC 9002, DOI 10.17487/RFC9002, May 2021, <https://www.rfc-editor.org/info/rfc9002>.
[RFC2119]
Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/info/rfc2119>.
[RFC8174]
Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, <https://www.rfc-editor.org/info/rfc8174>.
[WebTransport]
Frindell, A., Kinnear, E., and V. Vasiliev, "WebTransport over HTTP/3", Work in Progress, Internet-Draft, draft-ietf-webtrans-http3-03, <https://www.ietf.org/archive/id/draft-ietf-webtrans-http3-03.txt>.

8.2. Informative References

[BBR]
Cardwell, N., Cheng, Y., Yeganeh, S. H., Swett, I., and V. Jacobson, "BBR Congestion Control", Work in Progress, Internet-Draft, draft-cardwell-iccrg-bbr-congestion-control-02, <https://www.ietf.org/archive/id/draft-cardwell-iccrg-bbr-congestion-control-02.txt>.
[CMAF]
ISO/IEC 23000-19, "Information technology -- Multimedia application format (MPEG-A) -- Part 19: Common media application format (CMAF) for segmented media".
[NewReno]
Henderson, T., Floyd, S., Gurtov, A., and Y. Nishida, "The NewReno Modification to TCP's Fast Recovery Algorithm", RFC 6582, DOI 10.17487/RFC6582, April 2012, <https://www.rfc-editor.org/info/rfc6582>.

Contributors

Author's Address

Luke Curley
Twitch