This is an early review of draft-ietf-alto-new-transport-07. The comments below come from a perspective based on recent work on media delivery over QUIC, in which scalable video coding can be combined with HTTP/3 and/or WebTransport features supporting partial reliability, so as to deliver media that is both resilient and low-latency. However, the considerations described below may not apply to ALTO, so please feel free to ignore them if they make no sense.

Overall, the document does not lay out the transport requirements explicitly, so it is hard for me to tell whether the proposed design is optimal. In particular, it would be useful to understand the desired reliability vs. latency tradeoff, as well as the requirements for backward compatibility (e.g. the need to operate over HTTP/1.x as well as HTTP/2 and HTTP/3).

There seems to be a requirement for the protocol to support HTTP/1.x as well as HTTP/2 and HTTP/3, but this is not stated explicitly, nor is it justified. In other WGs desiring to use the new capabilities of HTTP/3, a decision has sometimes been made to limit the level of backward compatibility (e.g. support for HTTP/3 and HTTP/2 only) in order to take full advantage of the features of HTTP/3. This decision is easier to make for a new protocol than for one which already has significant HTTP/1.x deployment. So if the transport approach used here is constrained by existing HTTP/1.x deployments, it would be useful to say so up front.

Also, the requirements for reliability and latency are not explicitly laid out. The diagrams show incremental update N+1 always depending on update N. Is this an inherent limitation imposed to meet a requirement, or is it a choice that could potentially be modified? For example, do bad things happen if a client obtains a coherent set of information at times N and N+2 but not at time N+1? Or would it be better to delay receipt of the N+2 info so as to allow for retransmission of the N+1 info?

There is a tradeoff between latency and reliability that can be made if layered coding is permitted. For example, if it is possible to modify the dependencies, a client would not necessarily have to obtain every update (e.g. if the bandwidth were insufficient to deliver them all within the required time frame). Are there circumstances where such a tradeoff would be desirable?

As an example, there could be a base layer of updates where update N+2 depends on update N, and an extension layer (with higher update frequency) where update N+1 depends on update N. The client could request both layers if it could handle the higher frequency, or just the base layer if it could not. A diagram describing this kind of two-layer dependency structure is here: https://www.w3.org/TR/webrtc-svc/#L1T2*

Although such an approach has lower coding efficiency, it can potentially respond better to situations in which the client's available bandwidth varies substantially, by taking fuller advantage of HTTP/3: the client can restore sync even if a "discardable" update is not delivered, as long as the stream of "non-discardable" updates is delivered.
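
For concreteness, here is a minimal sketch (in Python, purely illustrative, not an ALTO mechanism and not part of the draft) of the kind of two-layer dependency schedule described above, analogous to the WebRTC-SVC L1T2 mode. The Update type, the even/odd layering rule, and the loss scenario are all assumptions made up for this example.

    # Illustrative sketch only: models a hypothetical two-layer update schedule
    # in which the base layer remains decodable even if extension updates are lost.
    from dataclasses import dataclass
    from typing import List, Optional


    @dataclass
    class Update:
        seq: int                    # update sequence number
        layer: str                  # "base" (non-discardable) or "extension" (discardable)
        depends_on: Optional[int]   # seq of the update this one depends on, if any


    def build_schedule(n_updates: int) -> List[Update]:
        """Even-numbered updates form the base layer and depend on the previous
        even-numbered update; odd-numbered updates form the extension layer and
        depend on the preceding base-layer update."""
        schedule = []
        for seq in range(n_updates):
            if seq == 0:
                schedule.append(Update(seq, "base", None))
            elif seq % 2 == 0:
                schedule.append(Update(seq, "base", seq - 2))
            else:
                schedule.append(Update(seq, "extension", seq - 1))
        return schedule


    def can_apply(update: Update, applied: set) -> bool:
        """A client can apply an update once its dependency has been applied."""
        return update.depends_on is None or update.depends_on in applied


    if __name__ == "__main__":
        applied = set()
        for u in build_schedule(6):
            if u.seq == 3:
                continue  # extension update lost, e.g. bandwidth was insufficient
            if can_apply(u, applied):
                applied.add(u.seq)
                print(f"applied update {u.seq} ({u.layer})")
        # Base updates 0, 2 and 4 still apply, so the client stays in sync even
        # though the discardable update 3 was never delivered.

Running the sketch shows updates 0, 1, 2, 4 and 5 being applied despite the loss of update 3, which is the property the layered approach is meant to provide; whether that property is worth the lower coding efficiency is exactly the tradeoff the draft could discuss.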