2 Transport Area Working Group B. Briscoe, Ed. 3 Internet-Draft Independent 4 Intended status: Informational K. De Schepper 5 Expires: 28 April 2022 Nokia Bell Labs 6 M. Bagnulo Braun 7 Universidad Carlos III de Madrid 8 G. White 9 CableLabs 10 25 October 2021 12 Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: 13 Architecture 14 draft-ietf-tsvwg-l4s-arch-12 16 Abstract 18 This document describes the L4S architecture, which enables Internet 19 applications to achieve Low queuing Latency, Low Loss, and Scalable 20 throughput (L4S). The insight on which L4S is based is that the root 21 cause of queuing delay is in the congestion controllers of senders, 22 not in the queue itself.
With the L4S architecture _all_ Internet 23 applications could (but do not have to) transition away from 24 congestion control algorithms that cause substantial queuing delay, 25 to a new class of congestion controls that induce very little 26 queuing, aided by explicit congestion signaling from the network. 27 This new class of congestion controls can provide low latency for 28 capacity-seeking flows, so applications can achieve both high 29 bandwidth and low latency. 31 The architecture primarily concerns incremental deployment. It 32 defines mechanisms that allow the new class of L4S congestion 33 controls to coexist with 'Classic' congestion controls in a shared 34 network. These mechanisms aim to ensure that the latency and 35 throughput performance using an L4S-compliant congestion controller 36 is usually much better (and never worse) than performance would have 37 been using a 'Classic' congestion controller, and that competing 38 flows continuing to use 'Classic' controllers are typically not 39 impacted by the presence of L4S. These characteristics are important 40 to encourage adoption of L4S congestion control algorithms and L4S 41 compliant network elements. 43 The L4S architecture consists of three components: network support to 44 isolate L4S traffic from classic traffic; protocol features that 45 allow network elements to identify L4S traffic; and host support for 46 L4S congestion controls. 48 Status of This Memo 50 This Internet-Draft is submitted in full conformance with the 51 provisions of BCP 78 and BCP 79. 53 Internet-Drafts are working documents of the Internet Engineering 54 Task Force (IETF). Note that other groups may also distribute 55 working documents as Internet-Drafts. The list of current Internet- 56 Drafts is at https://datatracker.ietf.org/drafts/current/. 58 Internet-Drafts are draft documents valid for a maximum of six months 59 and may be updated, replaced, or obsoleted by other documents at any 60 time. It is inappropriate to use Internet-Drafts as reference 61 material or to cite them other than as "work in progress." 63 This Internet-Draft will expire on 28 April 2022. 65 Copyright Notice 67 Copyright (c) 2021 IETF Trust and the persons identified as the 68 document authors. All rights reserved. 70 This document is subject to BCP 78 and the IETF Trust's Legal 71 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 72 license-info) in effect on the date of publication of this document. 73 Please review these documents carefully, as they describe your rights 74 and restrictions with respect to this document. Code Components 75 extracted from this document must include Simplified BSD License text 76 as described in Section 4.e of the Trust Legal Provisions and are 77 provided without warranty as described in the Simplified BSD License. 79 Table of Contents 81 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 82 1.1. Document Roadmap . . . . . . . . . . . . . . . . . . . . 5 83 2. L4S Architecture Overview . . . . . . . . . . . . . . . . . . 5 84 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 7 85 4. L4S Architecture Components . . . . . . . . . . . . . . . . . 9 86 4.1. Protocol Mechanisms . . . . . . . . . . . . . . . . . . . 9 87 4.2. Network Components . . . . . . . . . . . . . . . . . . . 10 88 4.3. Host Mechanisms . . . . . . . . . . . . . . . . . . . . . 13 89 5. Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . 14 90 5.1. Why These Primary Components? . . . . . . . . . . . . . . 
14 91 5.2. What L4S adds to Existing Approaches . . . . . . . . . . 17 92 6. Applicability . . . . . . . . . . . . . . . . . . . . . . . . 20 93 6.1. Applications . . . . . . . . . . . . . . . . . . . . . . 20 94 6.2. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . 22 95 6.3. Applicability with Specific Link Technologies . . . . . . 23 96 6.4. Deployment Considerations . . . . . . . . . . . . . . . . 23 97 6.4.1. Deployment Topology . . . . . . . . . . . . . . . . . 24 98 6.4.2. Deployment Sequences . . . . . . . . . . . . . . . . 25 99 6.4.3. L4S Flow but Non-ECN Bottleneck . . . . . . . . . . . 27 100 6.4.4. L4S Flow but Classic ECN Bottleneck . . . . . . . . . 28 101 6.4.5. L4S AQM Deployment within Tunnels . . . . . . . . . . 28 102 7. IANA Considerations (to be removed by RFC Editor) . . . . . . 28 103 8. Security Considerations . . . . . . . . . . . . . . . . . . . 29 104 8.1. Traffic Rate (Non-)Policing . . . . . . . . . . . . . . . 29 105 8.2. 'Latency Friendliness' . . . . . . . . . . . . . . . . . 30 106 8.3. Interaction between Rate Policing and L4S . . . . . . . . 31 107 8.4. ECN Integrity . . . . . . . . . . . . . . . . . . . . . . 32 108 8.5. Privacy Considerations . . . . . . . . . . . . . . . . . 33 109 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 33 110 10. Informative References . . . . . . . . . . . . . . . . . . . 33 111 Appendix A. Standardization items . . . . . . . . . . . . . . . 42 112 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 45 114 1. Introduction 116 At any one time, it is increasingly common for _all_ of the traffic 117 in a bottleneck link (e.g. a household's Internet access) to come 118 from applications that prefer low delay: interactive Web, Web 119 services, voice, conversational video, interactive video, interactive 120 remote presence, instant messaging, online gaming, remote desktop, 121 cloud-based applications and video-assisted remote control of 122 machinery and industrial processes. In the last decade or so, much 123 has been done to reduce propagation delay by placing caches or 124 servers closer to users. However, queuing remains a major, albeit 125 intermittent, component of latency. For instance spikes of hundreds 126 of milliseconds are not uncommon, even with state-of-the-art active 127 queue management (AQM) [COBALT], [DOCSIS3AQM]. Queuing in access 128 network bottlenecks is typically configured to cause overall network 129 delay to roughly double during a long-running flow, relative to 130 expected base (unloaded) path delay [BufferSize]. Low loss is also 131 important because, for interactive applications, losses translate 132 into even longer retransmission delays. 134 It has been demonstrated that, once access network bit rates reach 135 levels now common in the developed world, increasing capacity offers 136 diminishing returns if latency (delay) is not addressed. Therefore, 137 the goal is an Internet service with very Low queueing Latency, very 138 Low Loss and Scalable throughput (L4S). Very low queuing latency 139 means less than 1 millisecond (ms) on average and less than about 140 2 ms at the 99th percentile. This document describes the L4S 141 architecture for achieving these goals. 143 Differentiated services (Diffserv) offers Expedited Forwarding 144 (EF [RFC3246]) for some packets at the expense of others, but this 145 makes no difference when all (or most) of the traffic at a bottleneck 146 at any one time requires low latency. 
In contrast, L4S still works 147 well when _all_ traffic is L4S - a service that gives without taking 148 needs none of the configuration or management baggage (traffic 149 policing, traffic contracts) associated with favouring some traffic 150 flows over others. 152 Queuing delay degrades performance intermittently [Hohlfeld14]. It 153 occurs when a large enough capacity-seeking (e.g. TCP) flow is 154 running alongside the user's traffic in the bottleneck link, which is 155 typically in the access network. Or when the low latency application 156 is itself a large capacity-seeking or adaptive rate (e.g. interactive 157 video) flow. At these times, the performance improvement from L4S 158 must be sufficient that network operators will be motivated to deploy 159 it. 161 Active Queue Management (AQM) is part of the solution to queuing 162 under load. AQM improves performance for all traffic, but there is a 163 limit to how much queuing delay can be reduced by solely changing the 164 network; without addressing the root of the problem. 166 The root of the problem is the presence of standard TCP congestion 167 control (Reno [RFC5681]) or compatible variants (e.g. TCP 168 Cubic [RFC8312]). We shall use the term 'Classic' for these Reno- 169 friendly congestion controls. Classic congestion controls induce 170 relatively large saw-tooth-shaped excursions up the queue and down 171 again, which have been growing as flow rate scales [RFC3649]. So if 172 a network operator naively attempts to reduce queuing delay by 173 configuring an AQM to operate at a shallower queue, a Classic 174 congestion control will significantly underutilize the link at the 175 bottom of every saw-tooth. 177 It has been demonstrated that if the sending host replaces a Classic 178 congestion control with a 'Scalable' alternative, when a suitable AQM 179 is deployed in the network the performance under load of all the 180 above interactive applications can be significantly improved. For 181 instance, queuing delay under heavy load with the example DCTCP/DualQ 182 solution cited below on a DSL or Ethernet link is roughly 1 to 2 183 milliseconds at the 99th percentile without losing link 184 utilization [DualPI2Linux], [DCttH19] (for other link types, see 185 Section 6.3). This compares with 5-20 ms on _average_ with a Classic 186 congestion control and current state-of-the-art AQMs such as FQ- 187 CoDel [RFC8290], PIE [RFC8033] or DOCSIS PIE [RFC8034] and about 188 20-30 ms at the 99th percentile [DualPI2Linux]. 190 L4S is designed for incremental deployment. It is possible to deploy 191 the L4S service at a bottleneck link alongside the existing best 192 efforts service [DualPI2Linux] so that unmodified applications can 193 start using it as soon as the sender's stack is updated. Access 194 networks are typically designed with one link as the bottleneck for 195 each site (which might be a home, small enterprise or mobile device), 196 so deployment at either or both ends of this link should give nearly 197 all the benefit in the respective direction. With some transport 198 protocols, namely TCP and SCTP, the sender has to check for suitably 199 updated receiver feedback, whereas with more recent transport 200 protocols such as QUIC and DCCP, all receivers have always been 201 suitable. 203 This document presents the L4S architecture, by describing and 204 justifying the component parts and how they interact to provide the 205 scalable, low latency, low loss Internet service. 
It also details 206 the approach to incremental deployment, as briefly summarized above. 208 1.1. Document Roadmap 210 This document describes the L4S architecture in three passes. First 211 this brief overview gives the very high level idea and states the 212 main components with minimal rationale. This is only intended to 213 give some context for the terminology definitions that follow in 214 Section 3, and to explain the structure of the rest of the document. 215 Then Section 4 goes into more detail on each component with some 216 rationale, but still mostly stating what the architecture is, rather 217 than why. Finally Section 5 justifies why each element of the 218 solution was chosen (Section 5.1) and why these choices were 219 different from other solutions (Section 5.2). 221 Having described the architecture, Section 6 clarifies its 222 applicability; that is, the applications and use-cases that motivated 223 the design, the challenges applying the architecture to various link 224 technologies, and various incremental deployment models: including 225 the two main deployment topologies, different sequences for 226 incremental deployment and various interactions with pre-existing 227 approaches. The document ends with the usual tail pieces, including 228 extensive discussion of traffic policing and other security 229 considerations Section 8. 231 2. L4S Architecture Overview 233 Below we outline the three main components to the L4S architecture; 234 1) the scalable congestion control on the sending host; 2) the AQM at 235 the network bottleneck; and 3) the protocol between them. 237 But first, the main point to grasp is that low latency is not 238 provided by the network - low latency results from the careful 239 behaviour of the scalable congestion controllers used by L4S senders. 240 The network does have a role - primarily to isolate the low latency 241 of the carefully behaving L4S traffic from the higher queuing delay 242 needed by traffic with pre-existing Classic behaviour. The network 243 also alters the way it signals queue growth to the transport - It 244 uses the Explicit Congestion Notification (ECN) protocol, but it 245 signals the very start of queue growth - immediately without the 246 smoothing delay typical of Classic AQMs. Because ECN support is 247 essential for L4S, senders use the ECN field as the protocol to 248 identify to the network which packets are L4S and which are Classic. 250 1) Host: Scalable congestion controls already exist. They solve the 251 scaling problem with Classic congestion controls, such as Reno or 252 Cubic. Because flow rate has scaled since TCP congestion control 253 was first designed in 1988, assuming the flow lasts long enough, 254 it now takes hundreds of round trips (and growing) to recover 255 after a congestion signal (whether a loss or an ECN mark) as shown 256 in the examples in Section 5.1 and [RFC3649]. Therefore control 257 of queuing and utilization becomes very slack, and the slightest 258 disturbances (e.g. from new flows starting) prevent a high rate 259 from being attained. 261 With a scalable congestion control, the average time from one 262 congestion signal to the next (the recovery time) remains 263 invariant as the flow rate scales, all other factors being equal. 264 This maintains the same degree of control over queueing and 265 utilization whatever the flow rate, as well as ensuring that high 266 throughput is more robust to disturbances. 
The scalable control 267 used most widely (in controlled environments) is Data Center TCP 268 (DCTCP [RFC8257]), which has been implemented and deployed in 269 Windows Server Editions (since 2012), in Linux and in FreeBSD. 270 Although DCTCP as-is functions well over wide-area round trip 271 times, most implementations lack certain safety features that 272 would be necessary for use outside controlled environments like 273 data centres (see Section 6.4.3 and Appendix A). So scalable 274 congestion control needs to be implemented in TCP and other 275 transport protocols (QUIC, SCTP, RTP/RTCP, RMCAT, etc.). Indeed, 276 between the present document being drafted and published, the 277 following scalable congestion controls were implemented: TCP 278 Prague [PragueLinux], QUIC Prague, an L4S variant of the RMCAT 279 SCReAM controller [SCReAM] and the L4S ECN part of BBRv2 [BBRv2] 280 intended for TCP and QUIC transports. 282 2) Network: L4S traffic needs to be isolated from the queuing 283 latency of Classic traffic. One queue per application flow (FQ) 284 is one way to achieve this, e.g. FQ-CoDel [RFC8290]. However, 285 just two queues is sufficient and does not require inspection of 286 transport layer headers in the network, which is not always 287 possible (see Section 5.2). With just two queues, it might seem 288 impossible to know how much capacity to schedule for each queue 289 without inspecting how many flows at any one time are using each. 290 And it would be undesirable to arbitrarily divide access network 291 capacity into two partitions. The Dual Queue Coupled AQM was 292 developed as a minimal complexity solution to this problem. It 293 acts like a 'semi-permeable' membrane that partitions latency but 294 not bandwidth. As such, the two queues are for transition from 295 Classic to L4S behaviour, not bandwidth prioritization. 297 Section 4 gives a high level explanation of how the per-flow-queue 298 (FQ) and DualQ variants of L4S work, and 299 [I-D.ietf-tsvwg-aqm-dualq-coupled] gives a full explanation of the 300 DualQ Coupled AQM framework. A specific marking algorithm is not 301 mandated for L4S AQMs. Appendices of 302 [I-D.ietf-tsvwg-aqm-dualq-coupled] give non-normative examples 303 that have been implemented and evaluated, and give recommended 304 default parameter settings. It is expected that L4S experiments 305 will improve knowledge of parameter settings and whether the set 306 of marking algorithms needs to be limited. 308 3) Protocol: A host needs to distinguish L4S and Classic packets 309 with an identifier so that the network can classify them into 310 their separate treatments. [I-D.ietf-tsvwg-ecn-l4s-id] concludes 311 that all alternatives involve compromises, but the ECT(1) and CE 312 codepoints of the ECN field represent a workable solution. As 313 already explained, the network also uses ECN to immediately signal 314 the very start of queue growth to the transport. 316 3. Terminology 318 Classic Congestion Control: A congestion control behaviour that can 319 co-exist with standard Reno [RFC5681] without causing 320 significantly negative impact on its flow rate [RFC5033]. The 321 scaling problem with Classic congestion control is explained, with 322 examples, in Section 5.1 and in [RFC3649]. 324 Scalable Congestion Control: A congestion control where the average 325 time from one congestion signal to the next (the recovery time) 326 remains invariant as the flow rate scales, all other factors being 327 equal. 
For instance, DCTCP averages 2 congestion signals per 328 round-trip whatever the flow rate, as do other recently developed 329 scalable congestion controls (e.g. Relentless TCP [Mathis09], TCP 330 Prague [I-D.briscoe-iccrg-prague-congestion-control], 331 [PragueLinux], BBRv2 [BBRv2] and the L4S variant of SCReAM for 332 real-time media [SCReAM], [RFC8298]). See Section 4.3 of 333 [I-D.ietf-tsvwg-ecn-l4s-id] for more explanation. 335 Classic service: The Classic service is intended for all the 336 congestion control behaviours that co-exist with Reno [RFC5681] 337 (e.g. Reno itself, Cubic [RFC8312], 338 Compound [I-D.sridharan-tcpm-ctcp], TFRC [RFC5348]). The term 339 'Classic queue' means a queue providing the Classic service. 341 Low-Latency, Low-Loss Scalable throughput (L4S) service: The 'L4S' 342 service is intended for traffic from scalable congestion control 343 algorithms, such as the Prague congestion 344 control [I-D.briscoe-iccrg-prague-congestion-control], which was 345 derived from DCTCP [RFC8257]. The L4S service is for more 346 general traffic than just TCP Prague--it allows the set of 347 congestion controls with similar scaling properties to Prague to 348 evolve, such as the examples listed above (Relentless, SCReAM). 349 The term 'L4S queue' means a queue providing the L4S service. 351 The terms Classic or L4S can also qualify other nouns, such as 352 'queue', 'codepoint', 'identifier', 'classification', 'packet', 353 'flow'. For example: an L4S packet means a packet with an L4S 354 identifier sent from an L4S congestion control. 356 Both Classic and L4S services can cope with a proportion of 357 unresponsive or less-responsive traffic as well, but in the L4S 358 case its rate has to be smooth enough or low enough not to build a 359 queue (e.g. DNS, VoIP, game sync datagrams, etc.). 361 Reno-friendly: The subset of Classic traffic that is friendly to the 362 standard Reno congestion control defined for TCP in [RFC5681]. 363 Reno-friendly is used in place of 'TCP-friendly', given the latter 364 has become imprecise, because the TCP protocol is now used with so 365 many different congestion control behaviours, and Reno is used in 366 non-TCP transports such as QUIC [RFC9000]. 368 Classic ECN: The original Explicit Congestion Notification (ECN) 369 protocol [RFC3168], which requires ECN signals to be treated as 370 equivalent to drops, both when generated in the network and when 371 responded to by the sender. 373 L4S uses the ECN field as an identifier 374 [I-D.ietf-tsvwg-ecn-l4s-id] with the names for the four codepoints 375 of the 2-bit IP-ECN field unchanged from those defined in 376 [RFC3168]: Not ECT, ECT(0), ECT(1) and CE, where ECT stands for 377 ECN-Capable Transport and CE stands for Congestion Experienced. A 378 packet marked with the CE codepoint is termed 'ECN-marked' or 379 sometimes just 'marked' where the context makes ECN obvious. 381 Site: A home, mobile device, small enterprise or campus, where the 382 network bottleneck is typically the access link to the site. Not 383 all network arrangements fit this model but it is a useful, widely 384 applicable generalization. 386 4. L4S Architecture Components 388 The L4S architecture is composed of the elements in the following 389 three subsections. 391 4.1. Protocol Mechanisms 393 The L4S architecture involves: a) unassignment of an identifier; b) 394 reassignment of the same identifier; and c) optional further 395 identifiers: 397 a.
An essential aspect of a scalable congestion control is the use 398 of explicit congestion signals. 'Classic' ECN [RFC3168] requires 399 an ECN signal to be treated as equivalent to drop, both when it 400 is generated in the network and when it is responded to by hosts. 401 L4S needs networks and hosts to support a more fine-grained 402 meaning for each ECN signal that is less severe than a drop, so 403 that the L4S signals: 405 * can be much more frequent; 407 * can be signalled immediately, without the significant delay 408 required to smooth out fluctuations in the queue. 410 To enable L4S, the standards track [RFC3168] has had to be 411 updated to allow L4S packets to depart from the 'equivalent to 412 drop' constraint. [RFC8311] is a standards track update to relax 413 specific requirements in RFC 3168 (and certain other standards 414 track RFCs), which clears the way for the experimental changes 415 proposed for L4S. [RFC8311] also reclassifies the original 416 experimental assignment of the ECT(1) codepoint as an ECN 417 nonce [RFC3540] as historic. 419 b. [I-D.ietf-tsvwg-ecn-l4s-id] specifies that ECT(1) is used as the 420 identifier to classify L4S packets into a separate treatment from 421 Classic packets. This satisfies the requirements for identifying 422 an alternative ECN treatment in [RFC4774]. 424 The CE codepoint is used to indicate Congestion Experienced by 425 both L4S and Classic treatments. This raises the concern that a 426 Classic AQM earlier on the path might have marked some ECT(0) 427 packets as CE. Then these packets will be erroneously classified 428 into the L4S queue. Appendix B of [I-D.ietf-tsvwg-ecn-l4s-id] 429 explains why five unlikely eventualities all have to coincide for 430 this to have any detrimental effect, which even then would only 431 involve a vanishingly small likelihood of a spurious 432 retransmission. 434 c. A network operator might wish to include certain unresponsive, 435 non-L4S traffic in the L4S queue if it is deemed to be smoothly 436 enough paced and low enough rate not to build a queue. For 437 instance, VoIP, low rate datagrams to sync online games, 438 relatively low rate application-limited traffic, DNS, LDAP, etc. 439 This traffic would need to be tagged with specific identifiers, 440 e.g. a low latency Diffserv Codepoint such as Expedited 441 Forwarding (EF [RFC3246]), Non-Queue-Building 442 (NQB [I-D.ietf-tsvwg-nqb]), or operator-specific identifiers. 444 4.2. Network Components 446 The L4S architecture aims to provide low latency without the _need_ 447 for per-flow operations in network components. Nonetheless, the 448 architecture does not preclude per-flow solutions. The following 449 bullets describe the known arrangements: a) the DualQ Coupled AQM 450 with an L4S AQM in one queue coupled from a Classic AQM in the other; 451 b) Per-Flow Queues with an instance of a Classic and an L4S AQM in 452 each queue; c) Dual queues with per-flow AQMs, but no per-flow 453 queues: 455 a. The Dual Queue Coupled AQM (illustrated in Figure 1) achieves the 456 'semi-permeable' membrane property mentioned earlier as follows. 457 The obvious part is that using two separate queues isolates the 458 queuing delay of one from the other. The less obvious part is 459 how the two queues act as if they are a single pool of bandwidth 460 without the scheduler needing to decide between them.
This is 461 achieved by having the Classic AQM provide a congestion signal to 462 both queues in a manner that ensures a consistent response from 463 the two types of congestion control. In other words, the Classic 464 AQM generates a drop/mark probability based on congestion in the 465 Classic queue, uses this probability to drop/mark packets in that 466 queue, and also uses this probability to affect the marking 467 probability in the L4S queue. This coupling of the congestion 468 signaling between the two queues makes the L4S flows slow down to 469 leave the right amount of capacity for the Classic traffic (as 470 they would if they were the same type of traffic sharing the same 471 queue). Then the scheduler can serve the L4S queue with priority 472 (denoted by the '1' on the higher priority input), because the 473 L4S traffic isn't offering up enough traffic to use all the 474 priority that it is given. Therefore, on short time-scales (sub- 475 round-trip) the prioritization of the L4S queue protects its low 476 latency by allowing bursts to dissipate quickly; but on longer 477 time-scales (round-trip and longer) the Classic queue creates an 478 equal and opposite pressure against the L4S traffic to ensure 479 that neither has priority when it comes to bandwidth. The 480 tension between prioritizing L4S and coupling the marking from 481 the Classic AQM results in approximate per-flow fairness. To 482 protect against unresponsive traffic in the L4S queue taking 483 advantage of the prioritization and starving the Classic queue, 484 it is advisable not to use strict priority, but instead to use a 485 weighted scheduler (see Appendix A of 486 [I-D.ietf-tsvwg-aqm-dualq-coupled]). 488 When there is no Classic traffic, the L4S queue's AQM comes into 489 play. It starts congestion marking with a very shallow queue, so 490 L4S traffic maintains very low queuing delay. 492 If either queue becomes persistently overloaded, ECN marking is 493 disabled, as recommended in Section 7 of [RFC3168] and 494 Section 4.2.1 of [RFC7567]. Then both queues introduce the same 495 level of drop (not shown in the figure). 497 The Dual Queue Coupled AQM has been specified as generically as 498 possible [I-D.ietf-tsvwg-aqm-dualq-coupled] without specifying 499 the particular AQMs to use in the two queues so that designers 500 are free to implement diverse ideas. Informational appendices in 501 that draft give pseudocode examples of two different specific AQM 502 approaches: one called DualPI2 (pronounced Dual PI 503 Squared) [DualPI2Linux] that uses the PI2 variant of PIE, and a 504 zero-config variant of RED called Curvy RED. A DualQ Coupled AQM 505 based on PIE has also been specified and implemented for Low 506 Latency DOCSIS [DOCSIS3.1]. 508 (3) (2) 509 .-------^------. .--------------^-------------------. 510 ,-(1)-----. ______ 511 ; ________ : L4S --------. | | 512 :|Scalable| : _\ ||___\_| mark | 513 :| sender | : __________ / / || / |______|\ _________ 514 :|________|\; | |/ --------' ^ \1|condit'nl| 515 `---------'\_| IP-ECN | Coupling : \|priority |_\ 516 ________ / |Classifier| : /|scheduler| / 517 |Classic |/ |__________|\ --------. ___:__ / |_________| 518 | sender | \_\ || | |||___\_| mark/|/ 519 |________| / || | ||| / | drop | 520 Classic --------' |______| 522 Figure 1: Components of an L4S DualQ Coupled AQM Solution: 1) 523 Scalable Sending Host; 2) Isolation in separate network 524 queues; and 3) Packet Identification Protocol 526 b. 
Per-Flow Queues and AQMs: A scheduler with per-flow queues such 527 as FQ-CoDel or FQ-PIE can be used for L4S. For instance within 528 each queue of an FQ-CoDel system, as well as a CoDel AQM, there 529 is typically also the option of ECN marking at an immediate 530 (unsmoothed) shallow threshold to support use in data centres 531 (see Sec.5.2.7 of [RFC8290]). This can be modified so that the 532 shallow threshold is solely applied to ECT(1) packets 533 [FQ_CoDel_Thresh]. Then if there is a flow of non-ECN or ECT(0) 534 packets in the per-flow-queue, the Classic AQM (e.g. CoDel) is 535 applied; while if there is a flow of ECT(1) packets in the queue, 536 the shallower (typically sub-millisecond) threshold is applied. 537 In addition, ECT(0) and not-ECT packets could potentially be 538 classified into a separate flow-queue from ECT(1) and CE packets 539 to avoid them mixing if they share a common flow-identifier (e.g. 540 in a VPN). 542 c. Dual-queues, but per-flow AQMs: It should also be possible to use 543 dual queues for isolation, but with per-flow marking to control 544 flow-rates (instead of the coupled per-queue marking of the Dual 545 Queue Coupled AQM). One of the two queues would be for isolating 546 L4S packets, which would be classified by the ECN codepoint. 547 Flow rates could be controlled by flow-specific marking. The 548 policy goal of the marking could be to differentiate flow rates 549 (e.g. [Nadas20], which requires additional signalling of a per- 550 flow 'value'), or to equalize flow-rates (perhaps in a similar 551 way to Approx Fair CoDel [AFCD], 552 [I-D.morton-tsvwg-codel-approx-fair], but with two queues not 553 one). 555 Note that whenever the term 'DualQ' is used loosely without 556 saying whether marking is per-queue or per-flow, it means a dual 557 queue AQM with per-queue marking. 559 4.3. Host Mechanisms 561 The L4S architecture includes two main mechanisms in the end host 562 that we enumerate next: 564 a. Scalable Congestion Control at the sender: Section 2 defines a 565 scalable congestion control as one where the average time from 566 one congestion signal to the next (the recovery time) remains 567 invariant as the flow rate scales, all other factors being equal. 568 Data Center TCP is the most widely used example. It has been 569 documented as an informational record of the protocol currently 570 in use in controlled environments [RFC8257]. A draft list of 571 safety and performance improvements for a scalable congestion 572 control to be usable on the public Internet has been drawn up 573 (the so-called 'Prague L4S requirements' in Appendix A of 574 [I-D.ietf-tsvwg-ecn-l4s-id]). The subset that involve risk of 575 harm to others have been captured as normative requirements in 576 Section 4 of [I-D.ietf-tsvwg-ecn-l4s-id]. TCP 577 Prague [I-D.briscoe-iccrg-prague-congestion-control] has been 578 implemented in Linux as a reference implementation to address 579 these requirements [PragueLinux]. 581 Transport protocols other than TCP use various congestion 582 controls that are designed to be friendly with Reno. Before they 583 can use the L4S service, they will need to be updated to 584 implement a scalable congestion response, which they will have to 585 indicate by using the ECT(1) codepoint. Scalable variants are 586 under consideration for more recent transport protocols, 587 e.g. QUIC, and the L4S ECN part of BBRv2 [BBRv2] is a scalable 588 congestion control intended for the TCP and QUIC transports, 589 amongst others. 
Also an L4S variant of the RMCAT SCReAM 590 controller [RFC8298] has been implemented [SCReAM] for media 591 transported over RTP. 593 Section 4.3 of [I-D.ietf-tsvwg-ecn-l4s-id] defines scalable 594 congestion control in more detail, and specifies the 595 requirements that an L4S scalable congestion control has to 596 comply with. 598 b. The ECN feedback in some transport protocols is already 599 sufficiently fine-grained for L4S (specifically DCCP [RFC4340] 600 and QUIC [RFC9000]). But others either require update or are in 601 the process of being updated: 603 * For the case of TCP, the feedback protocol for ECN embeds the 604 assumption from Classic ECN [RFC3168] that an ECN mark is 605 equivalent to a drop, making it unusable for a scalable TCP. 606 Therefore, the implementation of TCP receivers will have to be 607 upgraded [RFC7560]. Work to standardize and implement more 608 accurate ECN feedback for TCP (AccECN) is in 609 progress [I-D.ietf-tcpm-accurate-ecn], [PragueLinux]. 611 * ECN feedback is only roughly sketched in an appendix of the 612 SCTP specification [RFC4960]. A fuller specification has been 613 proposed in a long-expired draft [I-D.stewart-tsvwg-sctpecn], 614 which would need to be implemented and deployed before SCTP 615 could support L4S. 617 * For RTP, sufficient ECN feedback was defined in [RFC6679], but 618 [RFC8888] defines the latest standards track improvements. 620 5. Rationale 622 5.1. Why These Primary Components? 624 Explicit congestion signalling (protocol): Explicit congestion 625 signalling is a key part of the L4S approach. In contrast, use of 626 drop as a congestion signal creates a tension because drop is both 627 an impairment (less would be better) and a useful signal (more 628 would be better): 630 * Explicit congestion signals can be used many times per round 631 trip, to keep tight control, without any impairment. Under 632 heavy load, even more explicit signals can be applied so the 633 queue can be kept short whatever the load. In contrast, 634 Classic AQMs have to introduce very high packet drop at high 635 load to keep the queue short. By using ECN, an L4S congestion 636 control's sawtooth reduction can be smaller and therefore 637 return to the operating point more often, without worrying that 638 more sawteeth will cause more signals. The consequent smaller 639 amplitude sawteeth fit between an empty queue and a very 640 shallow marking threshold (~1 ms in the public Internet), so 641 queue delay variation can be very low, without risk of under- 642 utilization. 644 * Explicit congestion signals can be emitted immediately to track 645 fluctuations of the queue. L4S shifts smoothing from the 646 network to the host. The network doesn't know the round trip 647 times of any of the flows. So if the network is responsible 648 for smoothing (as in the Classic approach), it has to assume a 649 worst case RTT, otherwise long RTT flows would become unstable. 651 This delays Classic congestion signals by 100-200 ms. In 652 contrast, each host knows its own round trip time. So, in the 653 L4S approach, the host can smooth each flow over its own RTT, 654 introducing no more smoothing delay than strictly necessary 655 (usually only a few milliseconds; see the sketch below). A host can also choose not 656 to introduce any smoothing delay if appropriate, e.g. during 657 flow start-up.
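As a non-normative illustration of the two bullets above (frequent, immediate signals from the network, with any smoothing done by the sender over its own RTT), the following minimal sketch shows a DCTCP-style response in the spirit of [RFC8257]. The class and variable names are invented for this example; the gain of 1/16 is the value suggested in [RFC8257]; a deployable control such as TCP Prague adds many further details and safety requirements not shown here.

   # Illustrative sketch only: a DCTCP-style scalable sender (after
   # RFC 8257).  Names and structure are invented for this example.
   G = 1.0 / 16        # EWMA gain; RFC 8257 suggests 1/16

   class ScalableSender:
       def __init__(self, cwnd=10.0):
           self.cwnd = cwnd     # congestion window [packets]
           self.alpha = 1.0     # smoothed fraction of CE-marked packets

       def per_rtt_update(self, acked, marked):
           """Called once per round trip with ACKed and CE-marked packet counts."""
           frac = marked / max(acked, 1)
           # The sender smooths over its own RTT, so the network can signal
           # the very start of queue growth immediately and often.
           self.alpha = (1 - G) * self.alpha + G * frac
           if marked:
               # Reduce in proportion to the marking level, at most once per
               # RTT; in steady state this leaves roughly 2 marks per RTT
               # whatever the flow rate.
               self.cwnd *= 1 - self.alpha / 2
           else:
               self.cwnd += 1   # Reno-like additive increase when unmarked

A Classic sender, by contrast, responds to any single mark or loss with a large (e.g. halving) reduction, which is why a Classic AQM has to smooth and delay its signals instead.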
659 Neither of the above are feasible if explicit congestion 660 signalling has to be considered 'equivalent to drop' (as was 661 required with Classic ECN [RFC3168]), because drop is an 662 impairment as well as a signal. So drop cannot be excessively 663 frequent, and drop cannot be immediate, otherwise too many drops 664 would turn out to have been due to only a transient fluctuation in 665 the queue that would not have warranted dropping a packet in 666 hindsight. Therefore, in an L4S AQM, the L4S queue uses a new L4S 667 variant of ECN that is not equivalent to drop (see section 5.2 of 668 [I-D.ietf-tsvwg-ecn-l4s-id]), while the Classic queue uses either 669 Classic ECN [RFC3168] or drop, which are equivalent to each other. 671 Before Classic ECN was standardized, there were various proposals 672 to give an ECN mark a different meaning from drop. However, there 673 was no particular reason to agree on any one of the alternative 674 meanings, so 'equivalent to drop' was the only compromise that 675 could be reached. RFC 3168 contains a statement that: 677 "An environment where all end nodes were ECN-Capable could 678 allow new criteria to be developed for setting the CE 679 codepoint, and new congestion control mechanisms for end-node 680 reaction to CE packets. However, this is a research issue, and 681 as such is not addressed in this document." 683 Latency isolation (network): L4S congestion controls keep queue 684 delay low whereas Classic congestion controls need a queue of the 685 order of the RTT to avoid under-utilization. One queue cannot 686 have two lengths, therefore L4S traffic needs to be isolated in a 687 separate queue (e.g. DualQ) or queues (e.g. FQ). 689 Coupled congestion notification: Coupling the congestion 690 notification between two queues as in the DualQ Coupled AQM is not 691 necessarily essential, but it is a simple way to allow senders to 692 determine their rate, packet by packet, rather than be overridden 693 by a network scheduler. An alternative is for a network scheduler 694 to control the rate of each application flow (see discussion in 695 Section 5.2). 697 L4S packet identifier (protocol): Once there are at least two 698 treatments in the network, hosts need an identifier at the IP 699 layer to distinguish which treatment they intend to use. 701 Scalable congestion notification: A scalable congestion control in 702 the host keeps the signalling frequency from the network high 703 whatever the flow rate, so that queue delay variations can be 704 small when conditions are stable, and rate can track variations in 705 available capacity as rapidly as possible otherwise. 707 Low loss: Latency is not the only concern of L4S. The 'Low Loss' 708 part of the name denotes that L4S generally achieves zero 709 congestion loss due to its use of ECN. Otherwise, loss would 710 itself cause delay, particularly for short flows, due to 711 retransmission delay [RFC2884]. 713 Scalable throughput: The "Scalable throughput" part of the name 714 denotes that the per-flow throughput of scalable congestion 715 controls should scale indefinitely, avoiding the imminent scaling 716 problems with Reno-friendly congestion control 717 algorithms [RFC3649]. It was known when TCP congestion avoidance 718 was first developed in 1988 that it would not scale to high 719 bandwidth-delay products (see footnote 6 in [TCP-CA]). Today, 720 regular broadband flow rates over WAN distances are already beyond 721 the scaling range of Classic Reno congestion control. 
So `less 722 unscalable' Cubic [RFC8312] and Compound [I-D.sridharan-tcpm-ctcp] 723 variants of TCP have been successfully deployed. However, these 724 are now approaching their scaling limits. 726 For instance, we will consider a scenario with a maximum RTT of 727 30 ms at the peak of each sawtooth. As Reno packet rate scales 8x 728 from 1,250 to 10,000 packet/s (from 15 to 120 Mb/s with 1500 B 729 packets), the time to recover from a congestion event rises 730 proportionately by 8x as well, from 422 ms to 3.38 s. It is 731 clearly problematic for a congestion control to take multiple 732 seconds to recover from each congestion event. Cubic [RFC8312] 733 was developed to be less unscalable, but it is approaching its 734 scaling limit; with the same max RTT of 30 ms, at 120 Mb/s Cubic 735 is still fully in its Reno-friendly mode, so it takes about 4.3 s 736 to recover. However, once the flow rate scales by 8x again to 737 960 Mb/s it enters true Cubic mode, with a recovery time of 738 12.2 s. From then on, each further scaling by 8x doubles Cubic's 739 recovery time (because the cube root of 8 is 2), e.g. at 7.68 Gb/s 740 the recovery time is 24.3 s. In contrast, a scalable congestion 741 control like DCTCP or TCP Prague induces 2 congestion signals per 742 round trip on average, which remains invariant for any flow rate, 743 keeping dynamic control very tight. 745 For a feel of where the global average lone-flow download sits on 746 this scale at the time of writing (2021), according to [BDPdata] 747 globally averaged fixed access capacity was 103 Mb/s in 2020 and 748 averaged base RTT to a CDN was 25-34 ms in 2019. Averaging of per- 749 country data was weighted by Internet user population. So a lone 750 CUBIC flow would at best take about 200 round trips (5 s) to 751 recover from each of its sawtooth reductions, if the flow even 752 lasted that long. This is described as 'at best' because it 753 assumes everyone uses an AQM, whereas in reality most users still 754 have a bloated tail-drop buffer. So likely average recovery time 755 would be at least 4x 5 s, if not more, because RTT under load 756 would be at least double, and recovery time depends on the square 757 of RTT. 759 Although work on scaling congestion controls tends to start with 760 TCP as the transport, the above is not intended to exclude other 761 transports (e.g. SCTP, QUIC) or less elastic algorithms 762 (e.g. RMCAT), which all tend to adopt the same or similar 763 developments. 765 5.2. What L4S adds to Existing Approaches 767 All the following approaches address some part of the same problem 768 space as L4S. In each case, it is shown that L4S complements them or 769 improves on them, rather than being a mutually exclusive alternative: 771 Diffserv: Diffserv addresses the problem of bandwidth apportionment 772 for important traffic as well as queuing latency for delay- 773 sensitive traffic. Of these, L4S solely addresses the problem of 774 queuing latency. Diffserv will still be necessary where important 775 traffic requires priority (e.g. for commercial reasons, or for 776 protection of critical infrastructure traffic) - see 777 [I-D.briscoe-tsvwg-l4s-diffserv]. Nonetheless, the L4S approach 778 can provide low latency for _all_ traffic within each Diffserv 779 class (including the case where there is only the one default 780 Diffserv class). 782 Also, Diffserv can only provide a latency benefit if a small 783 subset of the traffic on a bottleneck link requests low latency.
784 As already explained, it has no effect when all the applications 785 in use at one time at a single site (home, small business or 786 mobile device) require low latency. In contrast, because L4S 787 works for all traffic, it needs none of the management baggage 788 (traffic policing, traffic contracts) associated with favouring 789 some packets over others. This baggage has probably held Diffserv 790 back from widespread end-to-end deployment. 792 In particular, because networks tend not to trust end systems to 793 identify which packets should be favoured over others, where 794 networks assign packets to Diffserv classes they tend to use 795 packet inspection of application flow identifiers or deeper 796 inspection of application signatures. Thus, nowadays, Diffserv 797 doesn't always sit well with encryption of the layers above IP 798 [RFC8404]. So users have to choose between privacy and QoS. 800 As with Diffserv, the L4S identifier is in the IP header. But, in 801 contrast to Diffserv, the L4S identifier does not convey a want or 802 a need for a certain level of quality. Rather, it promises a 803 certain behaviour (scalable congestion response), which networks 804 can objectively verify if they need to. This is because low delay 805 depends on collective host behaviour, whereas bandwidth priority 806 depends on network behaviour. 808 State-of-the-art AQMs: AQMs such as PIE and FQ-CoDel give a 809 significant reduction in queuing delay relative to no AQM at all. 810 L4S is intended to complement these AQMs, and should not distract 811 from the need to deploy them as widely as possible. Nonetheless, 812 AQMs alone cannot reduce queuing delay too far without 813 significantly reducing link utilization, because the root cause of 814 the problem is on the host - where Classic congestion controls use 815 large saw-toothing rate variations. The L4S approach resolves 816 this tension by ensuring hosts can minimize the size of their 817 sawteeth without appearing so aggressive to Classic flows that 818 they starve them. 820 Per-flow queuing or marking: Similarly, per-flow approaches such as 821 FQ-CoDel or Approx Fair CoDel [AFCD] are not incompatible with the 822 L4S approach. However, per-flow queuing alone is not enough - it 823 only isolates the queuing of one flow from others; not from 824 itself. Per-flow implementations still need to have support for 825 scalable congestion control added, which has already been done in 826 FQ-CoDel (see Sec.5.2.7 of [RFC8290]). Without this simple 827 modification, per-flow AQMs like FQ-CoDel would still not be able 828 to support applications that need both very low delay and high 829 bandwidth, e.g. video-based control of remote procedures, or 830 interactive cloud-based video (see Note 1 below). 832 Although per-flow techniques are not incompatible with L4S, it is 833 important to have the DualQ alternative. This is because handling 834 end-to-end (layer 4) flows in the network (layer 3 or 2) precludes 835 some important end-to-end functions. For instance: 837 a. Per-flow forms of L4S like FQ-CoDel are incompatible with full 838 end-to-end encryption of transport layer identifiers for 839 privacy and confidentiality (e.g. IPSec or encrypted VPN 840 tunnels, as opposed to TLS over UDP), because they require 841 packet inspection to access the end-to-end transport flow 842 identifiers. 844 In contrast, the DualQ form of L4S requires no deeper 845 inspection than the IP layer. 
So, as long as operators take 846 the DualQ approach, their users can have both very low queuing 847 delay and full end-to-end encryption [RFC8404]. 849 b. With per-flow forms of L4S, the network takes over control of 850 the relative rates of each application flow. Some see it as 851 an advantage that the network will prevent some flows running 852 faster than others. Others consider it an inherent part of 853 the Internet's appeal that applications can control their rate 854 while taking account of the needs of others via congestion 855 signals. They maintain that this has allowed applications 856 with interesting rate behaviours to evolve, for instance, 857 variable bit-rate video that varies around an equal share 858 rather than being forced to remain equal at every instant, or 859 e2e scavenger behaviours [RFC6817] that use less than an equal 860 share of capacity [LEDBAT_AQM]. 862 The L4S architecture does not require the IETF to commit to 863 one approach over the other, because it supports both, so that 864 the 'market' can decide. Nonetheless, in the spirit of 'Do 865 one thing and do it well' [McIlroy78], the DualQ option 866 provides low delay without prejudging the issue of flow-rate 867 control. Then, flow rate policing can be added separately if 868 desired. This allows application control up to a point, but 869 the network can still choose to set the point at which it 870 intervenes to prevent one flow completely starving another. 872 Note: 874 1. It might seem that self-inflicted queuing delay within a per- 875 flow queue should not be counted, because if the delay wasn't 876 in the network it would just shift to the sender. However, 877 modern adaptive applications, e.g. HTTP/2 [RFC7540] or some 878 interactive media applications (see Section 6.1), can keep low 879 latency objects at the front of their local send queue by 880 shuffling priorities of other objects dependent on the 881 progress of other transfers. They cannot shuffle objects once 882 they have released them into the network. 884 Alternative Back-off ECN (ABE): Here again, L4S is not an 885 alternative to ABE but a complement that introduces much lower 886 queuing delay. ABE [RFC8511] alters the host behaviour in 887 response to ECN marking to utilize a link better and give ECN 888 flows faster throughput. It uses ECT(0) and assumes the network 889 still treats ECN and drop the same. Therefore ABE exploits any 890 lower queuing delay that AQMs can provide. But as explained 891 above, AQMs still cannot reduce queuing delay too far without 892 losing link utilization (to allow for other, non-ABE, flows). 894 BBR: Bottleneck Bandwidth and Round-trip propagation time 895 (BBR [I-D.cardwell-iccrg-bbr-congestion-control]) controls queuing 896 delay end-to-end without needing any special logic in the network, 897 such as an AQM. So it works pretty-much on any path (although it 898 has not been without problems, particularly capacity sharing in 899 BBRv1). BBR keeps queuing delay reasonably low, but perhaps not 900 quite as low as with state-of-the-art AQMs such as PIE or FQ- 901 CoDel, and certainly nowhere near as low as with L4S. Queuing 902 delay is also not consistently low, due to BBR's regular bandwidth 903 probing spikes and its aggressive flow start-up phase. 905 L4S complements BBR. Indeed BBRv2 [BBRv2] can use L4S ECN where 906 available and a scalable L4S congestion control behaviour in 907 response to any ECN signalling from the path. 
The L4S ECN signal 908 complements the delay based congestion control aspects of BBR with 909 an explicit indication that hosts can use, both to converge on a 910 fair rate and to keep below a shallow queue target set by the 911 network. Without L4S ECN, both these aspects need to be assumed 912 or estimated. 914 6. Applicability 916 6.1. Applications 918 A transport layer that solves the current latency issues will provide 919 new service, product and application opportunities. 921 With the L4S approach, the following existing applications also 922 experience significantly better quality of experience under load: 924 * Gaming, including cloud based gaming; 926 * VoIP; 928 * Video conferencing; 930 * Web browsing; 932 * (Adaptive) video streaming; 934 * Instant messaging. 936 The significantly lower queuing latency also enables some interactive 937 application functions to be offloaded to the cloud that would hardly 938 even be usable today: 940 * Cloud based interactive video; 942 * Cloud based virtual and augmented reality. 944 The above two applications have been successfully demonstrated with 945 L4S, both running together over a 40 Mb/s broadband access link 946 loaded up with the numerous other latency sensitive applications in 947 the previous list as well as numerous downloads - all sharing the 948 same bottleneck queue simultaneously [L4Sdemo16]. For the former, a 949 panoramic video of a football stadium could be swiped and pinched so 950 that, on the fly, a proxy in the cloud could generate a sub-window of 951 the match video under the finger-gesture control of each user. For 952 the latter, a virtual reality headset displayed a viewport taken from 953 a 360 degree camera in a racing car. The user's head movements 954 controlled the viewport extracted by a cloud-based proxy. In both 955 cases, with 7 ms end-to-end base delay, the additional queuing delay 956 of roughly 1 ms was so low that it seemed the video was generated 957 locally. 959 Using a swiping finger gesture or head movement to pan a video are 960 extremely latency-demanding actions--far more demanding than VoIP. 961 Because human vision can detect extremely low delays of the order of 962 single milliseconds when delay is translated into a visual lag 963 between a video and a reference point (the finger or the orientation 964 of the head sensed by the balance system in the inner ear --- the 965 vestibular system). 967 Without the low queuing delay of L4S, cloud-based applications like 968 these would not be credible without significantly more access 969 bandwidth (to deliver all possible video that might be viewed) and 970 more local processing, which would increase the weight and power 971 consumption of head-mounted displays. When all interactive 972 processing can be done in the cloud, only the data to be rendered for 973 the end user needs to be sent. 975 Other low latency high bandwidth applications such as: 977 * Interactive remote presence; 979 * Video-assisted remote control of machinery or industrial 980 processes. 982 are not credible at all without very low queuing delay. No amount of 983 extra access bandwidth or local processing can make up for lost time. 985 6.2. Use Cases 987 The following use-cases for L4S are being considered by various 988 interested parties: 990 * Where the bottleneck is one of various types of access network: 991 e.g. 
DSL, Passive Optical Networks (PON), DOCSIS cable, mobile, 992 satellite (see Section 6.3 for some technology-specific details) 994 * Private networks of heterogeneous data centres, where there is no 995 single administrator that can arrange for all the simultaneous 996 changes to senders, receivers and network needed to deploy DCTCP: 998 - a set of private data centres interconnected over a wide area 999 with separate administrations, but within the same company 1001 - a set of data centres operated by separate companies 1002 interconnected by a community of interest network (e.g. for the 1003 finance sector) 1005 - multi-tenant (cloud) data centres where tenants choose their 1006 operating system stack (Infrastructure as a Service - IaaS) 1008 * Different types of transport (or application) congestion control: 1010 - elastic (TCP/SCTP); 1012 - real-time (RTP, RMCAT); 1014 - query (DNS/LDAP). 1016 * Where low delay quality of service is required, but without 1017 inspecting or intervening above the IP layer [RFC8404]: 1019 - mobile and other networks have tended to inspect higher layers 1020 in order to guess application QoS requirements. However, with 1021 growing demand for support of privacy and encryption, L4S 1022 offers an alternative. There is no need to select which 1023 traffic to favour for queuing, when L4S can give favourable 1024 queuing to all traffic. 1026 * If queuing delay is minimized, applications with a fixed delay 1027 budget can communicate over longer distances, or via a longer 1028 chain of service functions [RFC7665] or onion routers. 1030 * If delay jitter is minimized, it is possible to reduce the 1031 dejitter buffers on the receive end of video streaming, which 1032 should improve the interactive experience 1034 6.3. Applicability with Specific Link Technologies 1036 Certain link technologies aggregate data from multiple packets into 1037 bursts, and buffer incoming packets while building each burst. WiFi, 1038 PON and cable all involve such packet aggregation, whereas fixed 1039 Ethernet and DSL do not. No sender, whether L4S or not, can do 1040 anything to reduce the buffering needed for packet aggregation. So 1041 an AQM should not count this buffering as part of the queue that it 1042 controls, given no amount of congestion signals will reduce it. 1044 Certain link technologies also add buffering for other reasons, 1045 specifically: 1047 * Radio links (cellular, WiFi, satellite) that are distant from the 1048 source are particularly challenging. The radio link capacity can 1049 vary rapidly by orders of magnitude, so it is considered desirable 1050 to hold a standing queue that can utilize sudden increases of 1051 capacity; 1053 * Cellular networks are further complicated by a perceived need to 1054 buffer in order to make hand-overs imperceptible; 1056 L4S cannot remove the need for all these different forms of 1057 buffering. However, by removing 'the longest pole in the tent' 1058 (buffering for the large sawteeth of Classic congestion controls), 1059 L4S exposes all these 'shorter poles' to greater scrutiny. 1061 Until now, the buffering needed for these additional reasons tended 1062 to be over-specified - with the excuse that none were 'the longest 1063 pole in the tent'. But having removed the 'longest pole', it becomes 1064 worthwhile to minimize them, for instance reducing packet aggregation 1065 burst sizes and MAC scheduling intervals. 1067 6.4. 
Deployment Considerations 1069 L4S AQMs, whether DualQ [I-D.ietf-tsvwg-aqm-dualq-coupled] or FQ, 1070 e.g. [RFC8290], are, in themselves, an incremental deployment 1071 mechanism for L4S - so that L4S traffic can coexist with existing 1072 Classic (Reno-friendly) traffic. Section 6.4.1 explains why only 1073 deploying an L4S AQM in one node at each end of the access link will 1074 realize nearly all the benefit of L4S. 1076 L4S involves both end systems and the network, so Section 6.4.2 1077 suggests some typical sequences to deploy each part, and why there 1078 will be an immediate and significant benefit after deploying just one 1079 part. 1081 Section 6.4.3 and Section 6.4.4 describe the converse incremental 1082 deployment case where there is no L4S AQM at the network bottleneck, 1083 so any L4S flow traversing this bottleneck has to take care in case 1084 it is competing with Classic traffic. 1086 6.4.1. Deployment Topology 1088 L4S AQMs will not have to be deployed throughout the Internet before 1089 L4S can benefit anyone. Operators of public Internet access networks 1090 typically design their networks so that the bottleneck will nearly 1091 always occur at one known (logical) link. This confines the cost of 1092 queue management technology to one place. 1094 The case of mesh networks is different and will be discussed later in 1095 this section. But the known bottleneck case is generally true for 1096 Internet access to all sorts of different 'sites', where the word 1097 'site' includes home networks, small- to medium-sized campus or 1098 enterprise networks and even cellular devices (Figure 2). Also, this 1099 known-bottleneck case tends to be applicable whatever the access link 1100 technology: whether xDSL, cable, PON, cellular, line of sight 1101 wireless or satellite. 1103 Therefore, the full benefit of the L4S service should be available in 1104 the downstream direction when an L4S AQM is deployed at the ingress 1105 to this bottleneck link. And similarly, the full upstream service 1106 will be available once an L4S AQM is deployed at the ingress into the 1107 upstream link. (Of course, multi-homed sites would only see the full 1108 benefit once all their access links were covered.) 1110 ______ 1111 ( ) 1112 __ __ ( ) 1113 |DQ\________/DQ|( enterprise ) 1114 ___ |__/ \__| ( /campus ) 1115 ( ) (______) 1116 ( ) ___||_ 1117 +----+ ( ) __ __ / \ 1118 | DC |-----( Core )|DQ\_______________/DQ|| home | 1119 +----+ ( ) |__/ \__||______| 1120 (_____) __ 1121 |DQ\__/\ __ ,===. 1122 |__/ \ ____/DQ||| ||mobile 1123 \/ \__|||_||device 1124 | o | 1125 `---' 1127 Figure 2: Likely location of DualQ (DQ) Deployments in common 1128 access topologies 1130 Deployment in mesh topologies depends on how overbooked the core is. 1131 If the core is non-blocking, or at least generously provisioned so 1132 that the edges are nearly always the bottlenecks, it would only be 1133 necessary to deploy an L4S AQM at the edge bottlenecks. For example, 1134 some data-centre networks are designed with the bottleneck in the 1135 hypervisor or host NICs, while others bottleneck at the top-of-rack 1136 switch (both the output ports facing hosts and those facing the 1137 core). 1139 An L4S AQM would often next be needed where the WiFi links in a home 1140 sometimes become the bottleneck. And an L4S AQM would eventually 1141 also need to be deployed at any other persistent bottlenecks such as 1142 network interconnections, e.g.
some public Internet exchange points 1143 and the ingress and egress to WAN links interconnecting data-centres. 1145 6.4.2. Deployment Sequences 1147 For any one L4S flow to provide benefit, it requires 3 parts to have 1148 been deployed. This was the same deployment problem that ECN 1149 faced [RFC8170] so we have learned from that experience. 1151 Firstly, L4S deployment exploits the fact that DCTCP already exists 1152 on many Internet hosts (Windows, FreeBSD and Linux); both servers and 1153 clients. Therefore, an L4S AQM can be deployed at a network 1154 bottleneck to immediately give a working deployment of all the L4S 1155 parts for testing, as long as the ECT(0) codepoint is switched to 1156 ECT(1). DCTCP needs some safety concerns to be fixed for general use 1157 over the public Internet (see Section 4.3 of 1158 [I-D.ietf-tsvwg-ecn-l4s-id]), but DCTCP is not on by default, so 1159 these issues can be managed within controlled deployments or 1160 controlled trials. 1162 Secondly, the performance improvement with L4S is so significant that 1163 it enables new interactive services and products that were not 1164 previously possible. It is much easier for companies to initiate new 1165 work on deployment if there is budget for a new product trial. If, 1166 in contrast, there were only an incremental performance improvement 1167 (as with Classic ECN), spending on deployment tends to be much harder 1168 to justify. 1170 Thirdly, the L4S identifier is defined so that initially network 1171 operators can enable L4S exclusively for certain customers or certain 1172 applications. But this is carefully defined so that it does not 1173 compromise future evolution towards L4S as an Internet-wide service. 1174 This is because the L4S identifier is defined not only as the end-to- 1175 end ECN field, but it can also optionally be combined with any other 1176 packet header or some status of a customer or their access link (see 1177 section 5.4 of [I-D.ietf-tsvwg-ecn-l4s-id]). Operators could do this 1178 anyway, even if it were not blessed by the IETF. However, it is best 1179 for the IETF to specify that, if they use their own local identifier, 1180 it must be in combination with the IETF's identifier. Then, if an 1181 operator has opted for an exclusive local-use approach, later they 1182 only have to remove this extra rule to make the service work 1183 Internet-wide - it will already traverse middleboxes, peerings, etc. 1185 +-+--------------------+----------------------+---------------------+ 1186 | | Servers or proxies | Access link | Clients | 1187 +-+--------------------+----------------------+---------------------+ 1188 |0| DCTCP (existing) | | DCTCP (existing) | 1189 +-+--------------------+----------------------+---------------------+ 1190 |1| |Add L4S AQM downstream| | 1191 | | WORKS DOWNSTREAM FOR CONTROLLED DEPLOYMENTS/TRIALS | 1192 +-+--------------------+----------------------+---------------------+ 1193 |2| Upgrade DCTCP to | |Replace DCTCP feedb'k| 1194 | | TCP Prague | | with AccECN | 1195 | | FULLY WORKS DOWNSTREAM | 1196 +-+--------------------+----------------------+---------------------+ 1197 | | | | Upgrade DCTCP to | 1198 |3| | Add L4S AQM upstream | TCP Prague | 1199 | | | | | 1200 | | FULLY WORKS UPSTREAM AND DOWNSTREAM | 1201 +-+--------------------+----------------------+---------------------+ 1203 Figure 3: Example L4S Deployment Sequence 1205 Figure 3 illustrates some example sequences in which the parts of L4S 1206 might be deployed. 
It consists of the following stages: 1208 1. Here, the immediate benefit of a single AQM deployment can be 1209 seen, but limited to a controlled trial or controlled deployment. 1210 In this example downstream deployment is first, but in other 1211 scenarios the upstream might be deployed first. If no AQM at all 1212 was previously deployed for the downstream access, an L4S AQM 1213 greatly improves the Classic service (as well as adding the L4S 1214 service). If an AQM was already deployed, the Classic service 1215 will be unchanged (and L4S will add an improvement on top). 1217 2. In this stage, the name 'TCP 1218 Prague' [I-D.briscoe-iccrg-prague-congestion-control] is used to 1219 represent a variant of DCTCP that is safe to use in a production 1220 Internet environment. If the application is primarily 1221 unidirectional, 'TCP Prague' at one end will provide all the 1222 benefit needed. For TCP transports, Accurate ECN feedback 1223 (AccECN) [I-D.ietf-tcpm-accurate-ecn] is needed at the other end, 1224 but it is a generic ECN feedback facility that is already planned 1225 to be deployed for other purposes, e.g. DCTCP, BBR. The two ends 1226 can be deployed in either order, because, in TCP, an L4S 1227 congestion control only enables itself if it has negotiated the 1228 use of AccECN feedback with the other end during the connection 1229 handshake. Thus, deployment of TCP Prague on a server enables 1230 L4S trials to move to a production service in one direction, 1231 wherever AccECN is deployed at the other end. This stage might 1232 be further motivated by the performance improvements of TCP 1233 Prague relative to DCTCP (see Appendix A.2 of 1234 [I-D.ietf-tsvwg-ecn-l4s-id]). 1236 Unlike TCP, from the outset, QUIC ECN feedback [RFC9000] has 1237 supported L4S. Therefore, if the transport is QUIC, one-ended 1238 deployment of a Prague congestion control at this stage is simple 1239 and sufficient. 1241 3. This is a two-move stage to enable L4S upstream. An L4S AQM or 1242 TCP Prague can be deployed in either order as already explained. 1243 To motivate the first of two independent moves, the deferred 1244 benefit of enabling new services after the second move has to be 1245 worth it to cover the first mover's investment risk. As 1246 explained already, the potential for new interactive services 1247 provides this motivation. An L4S AQM also improves the upstream 1248 Classic service - significantly if no other AQM has already been 1249 deployed. 1251 Note that other deployment sequences might occur. For instance: the 1252 upstream might be deployed first; a non-TCP protocol might be used 1253 end-to-end, e.g. QUIC, RTP; a body such as the 3GPP might require L4S 1254 to be implemented in 5G user equipment, or other random acts of 1255 kindness. 1257 6.4.3. L4S Flow but Non-ECN Bottleneck 1259 If L4S is enabled between two hosts, the L4S sender is required to 1260 coexist safely with Reno in response to any drop (see Section 4.3 of 1261 [I-D.ietf-tsvwg-ecn-l4s-id]). 1263 Unfortunately, as well as protecting Classic traffic, this rule 1264 degrades the L4S service whenever there is any loss, even if the 1265 cause is not persistent congestion at a bottleneck, e.g.: 1267 * congestion loss at other transient bottlenecks, e.g. due to bursts 1268 in shallower queues; 1270 * transmission errors, e.g. due to electrical interference; 1272 * rate policing. 
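As a purely illustrative, non-normative example of the safety rule above, the following Python sketch shows a sender that reduces its congestion window in proportion to the fraction of CE-marked packets (in the style of DCTCP [RFC8257]) but falls back to a Reno-like halving whenever any loss is detected. The class name, the once-per-RTT structure and the EWMA gain are assumptions made up for this illustration; the actual requirements on a scalable sender are those in Section 4.3 of [I-D.ietf-tsvwg-ecn-l4s-id], and the Prague congestion control itself is described in [I-D.briscoe-iccrg-prague-congestion-control].

   # Non-normative sketch: scalable ECN response with Reno-friendly
   # fall-back on loss.  The window is in packets and 'gain' is an
   # illustrative EWMA gain, as used by DCTCP [RFC8257].
   class ScalableSenderSketch:
       def __init__(self, cwnd=10.0, gain=1.0 / 16):
           self.cwnd = cwnd      # congestion window (packets)
           self.alpha = 0.0      # moving average of the CE-marked fraction
           self.gain = gain

       def on_round_trip(self, acked, marked, lost):
           """Update once per RTT with counts of acked, CE-marked and
           lost packets observed during that round trip."""
           if acked > 0:
               frac = marked / acked
               self.alpha = (1 - self.gain) * self.alpha + self.gain * frac
           if lost > 0:
               # Any loss: respond like a Classic (Reno-friendly) sender.
               self.cwnd = max(2.0, self.cwnd / 2)
           elif marked > 0:
               # ECN marks only: reduce in proportion to the marked
               # fraction, so reductions stay small and frequent.
               self.cwnd = max(2.0, self.cwnd * (1 - self.alpha / 2))
           else:
               # No congestion signal: additive increase, 1 packet per RTT.
               self.cwnd += 1.0
           return self.cwnd

With such behaviour, a round trip in which a few percent of packets are CE-marked trims the window only slightly, whereas a single lost packet halves it; that halving is what allows the flow to share a non-ECN bottleneck safely with Classic flows, at the cost of losing the L4S latency benefit there.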
1274 Three complementary approaches are in progress to address this issue, 1275 but they are all currently at the research stage: 1277 * In Prague congestion control, ignore certain losses deemed 1278 unlikely to be due to congestion (using some ideas from 1279 BBR [I-D.cardwell-iccrg-bbr-congestion-control] regarding isolated 1280 losses). This could mask any of the above types of loss while 1281 still coexisting with drop-based congestion controls. 1283 * A combination of RACK, L4S and link retransmission without 1284 resequencing could repair transmission errors without the head of 1285 line blocking delay usually associated with link-layer 1286 retransmission [UnorderedLTE], [I-D.ietf-tsvwg-ecn-l4s-id]; 1288 * Hybrid ECN/drop rate policers (see Section 8.3). 1290 L4S deployment scenarios that minimize these issues (e.g. over 1291 wireline networks) can proceed in parallel to this research, in the 1292 expectation that research success could continually widen L4S 1293 applicability. 1295 6.4.4. L4S Flow but Classic ECN Bottleneck 1297 Classic ECN support is starting to materialize on the Internet as an 1298 increased level of CE marking. It is hard to detect whether this is 1299 all due to the addition of support for ECN in the Linux 1300 implementation of FQ-CoDel, which is not problematic, because FQ 1301 inherently forces the throughput of each flow to be equal, 1302 irrespective of its aggressiveness. However, some of this Classic 1303 ECN marking might be due to single-queue ECN deployment. This case 1304 is discussed in Section 4.3 of [I-D.ietf-tsvwg-ecn-l4s-id]. 1306 6.4.5. L4S AQM Deployment within Tunnels 1308 An L4S AQM uses the ECN field to signal congestion. So, in common 1309 with Classic ECN, if the AQM is within a tunnel or at a lower layer, 1310 correct functioning of ECN signalling requires correct propagation of 1311 the ECN field up the layers [RFC6040], 1312 [I-D.ietf-tsvwg-rfc6040update-shim], 1313 [I-D.ietf-tsvwg-ecn-encap-guidelines]. 1315 7. IANA Considerations (to be removed by RFC Editor) 1317 This specification contains no IANA considerations. 1319 8. Security Considerations 1321 8.1. Traffic Rate (Non-)Policing 1323 Because the L4S service reduces delay without increasing the delay of 1324 Classic traffic, it should not be necessary to rate-police access to 1325 the L4S service. In contrast, Section 5.2 explains how Diffserv only 1326 makes a difference if some packets get less favourable treatment than 1327 others, which typically requires traffic policing, which can, in 1328 turn, lead to further complexity such as traffic contracts at trust 1329 boundaries. Because L4S avoids this management complexity, it is 1330 more likely to work end-to-end. 1332 During early deployment (and perhaps always), some networks will not 1333 offer the L4S service. In general, these networks should not need to 1334 police L4S traffic - they are required not to change the L4S 1335 identifier, merely treating the traffic as Not-ECT, as they might 1336 already treat ECT(1) traffic today. At a bottleneck, such networks 1337 will introduce some queuing and dropping. When a scalable congestion 1338 control detects a drop it will have to respond safely with respect to 1339 Classic congestion controls (as required in Section 4.3 of 1340 [I-D.ietf-tsvwg-ecn-l4s-id]). This will degrade the L4S service to 1341 be no better (but never worse) than Classic best efforts, whenever a 1342 non-ECN bottleneck is encountered on a path (see Section 6.4.3).
1344 In some cases, networks that solely support Classic ECN [RFC3168] in 1345 a single queue bottleneck might opt to police L4S traffic so as to 1346 protect competing Classic ECN traffic. 1348 Certain network operators might choose to restrict access to the L4S 1349 class, perhaps only to selected premium customers as a value-added 1350 service. Their packet classifier (item 2 in Figure 1) could identify 1351 such customers against some other field (e.g. source address range) 1352 as well as classifying on the ECN field. If only the ECN L4S 1353 identifier matched, but not the source address (say), the classifier 1354 could direct these packets (from non-premium customers) into the 1355 Classic queue. Explaining clearly how operators can use 1356 additional local classifiers (see section 5.4 of 1357 [I-D.ietf-tsvwg-ecn-l4s-id]) is intended to remove any motivation to 1358 bleach the L4S identifier. Then at least the L4S ECN identifier will 1359 be more likely to survive end-to-end even though the service may not 1360 be supported at every hop. Such local arrangements would only 1361 require simple registered/not-registered packet classification, 1362 rather than the managed, application-specific traffic policing 1363 against customer-specific traffic contracts that Diffserv uses. 1365 8.2. 'Latency Friendliness' 1367 Like the Classic service, the L4S service relies on self-constraint - 1368 limiting rate in response to congestion. In addition, the L4S 1369 service requires self-constraint in terms of limiting latency 1370 (burstiness). It is hoped that self-interest and guidance on dynamic 1371 behaviour (especially flow start-up, which might need to be 1372 standardized) will be sufficient to prevent transports from sending 1373 excessive bursts of L4S traffic, given that the application's own latency 1374 will suffer most from such behaviour. 1376 Whether burst policing becomes necessary remains to be seen. Without 1377 it, there will be potential for attacks on the low latency of the L4S 1378 service. 1380 If needed, various arrangements could be used to address this 1381 concern: 1383 Local bottleneck queue protection: A per-flow (5-tuple) queue 1384 protection function [I-D.briscoe-docsis-q-protection] has been 1385 developed for the low latency queue in DOCSIS, which has adopted 1386 the DualQ L4S architecture. It protects the low latency service 1387 from any queue-building flows that accidentally or maliciously 1388 classify themselves into the low latency queue. It is designed to 1389 score flows based solely on their contribution to queuing (not 1390 flow rate in itself). Then, if the shared low latency queue is at 1391 risk of exceeding a threshold, the function redirects enough 1392 packets of the highest scoring flow(s) into the Classic queue to 1393 preserve low latency. 1395 Distributed traffic scrubbing: Rather than policing locally at each 1396 bottleneck, it may only be necessary to address problems 1397 reactively, e.g. by punitively targeting any deployments of new bursty 1398 malware, in a similar way to how traffic from flooding attack 1399 sources is rerouted via scrubbing facilities. 1401 Local bottleneck per-flow scheduling: Per-flow scheduling should 1402 inherently isolate non-bursty flows from bursty ones (see Section 5.2 1403 for discussion of the merits of per-flow scheduling relative to 1404 per-flow policing).
1406 Distributed access subnet queue protection: Per-flow queue 1407 protection could be arranged for a queue structure distributed 1408 across a subnet inter-communicating using lower layer control 1409 messages (see Section 2.1.4 of [QDyn]). For instance, in a radio 1410 access network, user equipment already sends regular buffer status 1411 reports to a radio network controller, which could use this 1412 information to remotely police individual flows. 1414 Distributed Congestion Exposure to Ingress Policers: The Congestion 1415 Exposure (ConEx) architecture [RFC7713] uses egress audit to 1416 motivate senders to truthfully signal path congestion in-band, 1417 where it can then be used by ingress policers. An edge-to-edge variant 1418 of this architecture is also possible. 1420 Distributed Domain-edge traffic conditioning: An architecture 1421 similar to Diffserv [RFC2475] may be preferred, where traffic is 1422 proactively conditioned on entry to a domain, rather than 1423 reactively policed only if it leads to queuing once combined with 1424 other traffic at a bottleneck. 1426 Distributed core network queue protection: The policing function 1427 could be divided between per-flow mechanisms at the network 1428 ingress that characterize the burstiness of each flow into a 1429 signal carried with the traffic, and per-class mechanisms at 1430 bottlenecks that act on these signals if queuing actually occurs 1431 once the traffic converges. This would be somewhat similar to the 1432 idea behind core stateless fair queuing, which is in turn similar 1433 to [Nadas20]. 1435 None of these possible queue protection capabilities is considered a 1436 necessary part of the L4S architecture, which works without them (in 1437 a similar way to how the Internet works without per-flow rate 1438 policing). Indeed, under normal circumstances, latency policers 1439 would not intervene, and if operators found they were not necessary 1440 they could disable them. Part of the L4S experiment will be to see 1441 whether such a function is necessary, and which arrangements are most 1442 appropriate to the size of the problem. 1444 8.3. Interaction between Rate Policing and L4S 1446 As mentioned in Section 5.2, L4S should remove the need for low 1447 latency Diffserv classes. However, those Diffserv classes that give 1448 certain applications or users priority over capacity would still be 1449 applicable in certain scenarios (e.g. corporate networks). Then, 1450 within such Diffserv classes, L4S would often be applicable to give 1451 traffic low latency and low loss as well. Within such a Diffserv 1452 class, the bandwidth available to a user or application is often 1453 limited by a rate policer. Similarly, in the default Diffserv class, 1454 rate policers are used to partition shared capacity. 1456 A classic rate policer drops any packets exceeding a set rate, 1457 usually also giving a burst allowance (variants exist where the 1458 policer re-marks non-compliant traffic to a discard-eligible Diffserv 1459 codepoint, so that it may be dropped elsewhere during contention). 1460 Whenever L4S traffic encounters one of these rate policers, it will 1461 experience drops and the source will have to fall back to a Classic 1462 congestion control, thus losing the benefits of L4S (Section 6.4.3). 1463 So, in networks that already use rate policers and plan to deploy 1464 L4S, it will be preferable to redesign these rate policers to be more 1465 friendly to the L4S service, for instance along the lines sketched below.
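As a purely illustrative, non-normative sketch of what such a redesign might look like, the following Python fragment shows a single-rate token-bucket policer that starts CE-marking ECN-capable packets once its bucket has drained below a threshold, and only drops once the bucket is exhausted. The class name, parameters and marking threshold are assumptions made up for this example; they are not taken from [RFC2697], [RFC2698] or any other specification.

   import time

   class EcnAwarePolicerSketch:
       """Non-normative sketch of a token-bucket policer that CE-marks
       ECN-capable packets before it starts dropping."""

       def __init__(self, rate_bps, burst_bytes, mark_fraction=0.25):
           self.rate = rate_bps / 8.0           # bytes per second
           self.burst = burst_bytes             # bucket depth (bytes)
           self.tokens = burst_bytes
           self.mark_threshold = burst_bytes * mark_fraction
           self.last = time.monotonic()

       def admit(self, pkt_len, ecn_capable):
           """Return 'forward', 'mark' (set CE) or 'drop' for one packet."""
           now = time.monotonic()
           self.tokens = min(self.burst,
                             self.tokens + (now - self.last) * self.rate)
           self.last = now
           if self.tokens >= pkt_len:
               self.tokens -= pkt_len
               if ecn_capable and self.tokens < self.mark_threshold:
                   # Compliant but close to the limit: signal with CE so
                   # an L4S sender can slow down before any drop occurs.
                   return 'mark'
               return 'forward'
           # Bucket exhausted: behave like a classic policer and drop.
           return 'drop'

An L4S sender would then see CE marks and reduce its rate while still within its allowance, rather than first discovering the policer through a burst of drops and having to fall back to Classic behaviour.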
1467 L4S-friendly rate policing is currently a research area (note that 1468 this is not the same as latency policing). It might be achieved by 1469 introducing ECN marking at a threshold set just under the policed rate, 1470 or just under the burst allowance at which 1471 drop is introduced. This could be applied to various types of rate 1472 policer, e.g. [RFC2697], [RFC2698] or the 'local' (non-ConEx) variant 1473 of the ConEx congestion policer [I-D.briscoe-conex-policing]. It 1474 might also be possible to design scalable congestion controls to 1475 respond less catastrophically to loss that has not been preceded by a 1476 period of increasing delay. 1478 The design of L4S-friendly rate policers will require a separate 1479 dedicated document. For further discussion of the interaction 1480 between L4S and Diffserv, see [I-D.briscoe-tsvwg-l4s-diffserv]. 1482 8.4. ECN Integrity 1484 Receiving hosts can fool a sender into downloading faster by 1485 suppressing feedback of ECN marks (or of losses, if retransmissions 1486 are not necessary or available otherwise). Various ways to protect 1487 transport feedback integrity have been developed. For instance: 1489 * The sender can test the integrity of the receiver's feedback by 1490 occasionally setting the IP-ECN field to the congestion 1491 experienced (CE) codepoint, which is normally only set by a 1492 congested link. Then the sender can test whether the receiver's 1493 feedback faithfully reports what it expects (see the 2nd paragraph of 1494 Section 20.2 of [RFC3168]). 1496 * A network can enforce a congestion response to its ECN markings 1497 (or packet losses) by auditing congestion exposure 1498 (ConEx) [RFC7713]. 1500 * Transport layer authentication such as the TCP authentication 1501 option (TCP-AO [RFC5925]) or QUIC's use of TLS [RFC9001] can 1502 detect any tampering with congestion feedback. 1504 * The ECN Nonce [RFC3540] was proposed to detect tampering with 1505 congestion feedback, but it has been reclassified as 1506 historic [RFC8311]. 1508 Appendix C.1 of [I-D.ietf-tsvwg-ecn-l4s-id] gives more details of 1509 these techniques, including their applicability and pros and cons. 1511 8.5. Privacy Considerations 1513 As discussed in Section 5.2, the L4S architecture does not preclude 1514 approaches that inspect end-to-end transport layer identifiers. For 1515 instance, it is simple to add L4S support to FQ-CoDel, which 1516 classifies by application flow ID in the network. However, the main 1517 innovation of L4S is the DualQ AQM framework that does not need to 1518 inspect any deeper than the outermost IP header, because the L4S 1519 identifier is in the IP-ECN field. 1521 Thus, the L4S architecture enables very low queuing delay without 1522 _requiring_ inspection of information above the IP layer. This means 1523 that users who want to encrypt application flow identifiers, e.g. in 1524 IPSec or other encrypted VPN tunnels, do not have to sacrifice low 1525 delay [RFC8404]. 1527 Because L4S can provide low delay for a broad set of applications 1528 that choose to use it, there is no need for individual applications 1529 or classes within that broad set to be distinguishable in any way 1530 while traversing networks. This removes much of the ability to 1531 correlate between the delay requirements of traffic and other 1532 identifying features [RFC6973].
There may be some types of traffic 1533 that prefer not to use L4S, but the coarse binary categorization of 1534 traffic reveals very little that could be exploited to compromise 1535 privacy. 1537 9. Acknowledgements 1539 Thanks to Richard Scheffenegger, Wes Eddy, Karen Nielsen, David 1540 Black, Jake Holland, Vidhi Goel, Ermin Sakic, Praveen Balasubramanian 1541 and Gorry Fairhurst for their useful review comments. 1543 Bob Briscoe and Koen De Schepper were part-funded by the European 1544 Community under its Seventh Framework Programme through the Reducing 1545 Internet Transport Latency (RITE) project (ICT-317700). Bob Briscoe 1546 was also part-funded by the Research Council of Norway through the 1547 TimeIn project, partly by CableLabs and partly by the Comcast 1548 Innovation Fund. The views expressed here are solely those of the 1549 authors. 1551 10. Informative References 1553 [AFCD] Xue, L., Kumar, S., Cui, C., Kondikoppa, P., Chiu, C-H., 1554 and S-J. Park, "Towards fair and low latency next 1555 generation high speed networks: AFCD queuing", Journal of 1556 Network and Computer Applications 70:183--193, July 2016, 1557 . 1559 [BBRv2] Cardwell, N., "TCP BBR v2 Alpha/Preview Release", github 1560 repository; Linux congestion control module, 1561 . 1563 [BDPdata] Briscoe, B., "PI2 Parameters", Technical Report TR-BB- 1564 2021-001 arXiv:2107.01003 [cs.NI], July 2021, 1565 . 1567 [BufferSize] 1568 Appenzeller, G., Keslassy, I., and N. McKeown, "Sizing 1569 Router Buffers", In Proc. SIGCOMM'04 34(4):281--292, 1570 September 2004, . 1572 [COBALT] Palmei, J., Gupta, S., Imputato, P., Morton, J., 1573 Tahiliani, M. P., Avallone, S., and D. Täht, "Design and 1574 Evaluation of COBALT Queue Discipline", In Proc. IEEE 1575 Int'l Symp. Local and Metropolitan Area Networks 1576 (LANMAN'19) 2019:1-6, July 2019, 1577 . 1579 [DCttH19] De Schepper, K., Bondarenko, O., Tilmans, O., and B. 1580 Briscoe, "`Data Centre to the Home': Ultra-Low Latency for 1581 All", Updated RITE project Technical Report , July 2019, 1582 . 1584 [DOCSIS3.1] 1585 CableLabs, "MAC and Upper Layer Protocols Interface 1586 (MULPI) Specification, CM-SP-MULPIv3.1", Data-Over-Cable 1587 Service Interface Specifications DOCSIS® 3.1 Version i17 1588 or later, 21 January 2019, . 1591 [DOCSIS3AQM] 1592 White, G., "Active Queue Management Algorithms for DOCSIS 1593 3.0; A Simulation Study of CoDel, SFQ-CoDel and PIE in 1594 DOCSIS 3.0 Networks", CableLabs Technical Report , April 1595 2013, <{http://www.cablelabs.com/wp- 1596 content/uploads/2013/11/ 1597 Active_Queue_Management_Algorithms_DOCSIS_3_0.pdf>. 1599 [DualPI2Linux] 1600 Albisser, O., De Schepper, K., Briscoe, B., Tilmans, O., 1601 and H. Steen, "DUALPI2 - Low Latency, Low Loss and 1602 Scalable (L4S) AQM", Proc. Linux Netdev 0x13 , March 2019, 1603 . 1606 [FQ_CoDel_Thresh] 1607 Høiland-Jørgensen, T., "fq_codel: generalise ce_threshold 1608 marking for subset of traffic", Linux Patch Commit ID: 1609 dfcb63ce1de6b10b, 20 October 2021, 1610 . 1613 [Hohlfeld14] 1614 Hohlfeld, O., Pujol, E., Ciucu, F., Feldmann, A., and P. 1615 Barford, "A QoE Perspective on Sizing Network Buffers", 1616 Proc. ACM Internet Measurement Conf (IMC'14) hmm, November 1617 2014, . 1619 [I-D.briscoe-conex-policing] 1620 Briscoe, B., "Network Performance Isolation using 1621 Congestion Policing", Work in Progress, Internet-Draft, 1622 draft-briscoe-conex-policing-01, 14 February 2014, 1623 . 1626 [I-D.briscoe-docsis-q-protection] 1627 Briscoe, B. and G. 
White, "Queue Protection to Preserve 1628 Low Latency", Work in Progress, Internet-Draft, draft- 1629 briscoe-docsis-q-protection-00, 8 July 2019, 1630 . 1633 [I-D.briscoe-iccrg-prague-congestion-control] 1634 Schepper, K. D., Tilmans, O., and B. Briscoe, "Prague 1635 Congestion Control", Work in Progress, Internet-Draft, 1636 draft-briscoe-iccrg-prague-congestion-control-00, 9 March 1637 2021, . 1640 [I-D.briscoe-tsvwg-l4s-diffserv] 1641 Briscoe, B., "Interactions between Low Latency, Low Loss, 1642 Scalable Throughput (L4S) and Differentiated Services", 1643 Work in Progress, Internet-Draft, draft-briscoe-tsvwg-l4s- 1644 diffserv-02, 4 November 2018, 1645 . 1648 [I-D.cardwell-iccrg-bbr-congestion-control] 1649 Cardwell, N., Cheng, Y., Yeganeh, S. H., and V. Jacobson, 1650 "BBR Congestion Control", Work in Progress, Internet- 1651 Draft, draft-cardwell-iccrg-bbr-congestion-control-00, 3 1652 July 2017, . 1655 [I-D.ietf-tcpm-accurate-ecn] 1656 Briscoe, B., Kühlewind, M., and R. Scheffenegger, "More 1657 Accurate ECN Feedback in TCP", Work in Progress, Internet- 1658 Draft, draft-ietf-tcpm-accurate-ecn-15, 12 July 2021, 1659 . 1662 [I-D.ietf-tcpm-generalized-ecn] 1663 Bagnulo, M. and B. Briscoe, "ECN++: Adding Explicit 1664 Congestion Notification (ECN) to TCP Control Packets", 1665 Work in Progress, Internet-Draft, draft-ietf-tcpm- 1666 generalized-ecn-08, 2 August 2021, 1667 . 1670 [I-D.ietf-tsvwg-aqm-dualq-coupled] 1671 Schepper, K. D., Briscoe, B., and G. White, "DualQ Coupled 1672 AQMs for Low Latency, Low Loss and Scalable Throughput 1673 (L4S)", Work in Progress, Internet-Draft, draft-ietf- 1674 tsvwg-aqm-dualq-coupled-18, 25 October 2021, 1675 . 1678 [I-D.ietf-tsvwg-ecn-encap-guidelines] 1679 Briscoe, B. and J. Kaippallimalil, "Guidelines for Adding 1680 Congestion Notification to Protocols that Encapsulate IP", 1681 Work in Progress, Internet-Draft, draft-ietf-tsvwg-ecn- 1682 encap-guidelines-16, 25 May 2021, 1683 . 1686 [I-D.ietf-tsvwg-ecn-l4s-id] 1687 Schepper, K. D. and B. Briscoe, "Explicit Congestion 1688 Notification (ECN) Protocol for Very Low Queuing Delay 1689 (L4S)", Work in Progress, Internet-Draft, draft-ietf- 1690 tsvwg-ecn-l4s-id-19, 26 July 2021, 1691 . 1694 [I-D.ietf-tsvwg-nqb] 1695 White, G. and T. Fossati, "A Non-Queue-Building Per-Hop 1696 Behavior (NQB PHB) for Differentiated Services", Work in 1697 Progress, Internet-Draft, draft-ietf-tsvwg-nqb-07, 28 July 1698 2021, . 1701 [I-D.ietf-tsvwg-rfc6040update-shim] 1702 Briscoe, B., "Propagating Explicit Congestion Notification 1703 Across IP Tunnel Headers Separated by a Shim", Work in 1704 Progress, Internet-Draft, draft-ietf-tsvwg-rfc6040update- 1705 shim-14, 25 May 2021, 1706 . 1709 [I-D.morton-tsvwg-codel-approx-fair] 1710 Morton, J. and P. G. Heist, "Controlled Delay Approximate 1711 Fairness AQM", Work in Progress, Internet-Draft, draft- 1712 morton-tsvwg-codel-approx-fair-01, 9 March 2020, 1713 . 1716 [I-D.sridharan-tcpm-ctcp] 1717 Sridharan, M., Tan, K., Bansal, D., and D. Thaler, 1718 "Compound TCP: A New TCP Congestion Control for High-Speed 1719 and Long Distance Networks", Work in Progress, Internet- 1720 Draft, draft-sridharan-tcpm-ctcp-02, 11 November 2008, 1721 . 1724 [I-D.stewart-tsvwg-sctpecn] 1725 Stewart, R. R., Tuexen, M., and X. Dong, "ECN for Stream 1726 Control Transmission Protocol (SCTP)", Work in Progress, 1727 Internet-Draft, draft-stewart-tsvwg-sctpecn-05, 15 January 1728 2014, . 1731 [L4Sdemo16] 1732 Bondarenko, O., De Schepper, K., Tsang, I., and B. 
1733 Briscoe, "Ultra-Low Delay for All: Live Experience, Live 1734 Analysis", Proc. MMSYS'16 pp33:1--33:4, May 2016, 1735 . 1739 [LEDBAT_AQM] 1740 Al-Saadi, R., Armitage, G., and J. But, "Characterising 1741 LEDBAT Performance Through Bottlenecks Using PIE, FQ-CoDel 1742 and FQ-PIE Active Queue Management", Proc. IEEE 42nd 1743 Conference on Local Computer Networks (LCN) 278--285, 1744 2017, . 1746 [Mathis09] Mathis, M., "Relentless Congestion Control", PFLDNeT'09 , 1747 May 2009, . 1752 [McIlroy78] 1753 McIlroy, M.D., Pinson, E. N., and B. A. Tague, "UNIX Time- 1754 Sharing System: Foreword", The Bell System Technical 1755 Journal 57:6(1902--1903), July 1978, 1756 . 1758 [Nadas20] Nádas, S., Gombos, G., Fejes, F., and S. Laki, "A 1759 Congestion Control Independent L4S Scheduler", Proc. 1760 Applied Networking Research Workshop (ANRW '20) 45--51, 1761 July 2020, . 1763 [NewCC_Proc] 1764 Eggert, L., "Experimental Specification of New Congestion 1765 Control Algorithms", IETF Operational Note ion-tsv-alt-cc, 1766 July 2007, . 1769 [PragueLinux] 1770 Briscoe, B., De Schepper, K., Albisser, O., Misund, J., 1771 Tilmans, O., Kühlewind, M., and A.S. Ahmed, "Implementing 1772 the `TCP Prague' Requirements for Low Latency Low Loss 1773 Scalable Throughput (L4S)", Proc. Linux Netdev 0x13 , 1774 March 2019, . 1777 [QDyn] Briscoe, B., "Rapid Signalling of Queue Dynamics", 1778 bobbriscoe.net Technical Report TR-BB-2017-001; 1779 arXiv:1904.07044 [cs.NI], September 2017, 1780 . 1782 [RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., 1783 and W. Weiss, "An Architecture for Differentiated 1784 Services", RFC 2475, DOI 10.17487/RFC2475, December 1998, 1785 . 1787 [RFC2697] Heinanen, J. and R. Guerin, "A Single Rate Three Color 1788 Marker", RFC 2697, DOI 10.17487/RFC2697, September 1999, 1789 . 1791 [RFC2698] Heinanen, J. and R. Guerin, "A Two Rate Three Color 1792 Marker", RFC 2698, DOI 10.17487/RFC2698, September 1999, 1793 . 1795 [RFC2884] Hadi Salim, J. and U. Ahmed, "Performance Evaluation of 1796 Explicit Congestion Notification (ECN) in IP Networks", 1797 RFC 2884, DOI 10.17487/RFC2884, July 2000, 1798 . 1800 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition 1801 of Explicit Congestion Notification (ECN) to IP", 1802 RFC 3168, DOI 10.17487/RFC3168, September 2001, 1803 . 1805 [RFC3246] Davie, B., Charny, A., Bennet, J.C.R., Benson, K., Le 1806 Boudec, J.Y., Courtney, W., Davari, S., Firoiu, V., and D. 1807 Stiliadis, "An Expedited Forwarding PHB (Per-Hop 1808 Behavior)", RFC 3246, DOI 10.17487/RFC3246, March 2002, 1809 . 1811 [RFC3540] Spring, N., Wetherall, D., and D. Ely, "Robust Explicit 1812 Congestion Notification (ECN) Signaling with Nonces", 1813 RFC 3540, DOI 10.17487/RFC3540, June 2003, 1814 . 1816 [RFC3649] Floyd, S., "HighSpeed TCP for Large Congestion Windows", 1817 RFC 3649, DOI 10.17487/RFC3649, December 2003, 1818 . 1820 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1821 Congestion Control Protocol (DCCP)", RFC 4340, 1822 DOI 10.17487/RFC4340, March 2006, 1823 . 1825 [RFC4774] Floyd, S., "Specifying Alternate Semantics for the 1826 Explicit Congestion Notification (ECN) Field", BCP 124, 1827 RFC 4774, DOI 10.17487/RFC4774, November 2006, 1828 . 1830 [RFC4960] Stewart, R., Ed., "Stream Control Transmission Protocol", 1831 RFC 4960, DOI 10.17487/RFC4960, September 2007, 1832 . 1834 [RFC5033] Floyd, S. and M. Allman, "Specifying New Congestion 1835 Control Algorithms", BCP 133, RFC 5033, 1836 DOI 10.17487/RFC5033, August 2007, 1837 . 
1839 [RFC5348] Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP 1840 Friendly Rate Control (TFRC): Protocol Specification", 1841 RFC 5348, DOI 10.17487/RFC5348, September 2008, 1842 . 1844 [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion 1845 Control", RFC 5681, DOI 10.17487/RFC5681, September 2009, 1846 . 1848 [RFC5925] Touch, J., Mankin, A., and R. Bonica, "The TCP 1849 Authentication Option", RFC 5925, DOI 10.17487/RFC5925, 1850 June 2010, . 1852 [RFC6040] Briscoe, B., "Tunnelling of Explicit Congestion 1853 Notification", RFC 6040, DOI 10.17487/RFC6040, November 1854 2010, . 1856 [RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P., 1857 and K. Carlberg, "Explicit Congestion Notification (ECN) 1858 for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August 1859 2012, . 1861 [RFC6817] Shalunov, S., Hazel, G., Iyengar, J., and M. Kuehlewind, 1862 "Low Extra Delay Background Transport (LEDBAT)", RFC 6817, 1863 DOI 10.17487/RFC6817, December 2012, 1864 . 1866 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 1867 Morris, J., Hansen, M., and R. Smith, "Privacy 1868 Considerations for Internet Protocols", RFC 6973, 1869 DOI 10.17487/RFC6973, July 2013, 1870 . 1872 [RFC7540] Belshe, M., Peon, R., and M. Thomson, Ed., "Hypertext 1873 Transfer Protocol Version 2 (HTTP/2)", RFC 7540, 1874 DOI 10.17487/RFC7540, May 2015, 1875 . 1877 [RFC7560] Kuehlewind, M., Ed., Scheffenegger, R., and B. Briscoe, 1878 "Problem Statement and Requirements for Increased Accuracy 1879 in Explicit Congestion Notification (ECN) Feedback", 1880 RFC 7560, DOI 10.17487/RFC7560, August 2015, 1881 . 1883 [RFC7567] Baker, F., Ed. and G. Fairhurst, Ed., "IETF 1884 Recommendations Regarding Active Queue Management", 1885 BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015, 1886 . 1888 [RFC7665] Halpern, J., Ed. and C. Pignataro, Ed., "Service Function 1889 Chaining (SFC) Architecture", RFC 7665, 1890 DOI 10.17487/RFC7665, October 2015, 1891 . 1893 [RFC7713] Mathis, M. and B. Briscoe, "Congestion Exposure (ConEx) 1894 Concepts, Abstract Mechanism, and Requirements", RFC 7713, 1895 DOI 10.17487/RFC7713, December 2015, 1896 . 1898 [RFC8033] Pan, R., Natarajan, P., Baker, F., and G. White, 1899 "Proportional Integral Controller Enhanced (PIE): A 1900 Lightweight Control Scheme to Address the Bufferbloat 1901 Problem", RFC 8033, DOI 10.17487/RFC8033, February 2017, 1902 . 1904 [RFC8034] White, G. and R. Pan, "Active Queue Management (AQM) Based 1905 on Proportional Integral Controller Enhanced PIE) for 1906 Data-Over-Cable Service Interface Specifications (DOCSIS) 1907 Cable Modems", RFC 8034, DOI 10.17487/RFC8034, February 1908 2017, . 1910 [RFC8170] Thaler, D., Ed., "Planning for Protocol Adoption and 1911 Subsequent Transitions", RFC 8170, DOI 10.17487/RFC8170, 1912 May 2017, . 1914 [RFC8257] Bensley, S., Thaler, D., Balasubramanian, P., Eggert, L., 1915 and G. Judd, "Data Center TCP (DCTCP): TCP Congestion 1916 Control for Data Centers", RFC 8257, DOI 10.17487/RFC8257, 1917 October 2017, . 1919 [RFC8290] Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys, 1920 J., and E. Dumazet, "The Flow Queue CoDel Packet Scheduler 1921 and Active Queue Management Algorithm", RFC 8290, 1922 DOI 10.17487/RFC8290, January 2018, 1923 . 1925 [RFC8298] Johansson, I. and Z. Sarker, "Self-Clocked Rate Adaptation 1926 for Multimedia", RFC 8298, DOI 10.17487/RFC8298, December 1927 2017, . 
1929 [RFC8311] Black, D., "Relaxing Restrictions on Explicit Congestion 1930 Notification (ECN) Experimentation", RFC 8311, 1931 DOI 10.17487/RFC8311, January 2018, 1932 . 1934 [RFC8312] Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and 1935 R. Scheffenegger, "CUBIC for Fast Long-Distance Networks", 1936 RFC 8312, DOI 10.17487/RFC8312, February 2018, 1937 . 1939 [RFC8404] Moriarty, K., Ed. and A. Morton, Ed., "Effects of 1940 Pervasive Encryption on Operators", RFC 8404, 1941 DOI 10.17487/RFC8404, July 2018, 1942 . 1944 [RFC8511] Khademi, N., Welzl, M., Armitage, G., and G. Fairhurst, 1945 "TCP Alternative Backoff with ECN (ABE)", RFC 8511, 1946 DOI 10.17487/RFC8511, December 2018, 1947 . 1949 [RFC8888] Sarker, Z., Perkins, C., Singh, V., and M. Ramalho, "RTP 1950 Control Protocol (RTCP) Feedback for Congestion Control", 1951 RFC 8888, DOI 10.17487/RFC8888, January 2021, 1952 . 1954 [RFC9000] Iyengar, J., Ed. and M. Thomson, Ed., "QUIC: A UDP-Based 1955 Multiplexed and Secure Transport", RFC 9000, 1956 DOI 10.17487/RFC9000, May 2021, 1957 . 1959 [RFC9001] Thomson, M., Ed. and S. Turner, Ed., "Using TLS to Secure 1960 QUIC", RFC 9001, DOI 10.17487/RFC9001, May 2021, 1961 . 1963 [SCReAM] Johansson, I., "SCReAM", github repository; , 1964 . 1967 [TCP-CA] Jacobson, V. and M.J. Karels, "Congestion Avoidance and 1968 Control", Laurence Berkeley Labs Technical Report , 1969 November 1988, . 1971 [TCP-sub-mss-w] 1972 Briscoe, B. and K. De Schepper, "Scaling TCP's Congestion 1973 Window for Small Round Trip Times", BT Technical Report 1974 TR-TUB8-2015-002, May 2015, 1975 . 1978 [UnorderedLTE] 1979 Austrheim, M.V., "Implementing immediate forwarding for 4G 1980 in a network simulator", Masters Thesis, Uni Oslo , June 1981 2019. 1983 Appendix A. Standardization items 1985 The following table includes all the items that will need to be 1986 standardized to provide a full L4S architecture. 1988 The table is too wide for the ASCII draft format, so it has been 1989 split into two, with a common column of row index numbers on the 1990 left. 1992 The columns in the second part of the table have the following 1993 meanings: 1995 WG: The IETF WG most relevant to this requirement. The "tcpm/iccrg" 1996 combination refers to the procedure typically used for congestion 1997 control changes, where tcpm owns the approval decision, but uses 1998 the iccrg for expert review [NewCC_Proc]; 2000 TCP: Applicable to all forms of TCP congestion control; 2002 DCTCP: Applicable to Data Center TCP as currently used (in 2003 controlled environments); 2005 DCTCP bis: Applicable to any future Data Center TCP congestion 2006 control intended for controlled environments; 2008 XXX Prague: Applicable to a Scalable variant of XXX (TCP/SCTP/RMCAT) 2009 congestion control. 2011 +=====+========================+====================================+ 2012 | Req | Requirement | Reference | 2013 | # | | | 2014 +=====+========================+====================================+ 2015 | 0 | ARCHITECTURE | | 2016 +-----+------------------------+------------------------------------+ 2017 | 1 | L4S IDENTIFIER | [I-D.ietf-tsvwg-ecn-l4s-id] S.3 | 2018 +-----+------------------------+------------------------------------+ 2019 | 2 | DUAL QUEUE AQM | [I-D.ietf-tsvwg-aqm-dualq-coupled] | 2020 +-----+------------------------+------------------------------------+ 2021 | 3 | Suitable ECN | [I-D.ietf-tcpm-accurate-ecn] | 2022 | | Feedback | S.4.2, | 2023 | | | [I-D.stewart-tsvwg-sctpecn]. 
| 2024 +-----+------------------------+------------------------------------+ 2025 +-----+------------------------+------------------------------------+ 2026 | | SCALABLE TRANSPORT - | | 2027 | | SAFETY ADDITIONS | | 2028 +-----+------------------------+------------------------------------+ 2029 | 4-1 | Fall back to Reno/ | [I-D.ietf-tsvwg-ecn-l4s-id] S.4.3, | 2030 | | Cubic on loss | [RFC8257] | 2031 +-----+------------------------+------------------------------------+ 2032 | 4-2 | Fall back to Reno/ | [I-D.ietf-tsvwg-ecn-l4s-id] S.4.3 | 2033 | | Cubic if classic ECN | | 2034 | | bottleneck detected | | 2035 +-----+------------------------+------------------------------------+ 2036 +-----+------------------------+------------------------------------+ 2037 | 4-3 | Reduce RTT- | [I-D.ietf-tsvwg-ecn-l4s-id] S.4.3 | 2038 | | dependence | | 2039 +-----+------------------------+------------------------------------+ 2040 +-----+------------------------+------------------------------------+ 2041 | 4-4 | Scaling TCP's | [I-D.ietf-tsvwg-ecn-l4s-id] S.4.3, | 2042 | | Congestion Window | [TCP-sub-mss-w] | 2043 | | for Small Round Trip | | 2044 | | Times | | 2045 +-----+------------------------+------------------------------------+ 2046 | | SCALABLE TRANSPORT - | | 2047 | | PERFORMANCE | | 2048 | | ENHANCEMENTS | | 2049 +-----+------------------------+------------------------------------+ 2050 | 5-1 | Setting ECT in TCP | [I-D.ietf-tcpm-generalized-ecn] | 2051 | | Control Packets and | | 2052 | | Retransmissions | | 2053 +-----+------------------------+------------------------------------+ 2054 | 5-2 | Faster-than-additive | [I-D.ietf-tsvwg-ecn-l4s-id] (Appx | 2055 | | increase | A.2.2) | 2056 +-----+------------------------+------------------------------------+ 2057 | 5-3 | Faster Convergence | [I-D.ietf-tsvwg-ecn-l4s-id] (Appx | 2058 | | at Flow Start | A.2.2) | 2059 +-----+------------------------+------------------------------------+ 2061 Table 1 2063 +=====+========+=====+=======+===========+========+========+========+ 2064 | # | WG | TCP | DCTCP | DCTCP-bis | TCP | SCTP | RMCAT | 2065 | | | | | | Prague | Prague | Prague | 2066 +=====+========+=====+=======+===========+========+========+========+ 2067 | 0 | tsvwg | Y | Y | Y | Y | Y | Y | 2068 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2069 | 1 | tsvwg | | | Y | Y | Y | Y | 2070 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2071 | 2 | tsvwg | n/a | n/a | n/a | n/a | n/a | n/a | 2072 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2073 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2074 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2075 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2076 | 3 | tcpm | Y | Y | Y | Y | n/a | n/a | 2077 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2078 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2079 | 4-1 | tcpm | | Y | Y | Y | Y | Y | 2080 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2081 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2082 | 4-2 | tcpm/ | | | | Y | Y | ? | 2083 | | iccrg? 
| | | | | | | 2084 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2085 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2086 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2087 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2088 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2089 | 4-3 | tcpm/ | | | Y | Y | Y | ? | 2090 | | iccrg? | | | | | | | 2091 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2092 | 4-4 | tcpm | Y | Y | Y | Y | Y | ? | 2093 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2094 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2095 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2096 | 5-1 | tcpm | Y | Y | Y | Y | n/a | n/a | 2097 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2098 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2099 | 5-2 | tcpm/ | | | Y | Y | Y | ? | 2100 | | iccrg? | | | | | | | 2101 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2102 | 5-3 | tcpm/ | | | Y | Y | Y | ? | 2103 | | iccrg? | | | | | | | 2104 +-----+--------+-----+-------+-----------+--------+--------+--------+ 2106 Table 2 2108 Authors' Addresses 2109 Bob Briscoe (editor) 2110 Independent 2111 United Kingdom 2113 Email: ietf@bobbriscoe.net 2114 URI: http://bobbriscoe.net/ 2116 Koen De Schepper 2117 Nokia Bell Labs 2118 Antwerp 2119 Belgium 2121 Email: koen.de_schepper@nokia.com 2122 URI: https://www.bell-labs.com/usr/koen.de_schepper 2124 Marcelo Bagnulo 2125 Universidad Carlos III de Madrid 2126 Av. Universidad 30 2127 Leganes, Madrid 28911 2128 Spain 2130 Phone: 34 91 6249500 2131 Email: marcelo@it.uc3m.es 2132 URI: http://www.it.uc3m.es 2134 Greg White 2135 CableLabs 2136 United States of America 2138 Email: G.White@CableLabs.com