Transport Services (tsv) K. De Schepper
Internet-Draft Nokia Bell Labs
Intended status: Experimental B. Briscoe, Ed.
Expires: 27 June 2022 Independent
24 December 2021

Explicit Congestion Notification (ECN) Protocol for Very Low Queuing
Delay (L4S)
draft-ietf-tsvwg-ecn-l4s-id-23

Abstract

This specification defines the protocol to be used for a new network
service called low latency, low loss and scalable throughput (L4S).
16 L4S uses an Explicit Congestion Notification (ECN) scheme at the IP 17 layer that is similar to the original (or 'Classic') ECN approach, 18 except as specified within. L4S uses 'scalable' congestion control, 19 which induces much more frequent control signals from the network and 20 it responds to them with much more fine-grained adjustments, so that 21 very low (typically sub-millisecond on average) and consistently low 22 queuing delay becomes possible for L4S traffic without compromising 23 link utilization. Thus even capacity-seeking (TCP-like) traffic can 24 have high bandwidth and very low delay at the same time, even during 25 periods of high traffic load. 27 The L4S identifier defined in this document distinguishes L4S from 28 'Classic' (e.g. TCP-Reno-friendly) traffic. It gives an incremental 29 migration path so that suitably modified network bottlenecks can 30 distinguish and isolate existing traffic that still follows the 31 Classic behaviour, to prevent it degrading the low queuing delay and 32 low loss of L4S traffic. This specification defines the rules that 33 L4S transports and network elements need to follow with the intention 34 that L4S flows neither harm each other's performance nor that of 35 Classic traffic. Examples of new active queue management (AQM) 36 marking algorithms and examples of new transports (whether TCP-like 37 or real-time) are specified separately. 39 Status of This Memo 41 This Internet-Draft is submitted in full conformance with the 42 provisions of BCP 78 and BCP 79. 44 Internet-Drafts are working documents of the Internet Engineering 45 Task Force (IETF). Note that other groups may also distribute 46 working documents as Internet-Drafts. The list of current Internet- 47 Drafts is at https://datatracker.ietf.org/drafts/current/. 49 Internet-Drafts are draft documents valid for a maximum of six months 50 and may be updated, replaced, or obsoleted by other documents at any 51 time. It is inappropriate to use Internet-Drafts as reference 52 material or to cite them other than as "work in progress." 54 This Internet-Draft will expire on 27 June 2022. 56 Copyright Notice 58 Copyright (c) 2021 IETF Trust and the persons identified as the 59 document authors. All rights reserved. 61 This document is subject to BCP 78 and the IETF Trust's Legal 62 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 63 license-info) in effect on the date of publication of this document. 64 Please review these documents carefully, as they describe your rights 65 and restrictions with respect to this document. Code Components 66 extracted from this document must include Revised BSD License text as 67 described in Section 4.e of the Trust Legal Provisions and are 68 provided without warranty as described in the Revised BSD License. 70 Table of Contents 72 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 73 1.1. Latency, Loss and Scaling Problems . . . . . . . . . . . 5 74 1.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 7 75 1.3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . 9 76 2. Choice of L4S Packet Identifier: Requirements . . . . . . . . 10 77 3. L4S Packet Identification . . . . . . . . . . . . . . . . . . 11 78 4. Transport Layer Behaviour (the 'Prague Requirements') . . . . 11 79 4.1. Codepoint Setting . . . . . . . . . . . . . . . . . . . . 12 80 4.2. Prerequisite Transport Feedback . . . . . . . . . . . . . 12 81 4.3. Prerequisite Congestion Response . . . . . . . . . . . . 13 82 4.3.1. 
Guidance on Congestion Response in the RFC Series . . 16 83 4.4. Filtering or Smoothing of ECN Feedback . . . . . . . . . 19 84 5. Network Node Behaviour . . . . . . . . . . . . . . . . . . . 19 85 5.1. Classification and Re-Marking Behaviour . . . . . . . . . 19 86 5.2. The Strength of L4S CE Marking Relative to Drop . . . . . 21 87 5.3. Exception for L4S Packet Identification by Network Nodes 88 with Transport-Layer Awareness . . . . . . . . . . . . . 22 89 5.4. Interaction of the L4S Identifier with other 90 Identifiers . . . . . . . . . . . . . . . . . . . . . . . 22 91 5.4.1. DualQ Examples of Other Identifiers Complementing L4S 92 Identifiers . . . . . . . . . . . . . . . . . . . . . 22 93 5.4.1.1. Inclusion of Additional Traffic with L4S . . . . 22 94 5.4.1.2. Exclusion of Traffic From L4S Treatment . . . . . 24 95 5.4.1.3. Generalized Combination of L4S and Other 96 Identifiers . . . . . . . . . . . . . . . . . . . . 25 98 5.4.2. Per-Flow Queuing Examples of Other Identifiers 99 Complementing L4S Identifiers . . . . . . . . . . . . 26 100 5.5. Limiting Packet Bursts from Links . . . . . . . . . . . . 27 101 5.5.1. Limiting Packet Bursts from Links Fed by an L4S 102 AQM . . . . . . . . . . . . . . . . . . . . . . . . . 27 103 5.5.2. Limiting Packet Bursts from Links Upstream of an L4S 104 AQM . . . . . . . . . . . . . . . . . . . . . . . . . 28 105 6. Behaviour of Tunnels and Encapsulations . . . . . . . . . . . 28 106 6.1. No Change to ECN Tunnels and Encapsulations in General . 28 107 6.2. VPN Behaviour to Avoid Limitations of Anti-Replay . . . . 29 108 7. L4S Experiments . . . . . . . . . . . . . . . . . . . . . . . 30 109 7.1. Open Questions . . . . . . . . . . . . . . . . . . . . . 30 110 7.2. Open Issues . . . . . . . . . . . . . . . . . . . . . . . 32 111 7.3. Future Potential . . . . . . . . . . . . . . . . . . . . 32 112 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 33 113 9. Security Considerations . . . . . . . . . . . . . . . . . . . 33 114 10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 34 115 11. References . . . . . . . . . . . . . . . . . . . . . . . . . 34 116 11.1. Normative References . . . . . . . . . . . . . . . . . . 34 117 11.2. Informative References . . . . . . . . . . . . . . . . . 35 118 Appendix A. Rationale for the 'Prague L4S Requirements' . . . . 44 119 A.1. Rationale for the Requirements for Scalable Transport 120 Protocols . . . . . . . . . . . . . . . . . . . . . . . . 45 121 A.1.1. Use of L4S Packet Identifier . . . . . . . . . . . . 45 122 A.1.2. Accurate ECN Feedback . . . . . . . . . . . . . . . . 45 123 A.1.3. Capable of Replacement by Classic Congestion 124 Control . . . . . . . . . . . . . . . . . . . . . . . 46 125 A.1.4. Fall back to Classic Congestion Control on Packet 126 Loss . . . . . . . . . . . . . . . . . . . . . . . . 46 127 A.1.5. Coexistence with Classic Congestion Control at Classic 128 ECN bottlenecks . . . . . . . . . . . . . . . . . . . 47 129 A.1.6. Reduce RTT dependence . . . . . . . . . . . . . . . . 50 130 A.1.7. Scaling down to fractional congestion windows . . . . 51 131 A.1.8. Measuring Reordering Tolerance in Time Units . . . . 52 132 A.2. Scalable Transport Protocol Optimizations . . . . . . . . 55 133 A.2.1. Setting ECT in Control Packets and Retransmissions . 55 134 A.2.2. Faster than Additive Increase . . . . . . . . . . . . 56 135 A.2.3. Faster Convergence at Flow Start . . . . . . . . . . 56 136 Appendix B. Compromises in the Choice of L4S Identifier . . . . 57 137 Appendix C. 
Potential Competing Uses for the ECT(1) Codepoint . 62 138 C.1. Integrity of Congestion Feedback . . . . . . . . . . . . 62 139 C.2. Notification of Less Severe Congestion than CE . . . . . 63 140 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 63 142 1. Introduction 144 This specification defines the protocol to be used for a new network 145 service called low latency, low loss and scalable throughput (L4S). 146 L4S uses an Explicit Congestion Notification (ECN) scheme at the IP 147 layer with the same set of codepoint transitions as the original (or 148 'Classic') Explicit Congestion Notification (ECN [RFC3168]). RFC 149 3168 required an ECN mark to be equivalent to a drop, both when 150 applied in the network and when responded to by a transport. Unlike 151 Classic ECN marking, the network applies L4S marking more immediately 152 and more aggressively than drop, and the transport response to each 153 mark is reduced and smoothed relative to that for drop. The two 154 changes counterbalance each other so that the throughput of an L4S 155 flow will be roughly the same as a comparable non-L4S flow under the 156 same conditions. Nonetheless, the much more frequent ECN control 157 signals and the finer responses to these signals result in very low 158 queuing delay without compromising link utilization, and this low 159 delay can be maintained during high load. For instance, queuing 160 delay under heavy and highly varying load with the example DCTCP/ 161 DualQ solution cited below on a DSL or Ethernet link is sub- 162 millisecond on average and roughly 1 to 2 milliseconds at the 99th 163 percentile without losing link utilization [DualPI2Linux], [DCttH19]. 164 Note that the inherent queuing delay while waiting to acquire a 165 discontinuous medium such as WiFi has to be minimized in its own 166 right, so it would be additional to the above (see section 6.3 of 167 [I-D.ietf-tsvwg-l4s-arch]). 169 L4S relies on 'scalable' congestion controls for these delay 170 properties and for preserving low delay as flow rate scales, hence 171 the name. The congestion control used in Data Center TCP (DCTCP) is 172 an example of a scalable congestion control, but DCTCP is applicable 173 solely to controlled environments like data centres [RFC8257], 174 because it is too aggressive to co-exist with existing TCP-Reno- 175 friendly traffic. The DualQ Coupled AQM, which is defined in a 176 complementary experimental specification 177 [I-D.ietf-tsvwg-aqm-dualq-coupled], is an AQM framework that enables 178 scalable congestion controls derived from DCTCP to co-exist with 179 existing traffic, each getting roughly the same flow rate when they 180 compete under similar conditions. Note that a scalable congestion 181 control is still not safe to deploy on the Internet unless it 182 satisfies the requirements listed in Section 4. 184 L4S is not only for elastic (TCP-like) traffic - there are scalable 185 congestion controls for real-time media, such as the L4S variant of 186 the SCReAM [RFC8298] real-time media congestion avoidance technique 187 (RMCAT). The factor that distinguishes L4S from Classic traffic is 188 its behaviour in response to congestion. The transport wire 189 protocol, e.g. TCP, QUIC, SCTP, DCCP, RTP/RTCP, is orthogonal (and 190 therefore not suitable for distinguishing L4S from Classic packets). 192 The L4S identifier defined in this document is the key piece that 193 distinguishes L4S from 'Classic' (e.g. Reno-friendly) traffic. 
It 194 gives an incremental migration path so that suitably modified network 195 bottlenecks can distinguish and isolate existing Classic traffic from 196 L4S traffic to prevent the former from degrading the very low delay 197 and loss of the new scalable transports, without harming Classic 198 performance at these bottlenecks. Initial implementation of the 199 separate parts of the system has been motivated by the performance 200 benefits. 202 1.1. Latency, Loss and Scaling Problems 204 Latency is becoming the critical performance factor for many (most?) 205 applications on the public Internet, e.g. interactive Web, Web 206 services, voice, conversational video, interactive video, interactive 207 remote presence, instant messaging, online gaming, remote desktop, 208 cloud-based applications, and video-assisted remote control of 209 machinery and industrial processes. In the 'developed' world, 210 further increases in access network bit-rate offer diminishing 211 returns, whereas latency is still a multi-faceted problem. In the 212 last decade or so, much has been done to reduce propagation time by 213 placing caches or servers closer to users. However, queuing remains 214 a major intermittent component of latency. 216 The Diffserv architecture provides Expedited Forwarding [RFC3246], so 217 that low latency traffic can jump the queue of other traffic. If 218 growth in high-throughput latency-sensitive applications continues, 219 periods with solely latency-sensitive traffic will become 220 increasingly common on links where traffic aggregation is low. For 221 instance, on the access links dedicated to individual sites (homes, 222 small enterprises or mobile devices). These links also tend to 223 become the path bottleneck under load. During these periods, if all 224 the traffic were marked for the same treatment, at these bottlenecks 225 Diffserv would make no difference. Instead, it becomes imperative to 226 remove the underlying causes of any unnecessary delay. 228 The bufferbloat project has shown that excessively-large buffering 229 ('bufferbloat') has been introducing significantly more delay than 230 the underlying propagation time. These delays appear only 231 intermittently--only when a capacity-seeking (e.g. TCP) flow is long 232 enough for the queue to fill the buffer, making every packet in other 233 flows sharing the buffer sit through the queue. 235 Active queue management (AQM) was originally developed to solve this 236 problem (and others). Unlike Diffserv, which gives low latency to 237 some traffic at the expense of others, AQM controls latency for _all_ 238 traffic in a class. In general, AQM methods introduce an increasing 239 level of discard from the buffer the longer the queue persists above 240 a shallow threshold. This gives sufficient signals to capacity- 241 seeking (aka. greedy) flows to keep the buffer empty for its intended 242 purpose: absorbing bursts. However, RED [RFC2309] and other 243 algorithms from the 1990s were sensitive to their configuration and 244 hard to set correctly. So, this form of AQM was not widely deployed. 246 More recent state-of-the-art AQM methods, e.g. FQ-CoDel [RFC8290], 247 PIE [RFC8033], Adaptive RED [ARED01], are easier to configure, 248 because they define the queuing threshold in time not bytes, so it is 249 invariant for different link rates. 
However, no matter how good the 250 AQM, the sawtoothing sending window of a Classic congestion control 251 will either cause queuing delay to vary or cause the link to be 252 underutilized. Even with a perfectly tuned AQM, the additional 253 queuing delay will be of the same order as the underlying speed-of- 254 light delay across the network, thereby roughly doubling the total 255 round-trip time. 257 If a sender's own behaviour is introducing queuing delay variation, 258 no AQM in the network can 'un-vary' the delay without significantly 259 compromising link utilization. Even flow-queuing (e.g. [RFC8290]), 260 which isolates one flow from another, cannot isolate a flow from the 261 delay variations it inflicts on itself. Therefore those applications 262 that need to seek out high bandwidth but also need low latency will 263 have to migrate to scalable congestion control. 265 Altering host behaviour is not enough on its own though. Even if 266 hosts adopt low latency behaviour (scalable congestion controls), 267 they need to be isolated from the behaviour of existing Classic 268 congestion controls that induce large queue variations. L4S enables 269 that migration by providing latency isolation in the network and 270 distinguishing the two types of packets that need to be isolated: L4S 271 and Classic. L4S isolation can be achieved with a queue per flow 272 (e.g. [RFC8290]) but a DualQ [I-D.ietf-tsvwg-aqm-dualq-coupled] is 273 sufficient, and actually gives better tail latency. Both approaches 274 are addressed in this document. 276 The DualQ solution was developed to make very low latency available 277 without requiring per-flow queues at every bottleneck. This was 278 because per-flow-queuing (FQ) has well-known downsides - not least 279 the need to inspect transport layer headers in the network, which 280 makes it incompatible with privacy approaches such as IPSec VPN 281 tunnels, and incompatible with link layer queue management, where 282 transport layer headers can be hidden, e.g. 5G. 284 Latency is not the only concern addressed by L4S: It was known when 285 TCP congestion avoidance was first developed that it would not scale 286 to high bandwidth-delay products (footnote 6 of Jacobson and Karels 287 [TCP-CA]). Given regular broadband bit-rates over WAN distances are 288 already [RFC3649] beyond the scaling range of Reno congestion 289 control, 'less unscalable' Cubic [RFC8312] and 290 Compound [I-D.sridharan-tcpm-ctcp] variants of TCP have been 291 successfully deployed. However, these are now approaching their 292 scaling limits. Unfortunately, fully scalable congestion controls 293 such as DCTCP [RFC8257] outcompete Classic ECN congestion controls 294 sharing the same queue, which is why they have been confined to 295 private data centres or research testbeds. 297 It turns out that these scalable congestion control algorithms that 298 solve the latency problem can also solve the scalability problem of 299 Classic congestion controls. The finer sawteeth in the congestion 300 window have low amplitude, so they cause very little queuing delay 301 variation and the average time to recover from one congestion signal 302 to the next (the average duration of each sawtooth) remains 303 invariant, which maintains constant tight control as flow-rate 304 scales. A background paper [DCttH19] gives the full explanation of 305 why the design solves both the latency and the scaling problems, both 306 in plain English and in more precise mathematical form. 
The
explanation is summarised without the maths in Section 4 of the L4S
architecture document [I-D.ietf-tsvwg-l4s-arch].

1.2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in
[RFC2119]. In this document, these words will appear with that
interpretation only when in ALL CAPS. Lower case uses of these words
are not to be interpreted as carrying RFC-2119 significance.

Note: [I-D.ietf-tsvwg-l4s-arch] repeats the following definitions,
but if there are accidental differences those below take precedence.

Classic Congestion Control: A congestion control behaviour that can
co-exist with standard Reno [RFC5681] without causing
significantly negative impact on its flow rate [RFC5033]. With
Classic congestion controls, such as Reno or Cubic, because flow
rate has scaled since TCP congestion control was first designed in
1988, it now takes hundreds of round trips (and growing) to
recover after a congestion signal (whether a loss or an ECN mark)
as shown in the examples in section 5.1 of
[I-D.ietf-tsvwg-l4s-arch] and in [RFC3649]. Therefore control of
queuing and utilization becomes very slack, and the slightest
disturbances (e.g. from new flows starting) prevent a high rate
from being attained.

Scalable Congestion Control: A congestion control where the average
time from one congestion signal to the next (the recovery time)
remains invariant as the flow rate scales, all other factors being
equal. This maintains the same degree of control over queueing
and utilization whatever the flow rate, as well as ensuring that
high throughput is robust to disturbances. For instance, DCTCP
averages 2 congestion signals per round-trip whatever the flow
rate, as do other recently developed scalable congestion controls,
e.g. Relentless TCP [Mathis09], TCP Prague
[I-D.briscoe-iccrg-prague-congestion-control], [PragueLinux],
BBRv2 [BBRv2] and the L4S variant of SCReAM for real-time
media [SCReAM], [RFC8298]. See Section 4.3 for more explanation.

Classic service: The Classic service is intended for all the
congestion control behaviours that co-exist with Reno [RFC5681]
(e.g. Reno itself, Cubic [RFC8312], Compound
[I-D.sridharan-tcpm-ctcp], TFRC [RFC5348]). The term 'Classic
queue' means a queue providing the Classic service.

Low-Latency, Low-Loss Scalable throughput (L4S) service: The 'L4S'
service is intended for traffic from scalable congestion control
algorithms, such as TCP Prague
[I-D.briscoe-iccrg-prague-congestion-control], which was derived
from DCTCP [RFC8257]. The L4S service is for more general traffic
than just TCP Prague--it allows the set of congestion controls
with similar scaling properties to Prague to evolve, such as the
examples listed above (Relentless, SCReAM). The term 'L4S queue'
means a queue providing the L4S service.

The terms Classic or L4S can also qualify other nouns, such as
'queue', 'codepoint', 'identifier', 'classification', 'packet',
'flow'. For example: an L4S packet means a packet with an L4S
identifier sent from an L4S congestion control.
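As a rough illustration of the contrast between these two
behaviours, the following Python sketch estimates the time to
recover from a single congestion signal as the flow rate scales. It
is only a back-of-the-envelope aid to intuition, not part of the
terminology: the 25 ms base RTT, the 1500 byte segments and the
textbook Reno response (halve the window, then add one segment per
round trip) are illustrative assumptions.

   # Rough contrast between a Classic (Reno-like) and a scalable
   # (DCTCP-like) congestion control: time to recover from a single
   # congestion signal as the flow rate scales.  Illustrative only.

   RTT = 0.025           # assumed base round-trip time in seconds
   MSS_BITS = 1500 * 8   # assumed segment size in bits

   for rate_mbps in (10, 100, 1000, 10000):
       cwnd = rate_mbps * 1e6 * RTT / MSS_BITS  # window in segments
       # Reno halves its window, then adds one segment per round
       # trip, so it needs roughly cwnd/2 round trips to recover.
       reno_recovery_s = (cwnd / 2) * RTT
       # A scalable control sees ~2 marks per round trip and
       # recovers in about half a round trip, whatever the rate.
       scalable_recovery_s = RTT / 2
       print(f"{rate_mbps:>6} Mb/s: cwnd ~{cwnd:7.0f} segments,"
             f" Reno recovery ~{reno_recovery_s:8.2f} s,"
             f" scalable recovery ~{scalable_recovery_s:.4f} s")

At 1 Gb/s and 25 ms, the Reno-like flow needs on the order of a
thousand round trips (tens of seconds) to recover, whereas the
scalable flow's recovery time stays at half a round trip.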
369 Both Classic and L4S services can cope with a proportion of 370 unresponsive or less-responsive traffic as well, but in the L4S 371 case its rate has to be smooth enough or low enough not to build a 372 queue (e.g. DNS, VoIP, game sync datagrams, etc). 374 Reno-friendly: The subset of Classic traffic that is friendly to the 375 standard Reno congestion control defined for TCP in [RFC5681]. 376 The TFRC spec. [RFC5348] indirectly implies that 'friendly' is 377 defined as "generally within a factor of two of the sending rate 378 of a TCP flow under the same conditions". Reno-friendly is used 379 here in place of 'TCP-friendly', given the latter has become 380 imprecise, because the TCP protocol is now used with so many 381 different congestion control behaviours, and Reno is used in non- 382 TCP transports such as QUIC [RFC9000]. 384 Classic ECN: The original Explicit Congestion Notification (ECN) 385 protocol [RFC3168], which requires ECN signals to be treated the 386 same as drops, both when generated in the network and when 387 responded to by the sender. For L4S, the names used for the four 388 codepoints of the 2-bit IP-ECN field are unchanged from those 389 defined in [RFC3168]: Not ECT, ECT(0), ECT(1) and CE, where ECT 390 stands for ECN-Capable Transport and CE stands for Congestion 391 Experienced. A packet marked with the CE codepoint is termed 392 'ECN-marked' or sometimes just 'marked' where the context makes 393 ECN obvious. 395 Site: A home, mobile device, small enterprise or campus, where the 396 network bottleneck is typically the access link to the site. Not 397 all network arrangements fit this model but it is a useful, widely 398 applicable generalization. 400 1.3. Scope 402 The new L4S identifier defined in this specification is applicable 403 for IPv4 and IPv6 packets (as for Classic ECN [RFC3168]). It is 404 applicable for the unicast, multicast and anycast forwarding modes. 406 The L4S identifier is an orthogonal packet classification to the 407 Differentiated Services Code Point (DSCP) [RFC2474]. Section 5.4 408 explains what this means in practice. 410 This document is intended for experimental status, so it does not 411 update any standards track RFCs. Therefore it depends on [RFC8311], 412 which is a standards track specification that: 414 * updates the ECN proposed standard [RFC3168] to allow experimental 415 track RFCs to relax the requirement that an ECN mark must be 416 equivalent to a drop (when the network applies markings and/or 417 when the sender responds to them). For instance, in the ABE 418 experiment [RFC8511] this permits a sender to respond less to ECN 419 marks than to drops; 421 * changes the status of the experimental ECN nonce [RFC3540] to 422 historic; 424 * makes consequent updates to the following additional proposed 425 standard RFCs to reflect the above two bullets: 427 - ECN for RTP [RFC6679]; 429 - the congestion control specifications of various DCCP 430 congestion control identifier (CCID) profiles [RFC4341], 431 [RFC4342], [RFC5622]. 433 This document is about identifiers that are used for interoperation 434 between hosts and networks. So the audience is broad, covering 435 developers of host transports and network AQMs, as well as covering 436 how operators might wish to combine various identifiers, which would 437 require flexibility from equipment developers. 439 2. Choice of L4S Packet Identifier: Requirements 441 This subsection briefly records the process that led to the chosen 442 L4S identifier. 
444 The identifier for packets using the Low Latency, Low Loss, Scalable 445 throughput (L4S) service needs to meet the following requirements: 447 * it SHOULD survive end-to-end between source and destination end- 448 points: across the boundary between host and network, between 449 interconnected networks, and through middleboxes; 451 * it SHOULD be visible at the IP layer; 453 * it SHOULD be common to IPv4 and IPv6 and transport-agnostic; 455 * it SHOULD be incrementally deployable; 457 * it SHOULD enable an AQM to classify packets encapsulated by outer 458 IP or lower-layer headers; 460 * it SHOULD consume minimal extra codepoints; 462 * it SHOULD be consistent on all the packets of a transport layer 463 flow, so that some packets of a flow are not served by a different 464 queue to others. 466 Whether the identifier would be recoverable if the experiment failed 467 is a factor that could be taken into account. However, this has not 468 been made a requirement, because that would favour schemes that would 469 be easier to fail, rather than those more likely to succeed. 471 It is recognised that any choice of identifier is unlikely to satisfy 472 all these requirements, particularly given the limited space left in 473 the IP header. Therefore a compromise will always be necessary, 474 which is why all the above requirements are expressed with the word 475 'SHOULD' not 'MUST'. 477 After extensive assessment of alternative schemes, "ECT(1) and CE 478 codepoints" was chosen as the best compromise. Therefore this scheme 479 is defined in detail in the following sections, while Appendix B 480 records its pros and cons against the above requirements. 482 3. L4S Packet Identification 484 The L4S treatment is an experimental track alternative packet marking 485 treatment to the Classic ECN treatment in [RFC3168], which has been 486 updated by [RFC8311] to allow experiments such as the one defined in 487 the present specification. [RFC4774] discusses some of the issues 488 and evaluation criteria when defining alternative ECN semantics. 489 Like Classic ECN, L4S ECN identifies both network and host behaviour: 490 it identifies the marking treatment that network nodes are expected 491 to apply to L4S packets, and it identifies packets that have been 492 sent from hosts that are expected to comply with a broad type of 493 sending behaviour. 495 For a packet to receive L4S treatment as it is forwarded, the sender 496 sets the ECN field in the IP header to the ECT(1) codepoint. See 497 Section 4 for full transport layer behaviour requirements, including 498 feedback and congestion response. 500 A network node that implements the L4S service always classifies 501 arriving ECT(1) packets for L4S treatment and by default classifies 502 CE packets for L4S treatment unless the heuristics described in 503 Section 5.3 are employed. See Section 5 for full network element 504 behaviour requirements, including classification, ECN-marking and 505 interaction of the L4S identifier with other identifiers and per-hop 506 behaviours. 508 4. Transport Layer Behaviour (the 'Prague Requirements') 509 4.1. Codepoint Setting 511 A sender that wishes a packet to receive L4S treatment as it is 512 forwarded, MUST set the ECN field in the IP header (v4 or v6) to the 513 ECT(1) codepoint. 515 4.2. Prerequisite Transport Feedback 517 For a transport protocol to provide scalable congestion control 518 (Section 4.3) it MUST provide feedback of the extent of CE marking on 519 the forward path. 
When ECN was added to TCP [RFC3168], the feedback
method reported no more than one CE mark per round trip. Some
transport protocols derived from TCP mimic this behaviour while
others report the accurate extent of ECN marking. This means that
some transport protocols will need to be updated as a prerequisite
for scalable congestion control. The position for a few well-known
transport protocols is given below.

TCP: Support for the accurate ECN feedback requirements [RFC7560]
(such as that provided by AccECN [I-D.ietf-tcpm-accurate-ecn]) by
both ends is a prerequisite for scalable congestion control in
TCP. Therefore, the presence of ECT(1) in the IP headers even in
one direction of a TCP connection will imply that both ends
support accurate ECN feedback. However, the converse does not
apply. So even if both ends support AccECN, either of the two
ends can choose not to use a scalable congestion control, whatever
the other end's choice.

SCTP: A suitable ECN feedback mechanism for SCTP could add a chunk
to report the number of received CE marks
(e.g. [I-D.stewart-tsvwg-sctpecn]), and update the ECN feedback
protocol sketched out in Appendix A of the standards track
specification of SCTP [RFC4960].

RTP over UDP: A prerequisite for scalable congestion control is for
both (all) ends of one media-level hop to signal ECN support
[RFC6679] and use the new generic RTCP feedback format of
[RFC8888]. The presence of ECT(1) implies that both (all) ends of
that media-level hop support ECN. However, the converse does not
apply. So each end of a media-level hop can independently choose
not to use a scalable congestion control, even if both ends
support ECN.

QUIC: Support for sufficiently fine-grained ECN feedback is provided
by the v1 IETF QUIC transport [RFC9000].

DCCP: The ACK vector in DCCP [RFC4340] is already sufficient to
report the extent of CE marking as needed by a scalable congestion
control.

4.3. Prerequisite Congestion Response

As a condition for a host to send packets with the L4S identifier
(ECT(1)), it SHOULD implement a congestion control behaviour that
ensures that, in steady state, the average duration between induced
ECN marks does not increase as flow rate scales up, all other factors
being equal. This is termed a scalable congestion control. This
invariant duration ensures that, as flow rate scales, the average
period with no feedback information about capacity does not become
excessive. It also ensures that queue variations remain small,
without having to sacrifice utilization.

With a congestion control that sawtooths to probe capacity, this
duration is called the recovery time, because each time the sawtooth
yields, on average it takes this time to recover to its previous high
point. A scalable congestion control does not have to sawtooth, but
it has to coexist with scalable congestion controls that do.

For instance, for DCTCP [RFC8257], TCP Prague
[I-D.briscoe-iccrg-prague-congestion-control], [PragueLinux] and the
L4S variant of SCReAM [RFC8298], the average recovery time is always
half a round trip (or half a reference round trip), whatever the flow
rate.
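As a numerical illustration of this invariance, the following Python
fragment uses the common idealized steady-state approximations (a
scalable window of about 2/p segments and the Reno-friendly
square-root law of about sqrt(1.5/p), neither of which is normative
here) to show that the scalable response settles where it receives
about 2 marks per round trip whatever the marking probability p,
whereas the number of signals per round trip seen by a Reno-like
flow shrinks as its window grows:

   # Steady-state window versus marking/drop probability p, using
   # common textbook approximations (illustrative, not normative).
   from math import sqrt

   for p in (0.1, 0.01, 0.001):
       scalable_cwnd = 2 / p      # window inversely proportional to p
       reno_cwnd = sqrt(1.5 / p)  # Classic square-root law
       print(f"p = {p:<5}: scalable cwnd ~{scalable_cwnd:7.0f},"
             f" marks/RTT ~{p * scalable_cwnd:.1f};"
             f" Reno cwnd ~{reno_cwnd:6.1f},"
             f" signals/RTT ~{p * reno_cwnd:.2f}")

The constant two marks per round trip for the scalable control
corresponds to the invariant recovery time of half a round trip
described above.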
583 As with all transport behaviours, a detailed specification (probably 584 an experimental RFC) is expected for each congestion control, 585 following the guidelines for specifying new congestion control 586 algorithms in [RFC5033]. In addition it is expected to document 587 these L4S-specific matters, specifically the timescale over which the 588 proportionality is averaged, and control of burstiness. The recovery 589 time requirement above is worded as a 'SHOULD' rather than a 'MUST' 590 to allow reasonable flexibility for such implementations. 592 The condition 'all other factors being equal', allows the recovery 593 time to be different for different round trip times, as long as it 594 does not increase with flow rate for any particular RTT. 596 Saying that the recovery time remains roughly invariant is equivalent 597 to saying that the number of ECN CE marks per round trip remains 598 invariant as flow rate scales, all other factors being equal. For 599 instance, an average recovery time of half of 1 RTT is equivalent to 600 2 ECN marks per round trip. For those familiar with steady-state 601 congestion response functions, it is also equivalent to say that the 602 congestion window is inversely proportional to the proportion of 603 bytes in packets marked with the CE codepoint (see section 2 of 604 [PI2]). 606 In order to coexist safely with other Internet traffic, a scalable 607 congestion control MUST NOT tag its packets with the ECT(1) codepoint 608 unless it complies with the following bulleted requirements: 610 1. A scalable congestion control MUST be capable of being replaced 611 by a Classic congestion control (by application and/or by 612 administrative control). If a Classic congestion control is 613 activated, it will not tag its packets with the ECT(1) codepoint 614 (see Appendix A.1.3 for rationale). 616 2. As well as responding to ECN markings, a scalable congestion 617 control MUST react to packet loss in a way that will coexist 618 safely with Classic congestion controls such as standard Reno 619 [RFC5681], as required by [RFC5033] (see Appendix A.1.4 for 620 rationale). 622 3. In uncontrolled environments, monitoring MUST be implemented to 623 support detection of problems with an ECN-capable AQM at the path 624 bottleneck that appears not to support L4S and might be in a 625 shared queue. Such monitoring SHOULD be applied to live traffic 626 that is using Scalable congestion control. Alternatively, 627 monitoring need not be applied to live traffic, if monitoring has 628 been arranged to cover the paths that live traffic takes through 629 uncontrolled environments. 631 A function to detect the above problems with an ECN-capable AQM 632 MUST also be implemented. The detection function SHOULD be 633 capable of making the congestion control adapt its ECN-marking 634 response to coexist safely with Classic congestion controls such 635 as standard Reno [RFC5681], as required by [RFC5033]. 636 Alternatively, if adaptation is not implemented and problems with 637 such an AQM are detected, the scalable congestion control MUST be 638 replaced by a Classic congestion control. 640 Note that a scalable congestion control is not expected to change 641 to setting ECT(0) while it transiently adapts to coexist with 642 Classic congestion controls, whereas a replacement congestion 643 control that solely behaves in the Classic way will set ECT(0). 645 See Appendix A.1.5 and [I-D.ietf-tsvwg-l4sops] for rationale. 647 4. 
In the range between the minimum likely RTT and typical RTTs 648 expected in the intended deployment scenario, a scalable 649 congestion control MUST converge towards a rate that is as 650 independent of RTT as is possible without compromising stability 651 or efficiency (see Appendix A.1.6 for rationale). 653 5. A scalable congestion control SHOULD remain responsive to 654 congestion when typical RTTs over the public Internet are 655 significantly smaller because they are no longer inflated by 656 queuing delay. It would be preferable for the minimum window of 657 a scalable congestion control to be lower than 1 segment rather 658 than use the timeout approach described for TCP in S.6.1.2 of 659 [RFC3168] (or an equivalent for other transports). However, a 660 lower minimum is not set as a formal requirement for L4S 661 experiments (see Appendix A.1.7 for rationale). 663 6. A scalable congestion control's loss detection SHOULD be 664 resilient to reordering over an adaptive time interval that 665 scales with throughput and adapts to reordering (as in 666 [RFC8985]), as opposed to counting only in fixed units of packets 667 (as in the 3 DupACK rule of [RFC5681] and [RFC6675], which is not 668 scalable). As data rates increase (e.g., due to new and/or 669 improved technology), congestion controls that detect loss by 670 counting in units of packets become more likely to incorrectly 671 treat reordering events as congestion-caused loss events (see 672 Appendix A.1.8 for further rationale). This requirement does not 673 apply to congestion controls that are solely used in controlled 674 environments where the network introduces hardly any reordering. 676 7. A scalable congestion control is expected to limit the queue 677 caused by bursts of packets. It would not seem necessary to set 678 the limit any lower than 10% of the minimum RTT expected in a 679 typical deployment (e.g. additional queuing of roughly 250 us for 680 the public Internet). This would be converted to a number of 681 packets under the worst-case assumption that the bottleneck link 682 capacity equals the current flow rate. No normative requirement 683 to limit bursts is given here and, until there is more industry 684 experience from the L4S experiment, it is not even known whether 685 one is needed - it seems to be in an L4S sender's self-interest 686 to limit bursts. 688 Each sender in a session can use a scalable congestion control 689 independently of the congestion control used by the receiver(s) when 690 they send data. Therefore there might be ECT(1) packets in one 691 direction and ECT(0) or Not-ECT in the other. 693 Later (Section 5.4.1.1) this document discusses the conditions for 694 mixing other "'Safe' Unresponsive Traffic" (e.g. DNS, LDAP, NTP, 695 voice, game sync packets) with L4S traffic. To be clear, although 696 such traffic can share the same queue as L4S traffic, it is not 697 appropriate for the sender to tag it as ECT(1), except in the 698 (unlikely) case that it satisfies the above conditions. 700 4.3.1. Guidance on Congestion Response in the RFC Series 702 RFC 3168 requires the congestion responses to a CE-marked packet and 703 a dropped packet to be the same. RFC 8311 is a standards-track 704 update to RFC 3168 intended to enable experimentation with ECN, 705 including the L4S experiment. 
RFC 8311 allows an experimental 706 congestion control's response to a CE-marked packet to differ from 707 the response to a dropped packet, provided that the differences are 708 documented in an experimental RFC, such as the present document. 710 BCP 124 [RFC4774] gives guidance to protocol designers, when 711 specifying alternative semantics for the ECN field. RFC 8311 712 explained that it did not need to update the best current practice in 713 BCP 124 in order to relax the 'equivalence with drop' requirement 714 because, although BCP 124 quotes the same requirement from RFC 3168, 715 the BCP does not impose requirements based on it. BCP124 describes 716 three options for incremental deployment, with Option 3 (in 717 Section 4.3 of BCP 124) fitting the L4S case. This requires end- 718 nodes to respond to CE marks "in a way that is friendly to flows 719 using IETF-conformant congestion control." This echoes other general 720 congestion control requirements in the RFC series, for example 721 [RFC5033], which says "...congestion controllers that have a 722 significantly negative impact on traffic using standard congestion 723 control may be suspect", or [RFC8085] concerning UDP congestion 724 control says "Bulk-transfer applications that choose not to implement 725 TFRC or TCP-like windowing SHOULD implement a congestion control 726 scheme that results in bandwidth (capacity) use that competes fairly 727 with TCP within an order of magnitude." 729 The third normative bullet in Section 4.3 above (which concerns L4S 730 response to congestion from a Classic ECN AQM) aims to ensure that 731 these 'coexistence' requirements are satisfied, but it makes some 732 compromises. This subsection highlights and justifies those 733 compromises and Appendix A.1.5 and [I-D.ietf-tsvwg-l4sops] give 734 detailed analysis, examples and references (the normative text in 735 that bullet takes precedence if any informative elaboration leads to 736 ambiguity). The approach is based on an assessment of the risk of 737 harm, which is a combination of the prevalence of the conditions 738 necessary for harm to occur, and the potential severity of the harm 739 if they do. 741 Prevalence: There are three cases: 743 * Drop Tail: Coexistence between L4S and Classic flows is not in 744 doubt where the bottleneck does not support any form of ECN, 745 which has remained by far the most prevalent case since the ECN 746 RFC was published in 2001. 748 * L4S: Coexistence is not in doubt if the bottleneck supports 749 L4S. 751 * Classic ECN [RFC3168]: The compromises centre around cases 752 where the bottleneck supports Classic ECN but not L4S. But it 753 depends on which sub-case: 755 - Shared Queue with Classic ECN: The members of the Transport 756 Working group are not aware of any current deployments of 757 single-queue Classic ECN bottlenecks in the Internet. 758 Nonetheless, at the scale of the Internet, rarity need not 759 imply small numbers, nor that there will be rarity in 760 future. 762 - Per-Flow-queues with Classic ECN: Most AQMs with per-flow- 763 queuing (FQ) deployed from 2012 onwards had Classic ECN 764 enabled by default, specifically FQ-CoDel [RFC8290] and 765 COBALT [COBALT]. 
But the compromises only apply to the
second of two further sub-cases:

o With per-flow-queuing, co-existence between Classic and
L4S flows is not normally a problem, because different
flows are not meant to coexist within the same queue,

o However, the isolation between L4S and Classic flows is
not perfect in cases where the hashes of flow IDs collide
or where multiple flows within a layer-3 VPN are
encapsulated within one flow ID.

To summarize, the coexistence problem is confined to cases of
imperfect flow isolation in an FQ, or in potential cases where a
Classic ECN AQM has been deployed in a shared queue (see
[I-D.ietf-tsvwg-l4sops] for further details including recent
surveys attempting to quantify prevalence). Further, if one of
these cases does occur, the coexistence problem does not arise
unless sources of Classic and L4S flows are simultaneously sharing
the same bottleneck queue (e.g. different applications in the same
household) and flows of each type have to be large enough to
coincide for long enough for any throughput imbalance to have
developed.

Severity: Where long-running L4S and Classic flows coincide in a
shared queue, testing [ecn-fallback] has found that the imbalance
in average throughput between an L4S and a Classic flow can reach
25:1 in favour of L4S in the worst case. However, when capacity
is most scarce, the Classic flow gets a higher proportion of the
link, for instance over a 4 Mb/s link the throughput ratio is
below ~10:1 over paths with a base RTT below 100 ms, and falls
below ~5:1 for base RTTs below 20 ms.

These throughput ratios can clearly fall well outside current RFC
guidance on coexistence. However, the tendency towards leaving a
greater share for Classic flows at lower link rate and the very
limited prevalence of the conditions necessary for harm to occur led
to the possibility of allowing the RFC requirements to be
compromised, albeit briefly:

* The recommended approach is still to detect and adapt to a Classic
ECN AQM in real-time, which is fully consistent with all the RFCs
on coexistence. In other words, the "SHOULD"s in the third bullet
of Section 4.3 above expect the sender to implement something
similar to the proof of concept code that detects the presence of
a Classic ECN AQM and falls back to a Classic congestion response
within a few round trips [ecn-fallback]. However, although this
code reliably detects a Classic ECN AQM, the current code can also
wrongly categorize an L4S AQM as Classic, most often in cases when
link rate is low or RTT is high. Although this is the safe way
round, and although implementers are expected to be able to
improve on this proof of concept, concerns have been raised that
implementers might lose faith in such detection and disable it.

* Therefore the third bullet in Section 4.3 above allows a
compromise where coexistence could diverge from the requirements
in the RFC Series briefly, but mandatory monitoring is required,
in order to detect such cases and trigger remedial action. This
approach tolerates a brief divergence from the RFCs given the
likely low prevalence and given harm here means a flow progresses
more slowly than otherwise, but it does progress.
[I-D.ietf-tsvwg-l4sops] outlines a range of example remedial
actions that include alterations either to the sender or to the
network.
However, the final normative requirement in the third
bullet of Section 4.3 above places ultimate responsibility for
remedial action on the sender. If coexistence problems with a
Classic ECN AQM are detected (implying they have not been resolved
by the network), it says the sender "MUST" revert to a Classic
congestion control.

[I-D.ietf-tsvwg-l4sops] also gives example ways in which L4S
congestion controls can be rolled out initially in lower risk
scenarios.

4.4. Filtering or Smoothing of ECN Feedback

Section 5.2 below specifies that an L4S AQM is expected to signal L4S
ECN immediately, to avoid introducing delay due to filtering or
smoothing. This contrasts with a Classic AQM, which filters out
variations in the queue before signalling ECN marking or drop. In
the L4S architecture [I-D.ietf-tsvwg-l4s-arch], responsibility for
smoothing out these variations shifts to the sender's congestion
control.

This shift of responsibility has the advantage that each sender can
smooth variations over a timescale proportionate to its own RTT.
Whereas, in the Classic approach, the network doesn't know the RTTs
of any of the flows, so it has to smooth out variations for a worst-
case RTT to ensure stability. For all the typical flows with shorter
RTT than the worst-case, this makes congestion control unnecessarily
sluggish.

This also gives an L4S sender the choice not to smooth, depending on
its context (start-up, congestion avoidance, etc). Therefore, this
document places no requirement on an L4S congestion control to smooth
out variations in any particular way. Implementers are encouraged to
openly publish the approach they take to smoothing, and the results
and experience they gain during the L4S experiment.

5. Network Node Behaviour

5.1. Classification and Re-Marking Behaviour

A network node that implements the L4S service:

* MUST classify arriving ECT(1) packets for L4S treatment, unless
overridden by another classifier (e.g., see Section 5.4.1.2);

* MUST classify arriving CE packets for L4S treatment as well,
unless overridden by another classifier or unless the exception
referred to next applies;
CE packets might have originated as ECT(1) or ECT(0), but the
above rule to classify them as if they originated as ECT(1) is the
safe choice (see Appendix B for rationale). The exception is
where some flow-aware in-network mechanism happens to be available
for distinguishing CE packets that originated as ECT(0), as
described in Section 5.3, but there is no implication that such a
mechanism is necessary.

An L4S AQM treatment follows similar codepoint transition rules to
those in RFC 3168. Specifically, the ECT(1) codepoint MUST NOT be
changed to any other codepoint than CE, and CE MUST NOT be changed to
any other codepoint. An ECT(1) packet is classified as ECN-capable
and, if congestion increases, an L4S AQM algorithm will increasingly
mark the ECN field as CE, otherwise forwarding packets unchanged as
ECT(1). Necessary conditions for an L4S marking treatment are
defined in Section 5.2.
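The classification and codepoint transition rules above can be
summarised in a few lines. The following Python sketch is only an
illustration of those rules, not a complete AQM: the function names
are hypothetical and the marking likelihood p_l is assumed to be
supplied by whatever L4S AQM algorithm is in use.

   # Sketch of the Section 5.1 rules: classify arriving packets into
   # the L4S (L) or Classic (C) queue by the IP-ECN field, and apply
   # the only re-marking an L4S AQM may make (ECT(1) -> CE).
   import random

   NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11  # RFC 3168 values

   def classify(ecn):
       """Return 'L' for the L4S queue or 'C' for the Classic queue."""
       if ecn in (ECT1, CE):  # CE treated as if it originated as ECT(1)
           return 'L'
       return 'C'             # ECT(0) and Not-ECT get the Classic AQM

   def l4s_treat(ecn, p_l):
       """ECT(1) may only change to CE; other codepoints are unchanged."""
       if ecn == ECT1 and random.random() < p_l:
           return CE
       return ecn

   # Example: an ECT(1) packet under a 5% marking likelihood.
   ecn = ECT1
   if classify(ecn) == 'L':
       ecn = l4s_treat(ecn, p_l=0.05)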
Under persistent overload an L4S marking treatment MUST begin
applying drop to L4S traffic until the overload episode has subsided,
as recommended for all AQM methods in [RFC7567] (Section 4.2.1),
which follows the similar advice in RFC 3168 (Section 7). During
overload, it MUST apply the same drop probability to L4S traffic as
it would to Classic traffic.

Where an L4S AQM is transport-aware, this requirement could be
satisfied by using drop in only the most overloaded individual per-
flow AQMs. In a DualQ with flow-aware queue protection (e.g.
[I-D.briscoe-docsis-q-protection]), this could be achieved by
redirecting packets in those flows contributing most to the overload
out of the L4S queue so that they are subjected to drop in the
Classic queue.

For backward compatibility in uncontrolled environments, a network
node that implements the L4S treatment MUST also implement an AQM
treatment for the Classic service as defined in Section 1.2. This
Classic AQM treatment need not mark ECT(0) packets, but if it does,
see Section 5.2 for the strengths of the markings relative to drop.
It MUST classify arriving ECT(0) and Not-ECT packets for treatment by
this Classic AQM (for the DualQ Coupled AQM, see the extensive
discussion on classification in Sections 2.3 and 2.5.1.1 of
[I-D.ietf-tsvwg-aqm-dualq-coupled]).

In case unforeseen problems arise with the L4S experiment, it MUST be
possible to configure an L4S implementation to disable the L4S
treatment. Once disabled, all packets of all ECN codepoints will
receive Classic treatment and ECT(1) packets MUST be treated as if
they were Not-ECT.

5.2. The Strength of L4S CE Marking Relative to Drop

The relative strengths of L4S CE and drop are irrelevant where AQMs
are implemented in separate queues per-application-flow, which are
then explicitly scheduled (e.g. with an FQ scheduler as in
[RFC8290]). Nonetheless, the relationship between them needs to be
defined for the coupling between L4S and Classic congestion signals
in a DualQ Coupled AQM [I-D.ietf-tsvwg-aqm-dualq-coupled], as below.

Unless an AQM node schedules application flows explicitly, the
likelihood that the AQM drops a Not-ECT Classic packet (p_C) MUST be
roughly proportional to the square of the likelihood that it would
have marked it if it had been an L4S packet (p_L). That is

   p_C ~= (p_L / k)^2

The constant of proportionality (k) does not have to be standardised
for interoperability, but a value of 2 is RECOMMENDED. The term
'likelihood' is used above to allow for marking and dropping to be
either probabilistic or deterministic.

This formula ensures that Scalable and Classic flows will converge to
roughly equal congestion windows, for the worst case of Reno
congestion control. This is because the congestion windows of
Scalable and Classic congestion controls are inversely proportional
to p_L and sqrt(p_C) respectively. So squaring p_C in the above
formula counterbalances the square root that characterizes Reno-
friendly flows.

Note that, contrary to RFC 3168, an AQM implementing the L4S and
Classic treatments does not mark an ECT(1) packet under the same
conditions that it would have dropped a Not-ECT packet, as allowed by
[RFC8311], which updates RFC 3168. However, if it marks ECT(0)
packets, it does so under the same conditions that it would have
dropped a Not-ECT packet [RFC3168].
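To make the counterbalancing concrete, the following Python sketch
derives the Classic drop likelihood from the L4S marking likelihood
using the formula above with the RECOMMENDED k = 2, and compares the
resulting steady-state windows. The window expressions (roughly
2/p_L for a scalable control and sqrt(1.5/p_C) for Reno) are the
usual textbook approximations, shown only to illustrate the effect
of the squaring; they are not part of this specification.

   # Coupling between L4S marking (p_L) and Classic drop (p_C):
   #     p_C ~= (p_L / k)^2, with k = 2 RECOMMENDED.
   # Window formulas are illustrative textbook approximations.
   from math import sqrt

   K = 2
   for p_l in (0.2, 0.1, 0.05, 0.02):
       p_c = (p_l / K) ** 2         # coupled Classic drop likelihood
       scalable_cwnd = 2 / p_l      # inversely proportional to p_L
       reno_cwnd = sqrt(1.5 / p_c)  # proportional to 1/sqrt(p_C)
       print(f"p_L = {p_l:4.2f}  p_C = {p_c:6.4f}"
             f"  scalable cwnd ~{scalable_cwnd:6.1f}"
             f"  Reno cwnd ~{reno_cwnd:6.1f}")

Across this range the two windows stay within roughly 25% of each
other, consistent with the aim of roughly equal flow rates under
similar conditions.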
Also, in the L4S architecture [I-D.ietf-tsvwg-l4s-arch], the sender,
not the network, is responsible for smoothing out variations in the
queue. So, an L4S AQM MUST signal congestion as soon as possible.
Then, an L4S sender generally interprets CE marking as an unsmoothed
signal.

This requirement does not prevent an L4S AQM from mixing in
additional congestion signals that are smoothed, such as the signals
from a Classic smoothed AQM that are coupled with unsmoothed L4S
signals in the coupled DualQ [I-D.ietf-tsvwg-aqm-dualq-coupled]. But
only as long as the onset of congestion can be signalled immediately,
and can be interpreted by the sender as if it has been signalled
immediately, which is important for interoperability.

5.3. Exception for L4S Packet Identification by Network Nodes with
Transport-Layer Awareness

To implement L4S packet classification, a network node does not need
to identify transport-layer flows. Nonetheless, if an L4S network
node classifies packets by their transport-layer flow ID and their
ECN field, and if all the ECT packets in a flow have been ECT(0), the
node MAY classify any CE packets in the same flow as if they were
Classic ECT(0) packets. In all other cases, a network node MUST
classify all CE packets as if they were ECT(1) packets. Examples of
such other cases are: i) if no ECT packets have yet been identified
in a flow; ii) if it is not desirable for a network node to identify
transport-layer flows; or iii) if some ECT packets in a flow have
been ECT(1) (this advice will need to be verified as part of L4S
experiments).

5.4. Interaction of the L4S Identifier with other Identifiers

The examples in this section concern how additional identifiers might
complement the L4S identifier to classify packets between class-based
queues. Firstly Section 5.4.1 considers two queues, L4S and Classic,
as in the Coupled DualQ AQM [I-D.ietf-tsvwg-aqm-dualq-coupled],
either alone (Section 5.4.1.1) or within a larger queuing hierarchy
(Section 5.4.1.2). Then Section 5.4.2 considers schemes that might
combine per-flow 5-tuples with other identifiers.

5.4.1. DualQ Examples of Other Identifiers Complementing L4S
Identifiers

5.4.1.1. Inclusion of Additional Traffic with L4S

In a typical case for the public Internet a network element that
implements L4S in a shared queue might want to classify some low-rate
but unresponsive traffic (e.g. DNS, LDAP, NTP, voice, game sync
packets) into the low latency queue to mix with L4S traffic. In this
case it would not be appropriate to call the queue an L4S queue,
because it is shared by L4S and non-L4S traffic. Instead it will be
called the low latency or L queue. The L queue then offers two
different treatments:

* The L4S treatment, which is a combination of the L4S AQM treatment
and a priority scheduling treatment;

* The low latency treatment, which is solely the priority scheduling
treatment, without ECN-marking by the AQM.

To identify packets for just the scheduling treatment, it would be
inappropriate to use the L4S ECT(1) identifier, because such traffic
is unresponsive to ECN marking. Examples of relevant non-ECN
identifiers are:

* address ranges of specific applications or hosts configured to be,
or known to be, safe, e.g. hard-coded IoT devices sending low
intensity traffic;

* certain low data-volume applications or protocols (e.g.
ARP, DNS); 1031 * specific Diffserv codepoints that indicate traffic with limited 1032 burstiness such as the EF (Expedited Forwarding [RFC3246]), Voice- 1033 Admit [RFC5865] or proposed NQB (Non-Queue-Building 1034 [I-D.ietf-tsvwg-nqb]) service classes or equivalent local-use 1035 DSCPs (see [I-D.briscoe-tsvwg-l4s-diffserv]). 1037 In summary, a network element that implements L4S in a shared queue 1038 MAY classify additional types of packets into the L queue based on 1039 identifiers other than the ECN field, but the types SHOULD be 'safe' 1040 to mix with L4S traffic, where 'safe' is explained in 1041 Section 5.4.1.1.1. 1043 A packet that carries one of these non-ECN identifiers to classify it 1044 into the L queue would not be subject to the L4S ECN marking 1045 treatment, unless it also carried an ECT(1) or CE codepoint. The 1046 specification of an L4S AQM MUST define the behaviour for packets 1047 with unexpected combinations of codepoints, e.g. a non-ECN-based 1048 classifier for the L queue, but ECT(0) in the ECN field (for examples 1049 see section 2.5.1.1 of [I-D.ietf-tsvwg-aqm-dualq-coupled]). 1051 For clarity, non-ECN identifiers, such as the examples itemized 1052 above, might be used by some network operators who believe they 1053 identify non-L4S traffic that would be safe to mix with L4S traffic. 1054 They are not alternative ways for a host to indicate that it is 1055 sending L4S packets. Only the ECT(1) ECN codepoint indicates to a 1056 network element that a host is sending L4S packets (and CE indicates 1057 that it could have originated as ECT(1)). Specifically ECT(1) 1058 indicates that the host claims its behaviour satisfies the 1059 prerequisite transport requirements in Section 4. 1061 In order to include non-L4S packets in the L queue, a network node 1062 MUST NOT alter Not-ECT or ECT(0) in the IP-ECN field to an L4S 1063 identifier. This ensures that these codepoints survive for any 1064 potential use later on the network path. 1066 5.4.1.1.1. 'Safe' Unresponsive Traffic 1068 The above section requires unresponsive traffic to be 'safe' to mix 1069 with L4S traffic. Ideally this means that the sender never sends any 1070 sequence of packets at a rate that exceeds the available capacity of 1071 the bottleneck link. However, typically an unresponsive transport 1072 does not even know the bottleneck capacity of the path, let alone its 1073 available capacity. Nonetheless, an application can be considered 1074 safe enough if it paces packets out (not necessarily completely 1075 regularly) such that its maximum instantaneous rate from packet to 1076 packet stays well below a typical broadband access rate. 1078 This is a vague but useful definition, because many low latency 1079 applications of interest, such as DNS, voice, game sync packets, RPC, 1080 ACKs, keep-alives, could match this description. 1082 Low rate streams such as voice and game sync packets, might not use 1083 continuously adapting ECN-based congestion control, but they ought to 1084 at least use a 'circuit-breaker' style of congestion response 1085 [RFC8083]. If the volume of traffic from unresponsive applications 1086 is high enough to overload the link, this will at least protect the 1087 capacity available to responsive applications. However, queuing 1088 delay in the L queue will probably rise to that controlled by the 1089 Classic (drop-based) AQM. 
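As an informal illustration of the 'safe enough' pacing criterion above, the following sketch checks that no packet-to-packet gap implies an instantaneous rate anywhere near a typical broadband access rate. It is a sketch only: the 40 Mb/s yardstick and the 1% margin standing in for 'well below' are assumptions of the example, not normative values.

   # Informal sketch only (not normative) of the pacing criterion in
   # Section 5.4.1.1.1.  The yardstick rate and margin are assumptions.

   TYPICAL_ACCESS_RATE_BPS = 40e6   # assumed typical broadband access rate
   WELL_BELOW_MARGIN = 0.01         # 'well below' taken as 1% for the sketch

   def is_paced_safely(packet_sizes_bytes, send_times_s) -> bool:
       """Return True if the instantaneous rate between every pair of
       consecutive packets stays well below the yardstick rate."""
       for size, t_prev, t_now in zip(packet_sizes_bytes[1:],
                                      send_times_s, send_times_s[1:]):
           inst_rate_bps = size * 8 / (t_now - t_prev)
           if inst_rate_bps > WELL_BELOW_MARGIN * TYPICAL_ACCESS_RATE_BPS:
               return False
       return True

   # e.g. 200 byte voice packets every 20 ms -> 80 kb/s, comfortably safe.
   assert is_paced_safely([200, 200, 200], [0.0, 0.02, 0.04])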
If a network operator considers that such 1090 self-restraint is not enough, it might want to police the L queue 1091 (see Section 8.2 of [I-D.ietf-tsvwg-l4s-arch]). 1093 5.4.1.2. Exclusion of Traffic From L4S Treatment 1095 To extend the above example, an operator might want to exclude some 1096 traffic from the L4S treatment for a policy reason, e.g. security 1097 (traffic from malicious sources) or commercial (e.g. initially the 1098 operator may wish to confine the benefits of L4S to business 1099 customers). 1101 In this exclusion case, the classifier MUST classify on the relevant 1102 locally-used identifiers (e.g. source addresses) before classifying 1103 the non-matching traffic on the end-to-end L4S ECN identifier. 1105 A network node MUST NOT alter the end-to-end L4S ECN identifier from 1106 L4S to Classic, because an operator decision to exclude certain 1107 traffic from L4S treatment is local-only. The end-to-end L4S 1108 identifier then survives for other operators to use, or indeed, they 1109 can apply their own policy, independently based on their own choice 1110 of locally-used identifiers. This approach also allows any operator 1111 to remove its locally-applied exclusions in future, e.g. if it wishes 1112 to widen the benefit of the L4S treatment to all its customers. 1114 A network node that supports L4S but excludes certain packets 1115 carrying the L4S identifier from L4S treatment MUST still apply 1116 marking or dropping that is compatible with an L4S congestion 1117 response. For instance, it could either drop such packets with the 1118 same likelihood as Classic packets or it could ECN-mark them with a 1119 likelihood appropriate to L4S traffic (e.g. the coupled probability 1120 in a DualQ coupled AQM) but aiming for the Classic delay target. It 1121 MUST NOT ECN-mark such packets with a Classic marking probability, 1122 which could confuse the sender. 1124 5.4.1.3. Generalized Combination of L4S and Other Identifiers 1126 L4S concerns low latency, which it can provide for all traffic 1127 without differentiation and without _necessarily_ affecting bandwidth 1128 allocation. Diffserv provides for differentiation of both bandwidth 1129 and low latency, but its control of latency depends on its control of 1130 bandwidth. The two can be combined if a network operator wants to 1131 control bandwidth allocation but it also wants to provide low latency 1132 - for any amount of traffic within one of these allocations of 1133 bandwidth (rather than only providing low latency by limiting 1134 bandwidth) [I-D.briscoe-tsvwg-l4s-diffserv]. 1136 The DualQ examples so far have been framed in the context of 1137 providing the default Best Efforts Per-Hop Behaviour (PHB) using two 1138 queues - a Low Latency (L) queue and a Classic (C) Queue. This 1139 single DualQ structure is expected to be the most common and useful 1140 arrangement. But, more generally, an operator might choose to 1141 control bandwidth allocation through a hierarchy of Diffserv PHBs at 1142 a node, and to offer one (or more) of these PHBs with a low latency 1143 and a Classic variant. 1145 In the first case, if we assume that a network element provides no 1146 PHBs except the DualQ, if a packet carries ECT(1) or CE, the network 1147 element would classify it for the L4S treatment irrespective of its 1148 DSCP. And, if a packet carried (say) the EF DSCP, the network 1149 element could classify it into the L queue irrespective of its ECN 1150 codepoint. 
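As an informal illustration of this first case, the following sketch classifies packets between the L and C queues of a standalone DualQ. It is a sketch only: it assumes the operator has locally chosen to admit EF-marked traffic (DSCP 46 [RFC3246]) to the L queue as in Section 5.4.1.1, which is a policy choice, not a requirement of this document.

   # Informal sketch only (not normative): standalone DualQ classification.
   # ECN field values per [RFC3168]; EF DSCP per [RFC3246].

   NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11
   EF_DSCP = 46          # locally chosen complement to the L4S identifier

   def classify(ecn: int, dscp: int) -> str:
       """Return 'L' for the low latency queue or 'C' for the Classic queue."""
       if ecn in (ECT1, CE):
           return 'L'    # L4S identifier: L4S ECN treatment plus the L queue
       if dscp == EF_DSCP:
           return 'L'    # scheduling treatment only, no L4S ECN marking
       return 'C'        # Not-ECT and ECT(0) receive the Classic treatment

   assert classify(ECT1, 0) == 'L' and classify(ECT0, EF_DSCP) == 'L'
   assert classify(NOT_ECT, 0) == 'C'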
However, where the DualQ is in a hierarchy of other PHBs, 1151 the classifier would classify some traffic into other PHBs based on 1152 DSCP before classifying between the low latency and Classic queues 1153 (based on ECT(1), CE and perhaps also the EF DSCP or other 1154 identifiers as in the above example). 1155 [I-D.briscoe-tsvwg-l4s-diffserv] gives a number of examples of such 1156 arrangements to address various requirements. 1158 [I-D.briscoe-tsvwg-l4s-diffserv] describes how an operator might use 1159 L4S to offer low latency as well as using Diffserv for bandwidth 1160 differentiation. It identifies two main types of approach, which can 1161 be combined: the operator might split certain Diffserv PHBs between 1162 L4S and a corresponding Classic service. Or it might split the L4S 1163 and/or the Classic service into multiple Diffserv PHBs. In either of 1164 these cases, a packet would have to be classified on its Diffserv and 1165 ECN codepoints. 1167 In summary, there are numerous ways in which the L4S ECN identifier 1168 (ECT(1) and CE) could be combined with other identifiers to achieve 1169 particular objectives. The following categorization articulates 1170 those that are valid, but it is not necessarily exhaustive. Those 1171 tagged 'Recommended-standard-use' could be set by the sending host or 1172 a network. Those tagged 'Local-use' would only be set by a network: 1174 1. Identifiers Complementing the L4S Identifier 1176 a. Including More Traffic in the L Queue 1178 (Could use Recommended-standard-use or Local-use identifiers) 1180 b. Excluding Certain Traffic from the L Queue 1182 (Local-use only) 1184 2. Identifiers to place L4S classification in a PHB Hierarchy 1186 (Could use Recommended-standard-use or Local-use identifiers) 1188 a. PHBs Before L4S ECN Classification 1190 b. PHBs After L4S ECN Classification 1192 5.4.2. Per-Flow Queuing Examples of Other Identifiers Complementing L4S 1193 Identifiers 1195 At a node with per-flow queueing (e.g. FQ-CoDel [RFC8290]), the L4S 1196 identifier could complement the Layer-4 flow ID as a further level of 1197 flow granularity (i.e. Not-ECT and ECT(0) queued separately from 1198 ECT(1) and CE packets). "Risk of reordering Classic CE packets" in 1199 Appendix B discusses the resulting ambiguity if packets originally 1200 marked ECT(0) are marked CE by an upstream AQM before they arrive at 1201 a node that classifies CE as L4S. It argues that the risk of 1202 reordering is vanishingly small and the consequence of such a low 1203 level of reordering is minimal. 1205 Alternatively, it could be assumed that it is not in a flow's own 1206 interest to mix Classic and L4S identifiers. Then the AQM could use 1207 the ECN field to switch itself between a Classic and an L4S AQM 1208 behaviour within one per-flow queue. For instance, for ECN-capable 1209 packets, the AQM might consist of a simple marking threshold and an 1210 L4S ECN identifier might simply select a shallower threshold than a 1211 Classic ECN identifier would. 1213 5.5. Limiting Packet Bursts from Links 1215 As well as senders needing to limit packet bursts (Section 4.3), 1216 links need to limit the degree of burstiness they introduce. In both 1217 cases (senders and links) this is a tradeoff, because batch-handling 1218 of packets is done for good reason, e.g. processing efficiency or to 1219 make efficient use of medium acquisition delay. 
Some take the 1220 attitude that there is no point reducing burst delay at the sender 1221 below that introduced by links (or vice versa). However, delay 1222 reduction proceeds by cutting down 'the longest pole in the tent', 1223 which turns the spotlight on the next longest, and so on. 1225 This document does not set any quantified requirements for links to 1226 limit burst delay, primarily because link technologies are outside 1227 the remit of L4S specifications. Nonetheless, the following two 1228 subsections outline opportunities for addressing bursty links in the 1229 process of L4S implementation and deployment. 1231 5.5.1. Limiting Packet Bursts from Links Fed by an L4S AQM 1233 It would not make sense to implement an L4S AQM that feeds into a 1234 particular link technology without also reviewing opportunities to 1235 reduce any form of burst delay introduced by that link technology. 1236 This would at least limit the bursts that the link would otherwise 1237 introduce into the onward traffic, which would cause jumpy feedback 1238 to the sender as well as potential extra queuing delay downstream. 1239 This document does not presume to even give guidance on an 1240 appropriate target for such burst delay until there is more industry 1241 experience of L4S. However, as suggested in Section 4.3 it would not 1242 seem necessary to limit bursts lower than roughly 10% of the minimum 1243 base RTT expected in the typical deployment scenario (e.g. 250 us 1244 burst duration for links within the public Internet). 1246 5.5.2. Limiting Packet Bursts from Links Upstream of an L4S AQM 1248 The initial scope of the L4S experiment is to deploy L4S AQMs at 1249 bottlenecks and L4S congestion controls at senders. This is expected 1250 to highlight interactions with the most bursty upstream links and 1251 lead operators to tune down the burstiness of those links in their 1252 network that are configurable, or failing that, to have to compromise 1253 on the delay target of some L4S AQMs. It might also require specific 1254 redesign work relevant to the most problematic link types. Such 1255 knock-on effects of initial L4S deployment would all be part of the 1256 learning from the L4S experiment. 1258 The details of such link changes are beyond the scope of the present 1259 document. Nonetheless, where L4S technology is being implemented on 1260 an outgoing interface of a device, it would make sense to consider 1261 opportunities for reducing bursts arriving at other incoming 1262 interface(s). For instance, where an L4S AQM is implemented to feed 1263 into the upstream WAN interface of a home gateway, there would be 1264 opportunities to alter the WiFi profiles sent out of any WiFi 1265 interfaces from the same device, in order to mitigate incoming bursts 1266 of aggregated WiFi frames from other WiFi stations. 1268 6. Behaviour of Tunnels and Encapsulations 1270 6.1. No Change to ECN Tunnels and Encapsulations in General 1272 The L4S identifier is expected to work through and within any tunnel 1273 without modification, as long as the tunnel propagates the ECN field 1274 in any of the ways that have been defined since the first variant in 1275 the year 2001 [RFC3168]. L4S will also work with (but does not rely 1276 on) any of the more recent updates to ECN propagation in [RFC4301], 1277 [RFC6040] or [I-D.ietf-tsvwg-rfc6040update-shim]. However, it is 1278 likely that some tunnels still do not implement ECN propagation at 1279 all. 
In these cases, L4S will work through such tunnels, but within 1280 them the outer header of L4S traffic will appear as Classic. 1282 AQMs are typically implemented where an IP-layer buffer feeds into a 1283 lower layer, so they are agnostic to link layer encapsulations. 1284 Where a bottleneck link is not IP-aware, the L4S identifier is still 1285 expected to work within any lower layer encapsulation without 1286 modification, as long as it propagates the ECN field as defined for the 1287 link technology, for example for MPLS [RFC5129] or TRILL 1288 [I-D.ietf-trill-ecn-support]. In some of these cases, e.g. layer-3 1289 Ethernet switches, the AQM accesses the IP layer header within the 1290 outer encapsulation, so again the L4S identifier is expected to work 1291 without modification. Nonetheless, the programme to define ECN for 1292 other lower layers is still in progress 1293 [I-D.ietf-tsvwg-ecn-encap-guidelines]. 1295 6.2. VPN Behaviour to Avoid Limitations of Anti-Replay 1297 If a mix of L4S and Classic packets is sent into the same security 1298 association (SA) of a virtual private network (VPN), and if the VPN 1299 egress is employing the optional anti-replay feature, it could 1300 inappropriately discard Classic packets (or discard the records in 1301 Classic packets) by mistaking their greater queuing delay for a 1302 replay attack (see "Dropped Packets for Tunnels with Replay 1303 Protection Enabled" in [Heist21] for the potential performance 1304 impact). This known problem is common to both IPsec [RFC4301] and 1305 DTLS [RFC6347] VPNs, given they use similar anti-replay window 1306 mechanisms. The mechanism used can only check for replay within its 1307 window, so if the window is smaller than the degree of reordering, it 1308 can only assume there might be a replay attack and discard all the 1309 packets behind the trailing edge of the window. The specifications 1310 of IPsec AH [RFC4302] and ESP [RFC4303] suggest that an implementer 1311 scales the size of the anti-replay window with interface speed, and 1312 the current draft of DTLS 1.3 [I-D.ietf-tls-dtls13] says "The 1313 receiver SHOULD pick a window large enough to handle any plausible 1314 reordering, which depends on the data rate." However, in practice, 1315 the size of a VPN's anti-replay window is not always scaled 1316 appropriately. 1318 If a VPN carrying traffic participating in the L4S experiment 1319 experiences inappropriate replay detection, the foremost remedy would 1320 be to ensure that the egress is configured to comply with the above 1321 window-sizing requirements. 1323 If an implementation of a VPN egress does not support a sufficiently 1324 large anti-replay window, e.g. due to hardware limitations, one of 1325 the temporary alternatives listed in order of preference below might 1326 be feasible instead: 1328 * If the VPN can be configured to classify packets into different 1329 SAs indexed by DSCP, apply the appropriate locally defined DSCPs 1330 to Classic and L4S packets. The DSCPs could be applied by the 1331 network (based on the least significant bit of the ECN field), or 1332 by the sending host. Such DSCPs would only need to survive as far 1333 as the VPN ingress.
1335 * If the above is not possible and it is necessary to use L4S, 1336 either of the following might be appropriate as a last resort: 1338 - disable anti-replay protection at the VPN egress, after 1339 considering the security implications (optional anti-replay is 1340 mandatory in both IPsec and DTLS); 1342 - configure the tunnel ingress not to propagate ECN to the outer, 1343 which would lose the benefits of L4S and Classic ECN over the 1344 VPN. 1346 Modification to VPN implementations is outside the present scope, 1347 which is why this section has so far focused on reconfiguration. 1348 Although this document does not define any requirements for VPN 1349 implementations, determining whether there is a need for such 1350 requirements could be one aspect of L4S experimentation. 1352 7. L4S Experiments 1354 This section describes open questions that L4S experiments ought to 1355 focus on. This section also documents outstanding open issues that 1356 will need to be investigated as part of L4S experimentation, given 1357 they could not be fully resolved during the WG phase. It also lists 1358 metrics that will need to be monitored during experiments 1359 (summarizing text elsewhere in L4S documents) and finally lists some 1360 potential future directions that researchers might wish to 1361 investigate. 1363 In addition to this section, [I-D.ietf-tsvwg-aqm-dualq-coupled] sets 1364 operational and management requirements for experiments with DualQ 1365 Coupled AQMs; and general operational and management requirements for 1366 experiments with L4S congestion controls are given in Section 4 and 1367 Section 5 above, e.g. co-existence and scaling requirements, 1368 incremental deployment arrangements. 1370 The specification of each scalable congestion control will need to 1371 include protocol-specific requirements for configuration and 1372 monitoring performance during experiments. Appendix A of [RFC5706] 1373 provides a helpful checklist. 1375 7.1. Open Questions 1377 L4S experiments would be expected to answer the following questions: 1379 * Have all the parts of L4S been deployed, and if so, what 1380 proportion of paths support it? 1382 - What types of L4S AQMs were deployed, e.g. FQ, coupled DualQ, 1383 uncoupled DualQ, other? And how prevalent was each? 1385 - Are the signalling patterns emitted by the deployed AQMs in any 1386 way different from those expected when the Prague requirements 1387 for endpoints were written? 1389 * Does use of L4S over the Internet result in significantly improved 1390 user experience? 1392 * Has L4S enabled novel interactive applications? 1394 * Did use of L4S over the Internet result in improvements to the 1395 following metrics: 1397 - queue delay (mean and 99th percentile) under various loads; 1399 - utilization; 1401 - starvation / fairness; 1403 - scaling range of flow rates and RTTs? 1405 * How dependent was the performance of L4S service on the bottleneck 1406 bandwidth or the path RTT? 1408 * How much do bursty links in the Internet affect L4S performance 1409 (see "Underutilization with Bursty Links" in [Heist21]) and how 1410 prevalent are they? How much limitation of burstiness from 1411 upstream links was needed and/or was realized - both at senders 1412 and at links, especially radio links - or how much did L4S target 1413 delay have to be increased to accommodate the bursts (see bullet 1414 #7 in Section 4.3 and Section 5.5.2)?
1416 * Is the initial experiment with mis-marked bursty traffic at high 1417 RTT (see "Underutilization with Bursty Traffic" in [Heist21]) 1418 indicative of similar problems at lower RTTs and, if so, how 1419 effective is the suggested remedy in Appendix A.1 of 1420 [I-D.ietf-tsvwg-aqm-dualq-coupled] (or other possible remedies)? 1422 * Was per-flow queue protection typically (un)necessary? 1424 - How well did overload protection or queue protection work? 1426 * How well did L4S flows coexist with Classic flows when sharing a 1427 bottleneck? 1429 - How frequently did problems arise? 1431 - What caused any coexistence problems, and were any problems due 1432 to single-queue Classic ECN AQMs (this assumes single-queue 1433 Classic ECN AQMs can be distinguished from FQ ones)? 1435 * How prevalent were problems with the L4S service due to tunnels / 1436 encapsulations that do not support ECN decapsulation? 1438 * How easy was it to implement a fully compliant L4S congestion 1439 control, over various different transport protocols (TCP, QUIC, 1440 RMCAT, etc.)? 1442 Monitoring for harm to other traffic, specifically bandwidth 1443 starvation or excess queuing delay, will need to be conducted 1444 alongside all early L4S experiments. It is hard, if not impossible, 1445 for an individual flow to measure its impact on other traffic. So 1446 such monitoring will need to be conducted using bespoke monitoring 1447 across flows and/or across classes of traffic. 1449 7.2. Open Issues 1451 * What is the best way forward to deal with L4S over single-queue 1452 Classic ECN AQM bottlenecks, given current problems with 1453 misdetecting L4S AQMs as Classic ECN AQMs? See 1454 [I-D.ietf-tsvwg-l4sops]. 1456 * Fixing the poor interaction between current L4S congestion 1457 controls and CoDel with only Classic ECN support during flow 1458 startup. Originally, this was due to a bug in the initialization 1459 of the congestion EWMA in the Linux implementation of TCP Prague. 1460 That was quickly fixed, which removed the main performance impact, 1461 but further improvement would be useful (either by modifying 1462 CoDel, Scalable congestion controls, or both). 1464 7.3. Future Potential 1466 Researchers might find that L4S opens up the following interesting 1467 areas for investigation: 1469 * Potential for faster convergence time and tracking of available 1470 capacity; 1472 * Potential for improvements to particular link technologies, and 1473 cross-layer interactions with them; 1475 * Potential for using virtual queues, e.g. to further reduce latency 1476 jitter, or to leave headroom for capacity variation in radio 1477 networks; 1479 * Development and specification of reverse path congestion control 1480 using L4S building blocks (e.g. AccECN, QUIC); 1482 * Once queuing delay is cut down, what becomes the 'second longest 1483 pole in the tent' (other than the speed of light)? 1485 * Novel alternatives to the existing set of L4S AQMs; 1486 * Novel applications enabled by L4S. 1488 8. IANA Considerations 1490 The 01 codepoint of the ECN Field of the IP header is specified by 1491 the present Experimental RFC. The process for an experimental RFC to 1492 assign this codepoint in the IP header (v4 and v6) is documented in 1493 Proposed Standard [RFC8311], which updates the Proposed Standard 1494 [RFC3168].
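As an informal reminder of where this codepoint sits, the following snippet extracts the ECN field (the two least significant bits of the former ToS / Traffic Class octet) and names it. It is an illustration only; the field layout is defined by [RFC3168], not by this snippet.

   # Illustration only: the ECN field occupies bits 6-7 of the former
   # ToS / Traffic Class octet, i.e. its two least significant bits.

   ECN_NAMES = {0b00: "Not-ECT", 0b01: "ECT(1)", 0b10: "ECT(0)", 0b11: "CE"}

   def ecn_codepoint(tos_byte: int) -> str:
       """Name the ECN codepoint carried in an IPv4 ToS or IPv6 Traffic
       Class octet."""
       return ECN_NAMES[tos_byte & 0b11]

   # An octet of 0x01 carries ECT(1), the L4S identifier.
   assert ecn_codepoint(0x01) == "ECT(1)"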
1496 When the present document is published as an RFC, IANA is asked to 1497 update the 01 entry in the registry, "ECN Field (Bits 6-7)" to the 1498 following (see https://www.iana.org/assignments/dscp-registry/dscp- 1499 registry.xhtml#ecn-field ): 1501 +========+=====================+=============================+ 1502 | Binary | Keyword | References | 1503 +========+=====================+=============================+ 1504 | 01 | ECT(1) (ECN-Capable | [RFC8311] [RFC Errata 5399] | 1505 | | Transport(1))[1] | [RFCXXXX] | 1506 +--------+---------------------+-----------------------------+ 1508 Table 1 1510 [XXXX is the number that the RFC Editor assigns to the present 1511 document (this sentence to be removed by the RFC Editor)]. 1513 9. Security Considerations 1515 Approaches to assure the integrity of signals using the new 1516 identifier are introduced in Appendix C.1. See the security 1517 considerations in the L4S architecture [I-D.ietf-tsvwg-l4s-arch] for 1518 further discussion of mis-use of the identifier, as well as extensive 1519 discussion of policing rate and latency in regard to L4S. 1521 If the anti-replay window of a VPN egress is too small, it will 1522 mistake deliberate delay differences as a replay attack, and discard 1523 higher delay packets (e.g. Classic) carried within the same security 1524 association (SA) as low delay packets (e.g. L4S). Section 6.2 1525 recommends that VPNs used in L4S experiments are configured with a 1526 sufficiently large anti-replay window, as required by the relevant 1527 specifications. It also discusses other alternatives. 1529 If a user taking part in the L4S experiment sets up a VPN without 1530 being aware of the above advice, and if the user allows anyone to 1531 send traffic into their VPN, they would open up a DoS vulnerability 1532 in which an attacker could induce the VPN's anti-replay mechanism to 1533 discard enough of the user's Classic (C) traffic (if they are 1534 receiving any) to cause a significant rate reduction. While the user 1535 is actively downloading C traffic, the attacker sends C traffic into 1536 the VPN to fill the remainder of the bottleneck link, then sends 1537 intermittent L4S packets to maximize the chance of exceeding the 1538 VPN's replay window. The user can prevent this attack by following 1539 the recommendations in Section 6.2. 1541 The recommendation to detect loss in time units prevents the ACK- 1542 splitting attacks described in [Savage-TCP]. 1544 10. Acknowledgements 1546 Thanks to Richard Scheffenegger, John Leslie, David Taeht, Jonathan 1547 Morton, Gorry Fairhurst, Michael Welzl, Mikael Abrahamsson and Andrew 1548 McGregor for the discussions that led to this specification. Ing-jyh 1549 (Inton) Tsang was a contributor to the early drafts of this document. 1550 And thanks to Mikael Abrahamsson, Lloyd Wood, Nicolas Kuhn, Greg 1551 White, Tom Henderson, David Black, Gorry Fairhurst, Brian Carpenter, 1552 Jake Holland, Rod Grimes, Richard Scheffenegger, Sebastian Moeller, 1553 Neal Cardwell, Praveen Balasubramanian, Reza Marandian Hagh, Pete 1554 Heist, Stuart Cheshire, Vidhi Goel, Mirja Kuehlewind and Ermin Sakic 1555 for providing help and reviewing this draft and thanks to Ingemar 1556 Johansson for reviewing and providing substantial text. Thanks to 1557 Sebastian Moeller for identifying the interaction with VPN anti- 1558 replay and to Jonathan Morton for identifying the attack based on 1559 this. 
Particular thanks to tsvwg chairs Gorry Fairhurst, David Black 1560 and Wes Eddy for patiently helping this and the other L4S drafts 1561 through the IETF process. Appendix A listing the Prague L4S 1562 Requirements is based on text authored by Marcelo Bagnulo Braun that 1563 was originally an appendix to [I-D.ietf-tsvwg-l4s-arch]. That text 1564 was in turn based on the collective output of the attendees listed in 1565 the minutes of a 'bar BoF' on DCTCP Evolution during IETF-94 1566 [TCPPrague]. 1568 The authors' contributions were part-funded by the European Community 1569 under its Seventh Framework Programme through the Reducing Internet 1570 Transport Latency (RITE) project (ICT-317700). Bob Briscoe was also 1571 funded partly by the Research Council of Norway through the TimeIn 1572 project, partly by CableLabs and partly by the Comcast Innovation 1573 Fund. The views expressed here are solely those of the authors. 1575 11. References 1577 11.1. Normative References 1579 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1580 Requirement Levels", BCP 14, RFC 2119, 1581 DOI 10.17487/RFC2119, March 1997, 1582 . 1584 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition 1585 of Explicit Congestion Notification (ECN) to IP", 1586 RFC 3168, DOI 10.17487/RFC3168, September 2001, 1587 . 1589 [RFC4774] Floyd, S., "Specifying Alternate Semantics for the 1590 Explicit Congestion Notification (ECN) Field", BCP 124, 1591 RFC 4774, DOI 10.17487/RFC4774, November 2006, 1592 . 1594 [RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P., 1595 and K. Carlberg, "Explicit Congestion Notification (ECN) 1596 for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August 1597 2012, . 1599 11.2. Informative References 1601 [A2DTCP] Zhang, T., Wang, J., Huang, J., Huang, Y., Chen, J., and 1602 Y. Pan, "Adaptive-Acceleration Data Center TCP", IEEE 1603 Transactions on Computers 64(6):1522-1533, June 2015, 1604 . 1607 [Ahmed19] Ahmed, A.S., "Extending TCP for Low Round Trip Delay", 1608 Masters Thesis, Uni Oslo , August 2019, 1609 . 1611 [Alizadeh-stability] 1612 Alizadeh, M., Javanmard, A., and B. Prabhakar, "Analysis 1613 of DCTCP: Stability, Convergence, and Fairness", ACM 1614 SIGMETRICS 2011 , June 2011, 1615 . 1618 [ARED01] Floyd, S., Gummadi, R., and S. Shenker, "Adaptive RED: An 1619 Algorithm for Increasing the Robustness of RED's Active 1620 Queue Management", ACIRI Technical Report , August 2001, 1621 . 1623 [BBRv2] Cardwell, N., "TCP BBR v2 Alpha/Preview Release", github 1624 repository; Linux congestion control module, 1625 . 1627 [COBALT] Palmei, J., Gupta, S., Imputato, P., Morton, J., 1628 Tahiliani, M., Avallone, S., and D. Taht, "Design and 1629 Evaluation of COBALT Queue Discipline", In Proc. IEEE 1630 Int'l Symp. on Local and Metropolitan Area Networks 2019, 1631 pp1--6, 2019, 1632 . 1634 [DCttH19] De Schepper, K., Bondarenko, O., Tilmans, O., and B. 1635 Briscoe, "`Data Centre to the Home': Ultra-Low Latency for 1636 All", Updated RITE project Technical Report , July 2019, 1637 . 1639 [DualPI2Linux] 1640 Albisser, O., De Schepper, K., Briscoe, B., Tilmans, O., 1641 and H. Steen, "DUALPI2 - Low Latency, Low Loss and 1642 Scalable (L4S) AQM", Proc. Linux Netdev 0x13 , March 2019, 1643 . 1646 [ecn-fallback] 1647 Briscoe, B. and A.S. Ahmed, "TCP Prague Fall-back on 1648 Detection of a Classic ECN AQM", bobbriscoe.net Technical 1649 Report TR-BB-2019-002, April 2020, 1650 . 1652 [Heist21] Heist, P. and J. Morton, "L4S Tests", github README, May 1653 2021, . 
1655 [I-D.briscoe-docsis-q-protection] 1656 Briscoe, B. and G. White, "The DOCSIS(r) Queue Protection 1657 Algorithm to Preserve Low Latency", Work in Progress, 1658 Internet-Draft, draft-briscoe-docsis-q-protection-01, 17 1659 December 2021, . 1662 [I-D.briscoe-iccrg-prague-congestion-control] 1663 Schepper, K. D., Tilmans, O., and B. Briscoe, "Prague 1664 Congestion Control", Work in Progress, Internet-Draft, 1665 draft-briscoe-iccrg-prague-congestion-control-00, 9 March 1666 2021, . 1669 [I-D.briscoe-tsvwg-l4s-diffserv] 1670 Briscoe, B., "Interactions between Low Latency, Low Loss, 1671 Scalable Throughput (L4S) and Differentiated Services", 1672 Work in Progress, Internet-Draft, draft-briscoe-tsvwg-l4s- 1673 diffserv-02, 4 November 2018, 1674 . 1677 [I-D.ietf-tcpm-accurate-ecn] 1678 Briscoe, B., Kühlewind, M., and R. Scheffenegger, "More 1679 Accurate ECN Feedback in TCP", Work in Progress, Internet- 1680 Draft, draft-ietf-tcpm-accurate-ecn-15, 12 July 2021, 1681 . 1684 [I-D.ietf-tcpm-generalized-ecn] 1685 Bagnulo, M. and B. Briscoe, "ECN++: Adding Explicit 1686 Congestion Notification (ECN) to TCP Control Packets", 1687 Work in Progress, Internet-Draft, draft-ietf-tcpm- 1688 generalized-ecn-08, 2 August 2021, 1689 . 1692 [I-D.ietf-tls-dtls13] 1693 Rescorla, E., Tschofenig, H., and N. Modadugu, "The 1694 Datagram Transport Layer Security (DTLS) Protocol Version 1695 1.3", Work in Progress, Internet-Draft, draft-ietf-tls- 1696 dtls13-43, 30 April 2021, 1697 . 1700 [I-D.ietf-trill-ecn-support] 1701 Eastlake, D. E. and B. Briscoe, "TRILL (TRansparent 1702 Interconnection of Lots of Links): ECN (Explicit 1703 Congestion Notification) Support", Work in Progress, 1704 Internet-Draft, draft-ietf-trill-ecn-support-07, 25 1705 February 2018, . 1708 [I-D.ietf-tsvwg-aqm-dualq-coupled] 1709 Schepper, K. D., Briscoe, B., and G. White, "DualQ Coupled 1710 AQMs for Low Latency, Low Loss and Scalable Throughput 1711 (L4S)", Work in Progress, Internet-Draft, draft-ietf- 1712 tsvwg-aqm-dualq-coupled-19, 3 November 2021, 1713 . 1716 [I-D.ietf-tsvwg-ecn-encap-guidelines] 1717 Briscoe, B. and J. Kaippallimalil, "Guidelines for Adding 1718 Congestion Notification to Protocols that Encapsulate IP", 1719 Work in Progress, Internet-Draft, draft-ietf-tsvwg-ecn- 1720 encap-guidelines-16, 25 May 2021, 1721 . 1724 [I-D.ietf-tsvwg-l4s-arch] 1725 Briscoe, B., Schepper, K. D., Bagnulo, M., and G. White, 1726 "Low Latency, Low Loss, Scalable Throughput (L4S) Internet 1727 Service: Architecture", Work in Progress, Internet-Draft, 1728 draft-ietf-tsvwg-l4s-arch-14, 8 November 2021, 1729 . 1732 [I-D.ietf-tsvwg-l4sops] 1733 White, G., "Operational Guidance for Deployment of L4S in 1734 the Internet", Work in Progress, Internet-Draft, draft- 1735 ietf-tsvwg-l4sops-02, 25 October 2021, 1736 . 1739 [I-D.ietf-tsvwg-nqb] 1740 White, G. and T. Fossati, "A Non-Queue-Building Per-Hop 1741 Behavior (NQB PHB) for Differentiated Services", Work in 1742 Progress, Internet-Draft, draft-ietf-tsvwg-nqb-08, 25 1743 October 2021, . 1746 [I-D.ietf-tsvwg-rfc6040update-shim] 1747 Briscoe, B., "Propagating Explicit Congestion Notification 1748 Across IP Tunnel Headers Separated by a Shim", Work in 1749 Progress, Internet-Draft, draft-ietf-tsvwg-rfc6040update- 1750 shim-14, 25 May 2021, 1751 . 1754 [I-D.sridharan-tcpm-ctcp] 1755 Sridharan, M., Tan, K., Bansal, D., and D. 
Thaler, 1756 "Compound TCP: A New TCP Congestion Control for High-Speed 1757 and Long Distance Networks", Work in Progress, Internet- 1758 Draft, draft-sridharan-tcpm-ctcp-02, 11 November 2008, 1759 . 1762 [I-D.stewart-tsvwg-sctpecn] 1763 Stewart, R. R., Tuexen, M., and X. Dong, "ECN for Stream 1764 Control Transmission Protocol (SCTP)", Work in Progress, 1765 Internet-Draft, draft-stewart-tsvwg-sctpecn-05, 15 January 1766 2014, . 1769 [LinuxPacedChirping] 1770 Misund, J. and B. Briscoe, "Paced Chirping - Rethinking 1771 TCP start-up", Proc. Linux Netdev 0x13 , March 2019, 1772 . 1774 [Mathis09] Mathis, M., "Relentless Congestion Control", PFLDNeT'09 , 1775 May 2009, . 1778 [Paced-Chirping] 1779 Misund, J., "Rapid Acceleration in TCP Prague", Masters 1780 Thesis , May 2018, 1781 . 1784 [PI2] De Schepper, K., Bondarenko, O., Tsang, I., and B. 1785 Briscoe, "PI^2 : A Linearized AQM for both Classic and 1786 Scalable TCP", Proc. ACM CoNEXT 2016 pp.105-119, December 1787 2016, 1788 . 1790 [PragueLinux] 1791 Briscoe, B., De Schepper, K., Albisser, O., Misund, J., 1792 Tilmans, O., Kühlewind, M., and A.S. Ahmed, "Implementing 1793 the `TCP Prague' Requirements for Low Latency Low Loss 1794 Scalable Throughput (L4S)", Proc. Linux Netdev 0x13 , 1795 March 2019, . 1798 [QV] Briscoe, B. and P. Hurtig, "Up to Speed with Queue View", 1799 RITE Technical Report D2.3; Appendix C.2, August 2015, 1800 . 1803 [RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, 1804 S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., 1805 Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, 1806 S., Wroclawski, J., and L. Zhang, "Recommendations on 1807 Queue Management and Congestion Avoidance in the 1808 Internet", RFC 2309, DOI 10.17487/RFC2309, April 1998, 1809 . 1811 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, 1812 "Definition of the Differentiated Services Field (DS 1813 Field) in the IPv4 and IPv6 Headers", RFC 2474, 1814 DOI 10.17487/RFC2474, December 1998, 1815 . 1817 [RFC3246] Davie, B., Charny, A., Bennet, J.C.R., Benson, K., Le 1818 Boudec, J.Y., Courtney, W., Davari, S., Firoiu, V., and D. 1819 Stiliadis, "An Expedited Forwarding PHB (Per-Hop 1820 Behavior)", RFC 3246, DOI 10.17487/RFC3246, March 2002, 1821 . 1823 [RFC3540] Spring, N., Wetherall, D., and D. Ely, "Robust Explicit 1824 Congestion Notification (ECN) Signaling with Nonces", 1825 RFC 3540, DOI 10.17487/RFC3540, June 2003, 1826 . 1828 [RFC3649] Floyd, S., "HighSpeed TCP for Large Congestion Windows", 1829 RFC 3649, DOI 10.17487/RFC3649, December 2003, 1830 . 1832 [RFC4301] Kent, S. and K. Seo, "Security Architecture for the 1833 Internet Protocol", RFC 4301, DOI 10.17487/RFC4301, 1834 December 2005, . 1836 [RFC4302] Kent, S., "IP Authentication Header", RFC 4302, 1837 DOI 10.17487/RFC4302, December 2005, 1838 . 1840 [RFC4303] Kent, S., "IP Encapsulating Security Payload (ESP)", 1841 RFC 4303, DOI 10.17487/RFC4303, December 2005, 1842 . 1844 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1845 Congestion Control Protocol (DCCP)", RFC 4340, 1846 DOI 10.17487/RFC4340, March 2006, 1847 . 1849 [RFC4341] Floyd, S. and E. Kohler, "Profile for Datagram Congestion 1850 Control Protocol (DCCP) Congestion Control ID 2: TCP-like 1851 Congestion Control", RFC 4341, DOI 10.17487/RFC4341, March 1852 2006, . 1854 [RFC4342] Floyd, S., Kohler, E., and J. 
Padhye, "Profile for 1855 Datagram Congestion Control Protocol (DCCP) Congestion 1856 Control ID 3: TCP-Friendly Rate Control (TFRC)", RFC 4342, 1857 DOI 10.17487/RFC4342, March 2006, 1858 . 1860 [RFC4960] Stewart, R., Ed., "Stream Control Transmission Protocol", 1861 RFC 4960, DOI 10.17487/RFC4960, September 2007, 1862 . 1864 [RFC5033] Floyd, S. and M. Allman, "Specifying New Congestion 1865 Control Algorithms", BCP 133, RFC 5033, 1866 DOI 10.17487/RFC5033, August 2007, 1867 . 1869 [RFC5129] Davie, B., Briscoe, B., and J. Tay, "Explicit Congestion 1870 Marking in MPLS", RFC 5129, DOI 10.17487/RFC5129, January 1871 2008, . 1873 [RFC5348] Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP 1874 Friendly Rate Control (TFRC): Protocol Specification", 1875 RFC 5348, DOI 10.17487/RFC5348, September 2008, 1876 . 1878 [RFC5562] Kuzmanovic, A., Mondal, A., Floyd, S., and K. 1879 Ramakrishnan, "Adding Explicit Congestion Notification 1880 (ECN) Capability to TCP's SYN/ACK Packets", RFC 5562, 1881 DOI 10.17487/RFC5562, June 2009, 1882 . 1884 [RFC5622] Floyd, S. and E. Kohler, "Profile for Datagram Congestion 1885 Control Protocol (DCCP) Congestion ID 4: TCP-Friendly Rate 1886 Control for Small Packets (TFRC-SP)", RFC 5622, 1887 DOI 10.17487/RFC5622, August 2009, 1888 . 1890 [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion 1891 Control", RFC 5681, DOI 10.17487/RFC5681, September 2009, 1892 . 1894 [RFC5706] Harrington, D., "Guidelines for Considering Operations and 1895 Management of New Protocols and Protocol Extensions", 1896 RFC 5706, DOI 10.17487/RFC5706, November 2009, 1897 . 1899 [RFC5865] Baker, F., Polk, J., and M. Dolly, "A Differentiated 1900 Services Code Point (DSCP) for Capacity-Admitted Traffic", 1901 RFC 5865, DOI 10.17487/RFC5865, May 2010, 1902 . 1904 [RFC5925] Touch, J., Mankin, A., and R. Bonica, "The TCP 1905 Authentication Option", RFC 5925, DOI 10.17487/RFC5925, 1906 June 2010, . 1908 [RFC6040] Briscoe, B., "Tunnelling of Explicit Congestion 1909 Notification", RFC 6040, DOI 10.17487/RFC6040, November 1910 2010, . 1912 [RFC6077] Papadimitriou, D., Ed., Welzl, M., Scharf, M., and B. 1913 Briscoe, "Open Research Issues in Internet Congestion 1914 Control", RFC 6077, DOI 10.17487/RFC6077, February 2011, 1915 . 1917 [RFC6347] Rescorla, E. and N. Modadugu, "Datagram Transport Layer 1918 Security Version 1.2", RFC 6347, DOI 10.17487/RFC6347, 1919 January 2012, . 1921 [RFC6660] Briscoe, B., Moncaster, T., and M. Menth, "Encoding Three 1922 Pre-Congestion Notification (PCN) States in the IP Header 1923 Using a Single Diffserv Codepoint (DSCP)", RFC 6660, 1924 DOI 10.17487/RFC6660, July 2012, 1925 . 1927 [RFC6675] Blanton, E., Allman, M., Wang, L., Jarvinen, I., Kojo, M., 1928 and Y. Nishida, "A Conservative Loss Recovery Algorithm 1929 Based on Selective Acknowledgment (SACK) for TCP", 1930 RFC 6675, DOI 10.17487/RFC6675, August 2012, 1931 . 1933 [RFC7560] Kuehlewind, M., Ed., Scheffenegger, R., and B. Briscoe, 1934 "Problem Statement and Requirements for Increased Accuracy 1935 in Explicit Congestion Notification (ECN) Feedback", 1936 RFC 7560, DOI 10.17487/RFC7560, August 2015, 1937 . 1939 [RFC7567] Baker, F., Ed. and G. Fairhurst, Ed., "IETF 1940 Recommendations Regarding Active Queue Management", 1941 BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015, 1942 . 1944 [RFC7713] Mathis, M. and B. Briscoe, "Congestion Exposure (ConEx) 1945 Concepts, Abstract Mechanism, and Requirements", RFC 7713, 1946 DOI 10.17487/RFC7713, December 2015, 1947 . 
1949 [RFC8033] Pan, R., Natarajan, P., Baker, F., and G. White, 1950 "Proportional Integral Controller Enhanced (PIE): A 1951 Lightweight Control Scheme to Address the Bufferbloat 1952 Problem", RFC 8033, DOI 10.17487/RFC8033, February 2017, 1953 . 1955 [RFC8083] Perkins, C. and V. Singh, "Multimedia Congestion Control: 1956 Circuit Breakers for Unicast RTP Sessions", RFC 8083, 1957 DOI 10.17487/RFC8083, March 2017, 1958 . 1960 [RFC8085] Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage 1961 Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085, 1962 March 2017, . 1964 [RFC8257] Bensley, S., Thaler, D., Balasubramanian, P., Eggert, L., 1965 and G. Judd, "Data Center TCP (DCTCP): TCP Congestion 1966 Control for Data Centers", RFC 8257, DOI 10.17487/RFC8257, 1967 October 2017, . 1969 [RFC8290] Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys, 1970 J., and E. Dumazet, "The Flow Queue CoDel Packet Scheduler 1971 and Active Queue Management Algorithm", RFC 8290, 1972 DOI 10.17487/RFC8290, January 2018, 1973 . 1975 [RFC8298] Johansson, I. and Z. Sarker, "Self-Clocked Rate Adaptation 1976 for Multimedia", RFC 8298, DOI 10.17487/RFC8298, December 1977 2017, . 1979 [RFC8311] Black, D., "Relaxing Restrictions on Explicit Congestion 1980 Notification (ECN) Experimentation", RFC 8311, 1981 DOI 10.17487/RFC8311, January 2018, 1982 . 1984 [RFC8312] Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and 1985 R. Scheffenegger, "CUBIC for Fast Long-Distance Networks", 1986 RFC 8312, DOI 10.17487/RFC8312, February 2018, 1987 . 1989 [RFC8511] Khademi, N., Welzl, M., Armitage, G., and G. Fairhurst, 1990 "TCP Alternative Backoff with ECN (ABE)", RFC 8511, 1991 DOI 10.17487/RFC8511, December 2018, 1992 . 1994 [RFC8888] Sarker, Z., Perkins, C., Singh, V., and M. Ramalho, "RTP 1995 Control Protocol (RTCP) Feedback for Congestion Control", 1996 RFC 8888, DOI 10.17487/RFC8888, January 2021, 1997 . 1999 [RFC8985] Cheng, Y., Cardwell, N., Dukkipati, N., and P. Jha, "The 2000 RACK-TLP Loss Detection Algorithm for TCP", RFC 8985, 2001 DOI 10.17487/RFC8985, February 2021, 2002 . 2004 [RFC9000] Iyengar, J., Ed. and M. Thomson, Ed., "QUIC: A UDP-Based 2005 Multiplexed and Secure Transport", RFC 9000, 2006 DOI 10.17487/RFC9000, May 2021, 2007 . 2009 [Savage-TCP] 2010 Savage, S., Cardwell, N., Wetherall, D., and T. Anderson, 2011 "TCP Congestion Control with a Misbehaving Receiver", ACM 2012 SIGCOMM Computer Communication Review 29(5):71--78, 2013 October 1999. 2015 [SCReAM] Johansson, I., "SCReAM", github repository; , 2016 . 2019 [sub-mss-prob] 2020 Briscoe, B. and K. De Schepper, "Scaling TCP's Congestion 2021 Window for Small Round Trip Times", BT Technical Report 2022 TR-TUB8-2015-002, May 2015, 2023 . 2025 [TCP-CA] Jacobson, V. and M.J. Karels, "Congestion Avoidance and 2026 Control", Laurence Berkeley Labs Technical Report , 2027 November 1988, . 2029 [TCPPrague] 2030 Briscoe, B., "Notes: DCTCP evolution 'bar BoF': Tue 21 Jul 2031 2015, 17:40, Prague", tcpprague mailing list archive , 2032 July 2015, . 2035 [VCP] Xia, Y., Subramanian, L., Stoica, I., and S. Kalyanaraman, 2036 "One more bit is enough", Proc. SIGCOMM'05, ACM CCR 2037 35(4)37--48, 2005, 2038 . 2040 Appendix A. Rationale for the 'Prague L4S Requirements' 2042 This appendix is informative, not normative. It gives a list of 2043 modifications to current scalable congestion controls so that they 2044 can be deployed over the public Internet and coexist safely with 2045 existing traffic. 
The list complements the normative requirements in 2046 Section 4 that a sender has to comply with before it can set the L4S 2047 identifier in packets it sends into the Internet. As well as 2048 rationale for safety improvements (the requirements in Section 4), 2049 this appendix also includes preferable performance improvements 2050 (optimizations). 2052 The requirements and recommendations in Section 4 have become known 2053 as the Prague L4S Requirements, because they were originally 2054 identified at an ad hoc meeting during IETF-94 in Prague [TCPPrague]. 2055 They were originally called the 'TCP Prague Requirements', but they 2056 are not solely applicable to TCP, so the name and wording have been 2057 generalized for all transport protocols, and the name 'TCP Prague' is 2058 now used for a specific implementation of the requirements. 2060 At the time of writing, DCTCP [RFC8257] is the most widely used 2061 scalable transport protocol. In its current form, DCTCP is specified 2062 to be deployable only in controlled environments. Deploying it in 2063 the public Internet would lead to a number of issues, both from the 2064 safety and the performance perspective. The modifications and 2065 additional mechanisms listed in this section will be necessary for 2066 its deployment over the global Internet. Where an example is needed, 2067 DCTCP is used as a base, but the requirements in Section 4 apply 2068 equally to other scalable congestion controls, covering adaptive 2069 real-time media, etc., not just capacity-seeking behaviours. 2071 A.1. Rationale for the Requirements for Scalable Transport Protocols 2073 A.1.1. Use of L4S Packet Identifier 2075 Description: A scalable congestion control needs to distinguish the 2076 packets it sends from those sent by Classic congestion controls (see 2077 the precise normative requirement wording in Section 4.1). 2079 Motivation: It needs to be possible for a network node to classify 2080 L4S packets without flow state into a queue that applies an L4S ECN 2081 marking behaviour and isolates L4S packets from the queuing delay of 2082 Classic packets. 2084 A.1.2. Accurate ECN Feedback 2086 Description: The transport protocol for a scalable congestion control 2087 needs to provide timely, accurate feedback about the extent of ECN 2088 marking experienced by all packets (see the precise normative 2089 requirement wording in Section 4.2). 2091 Motivation: Classic congestion controls only need feedback about the 2092 existence of a congestion episode within a round trip, not precisely 2093 how many packets were marked with ECN or dropped. Therefore, in 2094 2001, when ECN feedback was added to TCP [RFC3168], it could not 2095 inform the sender of more than one ECN mark per RTT. Since then, 2096 requirements for more accurate ECN feedback in TCP have been defined 2097 in [RFC7560], and [I-D.ietf-tcpm-accurate-ecn] specifies a change to 2098 the TCP protocol to satisfy these requirements. Most other transport 2099 protocols already satisfy this requirement (see Section 4.2). 2101 A.1.3. Capable of Replacement by Classic Congestion Control 2103 Description: It needs to be possible to replace the implementation of 2104 a scalable congestion control with a Classic control (see the precise 2105 normative requirement wording in Section 4.3).
2107 Motivation: L4S is an experimental protocol, therefore it seems 2108 prudent to be able to disable it at source in case of insurmountable 2109 problems, perhaps due to some unexpected interaction on a particular 2110 sender; over a particular path or network; with a particular receiver; 2111 or even ultimately an insurmountable problem with the experiment as a 2112 whole. 2114 A.1.4. Fall back to Classic Congestion Control on Packet Loss 2116 Description: As well as responding to ECN markings in a scalable way, 2117 a scalable congestion control needs to react to packet loss in a way 2118 that will coexist safely with a Reno congestion control [RFC5681] 2119 (see the precise normative requirement wording in Section 4.3). 2121 Motivation: Part of the safety conditions for deploying a scalable 2122 congestion control on the public Internet is to make sure that it 2123 behaves properly when it builds a queue at a network bottleneck that 2124 has not been upgraded to support L4S. Packet loss can have many 2125 causes, but it usually has to be conservatively assumed that it is a 2126 sign of congestion. Therefore, on detecting packet loss, a scalable 2127 congestion control will need to fall back to Classic congestion 2128 control behaviour. If it does not comply, it could starve Classic 2129 traffic. 2131 A scalable congestion control can be used for different types of 2132 transport, e.g. for real-time media or for reliable transport like 2133 TCP. Therefore, the particular Classic congestion control behaviour 2134 to fall back on will need to be dependent on the specific congestion 2135 control implementation. In the particular case of DCTCP, the DCTCP 2136 specification [RFC8257] states that "It is RECOMMENDED that an 2137 implementation deal with loss episodes in the same way as 2138 conventional TCP." For safe deployment, Section 4.3 requires any 2139 specification of a scalable congestion control for the public 2140 Internet to define the above requirement as a "MUST". 2142 Even though a bottleneck is L4S capable, it might still become 2143 overloaded and have to drop packets. In this case, the sender may 2144 receive a high proportion of packets marked with the CE bit set and 2145 also experience loss. Current DCTCP implementations each react 2146 differently to this situation. At least one implementation reacts 2147 only to the drop signal (e.g. by halving the CWND) and at least 2148 another DCTCP implementation reacts to both signals (e.g. by halving 2149 the CWND due to the drop and also further reducing the CWND based on 2150 the proportion of marked packets). A third approach for the public 2151 Internet has been proposed that adjusts the loss response to result 2152 in a halving when combined with the ECN response. We believe that 2153 further experimentation is needed to understand what is the best 2154 behaviour for the public Internet, which may or may not be one of these 2155 existing approaches. 2157 A.1.5. Coexistence with Classic Congestion Control at Classic ECN 2158 bottlenecks 2160 Description: Monitoring has to be in place so that a non-L4S but ECN- 2161 capable AQM can be detected at path bottlenecks. This is in case 2162 such an AQM has been implemented in a shared queue, in which case any 2163 long-running scalable flow would predominate over any simultaneous 2164 long-running Classic flow sharing the queue.
The precise requirement 2165 wording in Section 4.3 is written so that such a problem could either 2166 be resolved in real-time, or via administrative intervention. 2168 Motivation: Similarly to the discussion in Appendix A.1.4, this 2169 requirement in Section 4.3 is a safety condition to ensure an L4S 2170 congestion control coexists well with Classic flows when it builds a 2171 queue at a shared network bottleneck that has not been upgraded to 2172 support L4S. Nonetheless, if necessary, it is considered reasonable 2173 to resolve such problems over management timescales (possibly 2174 involving human intervention) because: 2176 * although a Classic flow can considerably reduce its throughput in 2177 the face of a competing scalable flow, it still makes progress and 2178 does not starve; 2180 * implementations of a Classic ECN AQM in a queue that is intended 2181 to be shared are believed to be rare; 2183 * detection of such AQMs is not always clear-cut; so focused out-of- 2184 band testing (or even contacting the relevant network operator) 2185 would improve certainty. 2187 Therefore, the relevant normative requirement (Section 4.3) is 2188 divided into three stages: monitoring, detection and action: 2190 Monitoring: Monitoring involves collection of the measurement data 2191 to be analysed. Monitoring is expressed as a 'MUST' for 2192 uncontrolled environments, although the placement of the 2193 monitoring function is left open. Whether monitoring has to be 2194 applied in real-time is expressed as a 'SHOULD'. This allows for 2195 the possibility that the operator of an L4S sender (e.g. a CDN) 2196 might prefer to test out-of-band for signs of Classic ECN AQMs, 2197 perhaps to avoid continually consuming resources to monitor live 2198 traffic. 2200 Detection: Detection involves analysis of the monitored data to 2201 detect the likelihood of a Classic ECN AQM. The requirements 2202 recommend that detection occurs live in real-time. However, 2203 detection is allowed to be deferred (e.g. it might involve further 2204 testing targeted at candidate AQMs); 2206 Action: This involves the act of switching the sender to a Classic 2207 congestion control. This might occur in real-time within the 2208 congestion control for the subsequent duration of a flow, or it 2209 might involve administrative action to switch to Classic 2210 congestion control for a specific interface or for a certain set 2211 of destination addresses. 2213 Instead of the sender taking action itself, the operator of the 2214 sender (e.g. a CDN) might prefer to ask the network operator to 2215 modify the Classic AQM's treatment of L4S packets; or to ensure 2216 L4S packets bypass the AQM; or to upgrade the AQM to support L4S. 2217 Once L4S flows no longer shared the Classic ECN AQM they would 2218 obviously no longer detect it, and the requirement to act on it 2219 would no longer apply. 2221 The whole set of normative requirements concerning Classic ECN AQMs 2222 in Section 4.3 is worded so that it does not apply in controlled 2223 environments, such as private networks or data centre networks. CDN 2224 servers placed within an access ISP's network can be considered as a 2225 single controlled environment, but any onward networks served by the 2226 access network, including all the attached customer networks, would 2227 be unlikely to fall under the same degree of coordinated control. 2229 Monitoring is expressed as a 'MUST' for these uncontrolled segments 2230 of paths (e.g. 
beyond the access ISP in a home network), because 2231 there is a possibility that there might be a shared queue Classic ECN 2232 AQM in that segment. Nonetheless, the intent of the wording is to 2233 only require occasional monitoring of these uncontrolled regions, and 2234 not to burden CDN operators if monitoring never uncovers any 2235 potential problems, given it is anyway in the CDN's own interests not 2236 to degrade the service of its own customers. 2238 More detailed discussion of all the above options and alternatives 2239 can be found in [I-D.ietf-tsvwg-l4sops]. 2241 Having said all the above, the approach recommended in Section 4.3 is 2242 to monitor, detect and act in real-time on live traffic. A passive 2243 monitoring algorithm to detect a Classic ECN AQM at the bottleneck 2244 and fall back to Classic congestion control is described in an 2245 extensive technical report [ecn-fallback], which also provides a link 2246 to Linux source code, and a large online visualization of its 2247 evaluation results. Very briefly, the algorithm primarily monitors 2248 RTT variation using the same algorithm that maintains the mean 2249 deviation of TCP's smoothed RTT, but it smooths over a duration of 2250 the order of a Classic sawtooth. The outcome is also conditioned on 2251 other metrics such as the presence of CE marking and congestion 2252 avoidance phase having stabilized. The report also identifies 2253 further work to improve the approach, for instance improvements with 2254 low capacity links and combining the measurements with a cache of 2255 what had been learned about a path in previous connections. The 2256 report also suggests alternative approaches. 2258 Although using passive measurements within live traffic (as above) 2259 can detect a Classic ECN AQM, it is much harder (perhaps impossible) 2260 to determine whether or not the AQM is in a shared queue. 2261 Nonetheless, this is much easier using active test traffic out-of- 2262 band, because two flows can be used. Section 4 of the same report 2263 [ecn-fallback] describes a simple technique to detect a Classic ECN 2264 AQM and determine whether it is in a shared queue, summarized here. 2266 An L4S-enabled test server could be set up so that, when a test 2267 client accesses it, it serves a script that gets the client to open 2268 two parallel long-running flows. It could serve one with a Classic 2269 congestion control (C, that sets ECT(0)) and one with a scalable CC 2270 (L, that sets ECT(1)). If neither flow induces any ECN marks, it can 2271 be presumed the path does not contain a Classic ECN AQM. If either 2272 flow induces some ECN marks, the server could measure the relative 2273 flow rates and round trip times of the two flows. Table 2 shows the 2274 AQM that can be inferred for various cases. 2276 +========+=======+========================+ 2277 | Rate | RTT | Inferred AQM | 2278 +========+=======+========================+ 2279 | L > C | L = C | Classic ECN AQM (FIFO) | 2280 +--------+-------+------------------------+ 2281 | L = C | L = C | Classic ECN AQM (FQ) | 2282 +--------+-------+------------------------+ 2283 | L = C | L < C | FQ-L4S AQM | 2284 +--------+-------+------------------------+ 2285 | L ~= C | L < C | Coupled DualQ AQM | 2286 +--------+-------+------------------------+ 2288 Table 2: Out-of-band testing with two 2289 parallel flows. L:=L4S, C:=Classic.
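As an informal illustration, the following sketch encodes the inferences of Table 2. It is a sketch only: it assumes some ECN marks have already been observed, and the tolerances used to distinguish '=' from '~=' are arbitrary choices of the example, not taken from [ecn-fallback].

   # Informal sketch only (not normative) of the Table 2 inferences.
   # rate_l/rate_c and rtt_l/rtt_c are the measured rates and RTTs of the
   # scalable (L) and Classic (C) test flows.  The tolerances are assumptions.

   def _close(a: float, b: float, tol: float) -> bool:
       return abs(a - b) <= tol * max(a, b)

   def infer_aqm(rate_l, rate_c, rtt_l, rtt_c,
                 tight=0.05, loose=0.25) -> str:
       if _close(rtt_l, rtt_c, tight) and rate_l > rate_c:
           return "Classic ECN AQM (FIFO)"
       if _close(rtt_l, rtt_c, tight) and _close(rate_l, rate_c, tight):
           return "Classic ECN AQM (FQ)"
       if rtt_l < rtt_c and _close(rate_l, rate_c, tight):
           return "FQ-L4S AQM"
       if rtt_l < rtt_c and _close(rate_l, rate_c, loose):
           return "Coupled DualQ AQM"
       return "inconclusive"

   # e.g. equal rates but a much lower RTT for the L flow suggests FQ-L4S.
   assert infer_aqm(10e6, 10e6, 0.005, 0.050) == "FQ-L4S AQM"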
2291 Finally, we motivate the recommendation in Section 4.3 that a 2292 scalable congestion control is not expected to change to setting 2293 ECT(0) while it adapts its behaviour to coexist with Classic flows. 2294 This is because the sender needs to continue to check whether it made 2295 the right decision - and switch back if it was wrong, or if a 2296 different link becomes the bottleneck: 2298 * If, as recommended, the sender changes only its behaviour but not 2299 its codepoint to Classic, its codepoint will still be compatible 2300 with either an L4S or a Classic AQM. If the bottleneck does 2301 actually support both, it will still classify ECT(1) into the same 2302 L4S queue, where the sender can measure that switching to Classic 2303 behaviour was wrong, so that it can switch back. 2305 * In contrast, if the sender changes both its behaviour and its 2306 codepoint to Classic, even if the bottleneck supports both, it 2307 will classify ECT(0) into the Classic queue, reinforcing the 2308 sender's incorrect decision so that it never switches back. 2310 * Also, not changing codepoint avoids the risk of being flipped to a 2311 different path by a load balancer or multipath routing that hashes 2312 on the whole of the ex-ToS byte (unfortunately still a common 2313 pathology). 2315 Note that if a flow is configured to _only_ use a Classic congestion 2316 control, it is then entirely appropriate not to use ECT(1). 2318 A.1.6. Reduce RTT dependence 2320 Description: A scalable congestion control needs to reduce RTT bias 2321 as much as possible at least over the low to typical range of RTTs 2322 that will interact in the intended deployment scenario (see the 2323 precise normative requirement wording in Section 4.3). 2325 Motivation: The throughput of Classic congestion controls is known to 2326 be inversely proportional to RTT, so one would expect flows over very 2327 low RTT paths to nearly starve flows over larger RTTs. However, 2328 Classic congestion controls have never allowed a very low RTT path to 2329 exist because they induce a large queue. For instance, consider two 2330 paths with base RTT 1 ms and 100 ms. If a Classic congestion control 2331 induces a 100 ms queue, it turns these RTTs into 101 ms and 200 ms 2332 leading to a throughput ratio of about 2:1. Whereas if a scalable 2333 congestion control induces only a 1 ms queue, the ratio is 2:101, 2334 leading to a throughput ratio of about 50:1. 2336 Therefore, with very small queues, long RTT flows will essentially 2337 starve, unless scalable congestion controls comply with this 2338 requirement in Section 4.3. 2340 The RTT bias in current Classic congestion controls works 2341 satisfactorily when the RTT is higher than typical, and L4S does not 2342 change that. So, there is no additional requirement in Section 4.3 2343 for high RTT L4S flows to remove RTT bias - they can but they don't 2344 have to. 2346 A.1.7. Scaling down to fractional congestion windows 2348 Description: A scalable congestion control needs to remain responsive 2349 to congestion when typical RTTs over the public Internet are 2350 significantly smaller because they are no longer inflated by queuing 2351 delay (see the precise normative requirement wording in Section 4.3). 2353 Motivation: As currently specified, the minimum congestion window of 2354 ECN-capable TCP (and its derivatives) is expected to be 2 sender 2355 maximum segment sizes (SMSS), or 1 SMSS after a retransmission 2356 timeout. 
Once the congestion window reaches this minimum, if there 2357 is further ECN-marking, TCP is meant to wait for a retransmission 2358 timeout before sending another segment (see Section 6.1.2 of 2359 [RFC3168]). In practice, most known window-based congestion control 2360 algorithms become unresponsive to ECN congestion signals at this 2361 point. No matter how much ECN marking, the congestion window no 2362 longer reduces. Instead, the sender's lack of any further congestion 2363 response forces the queue to grow, overriding any AQM and increasing 2364 queuing delay (making the window large enough to become responsive 2365 again). This can result in a stable but deeper queue, or it might 2366 drive the queue to loss, in which case the retransmission timeout mechanism 2367 acts as a backstop. 2369 Most window-based congestion controls for other transport protocols 2370 have a similar minimum window, albeit when measured in bytes for 2371 those that use smaller packets. 2373 L4S mechanisms significantly reduce queueing delay so, over the same 2374 path, the RTT becomes lower. Then this problem becomes surprisingly 2375 common [sub-mss-prob]. This is because, for the same link capacity, 2376 smaller RTT implies a smaller window. For instance, consider a 2377 residential setting with an upstream broadband Internet access of 8 2378 Mb/s, assuming a max segment size of 1500 B. Two upstream flows will 2379 each have the minimum window of 2 SMSS if the RTT is 6 ms or less, 2380 which is quite common when accessing a nearby data centre. So, any 2381 more than two such parallel TCP flows will become unresponsive to ECN 2382 and increase queuing delay. 2384 Unless scalable congestion controls address the requirement in 2385 Section 4.3 from the start, they will frequently become unresponsive 2386 to ECN, negating the low latency benefit of L4S, for themselves and 2387 for others. 2389 That would seem to imply that scalable congestion controllers ought 2390 to be required to be able to work with a congestion window less than 2391 1 SMSS. For instance, if an ECN-capable TCP gets an ECN-mark when it 2392 is already sitting at a window of 1 SMSS, RFC 3168 requires it to 2393 defer sending for a retransmission timeout. A less drastic but more 2394 complex mechanism can maintain a congestion window less than 1 SMSS 2395 (significantly less if necessary), as described in [Ahmed19]. Other 2396 approaches are likely to be feasible. 2398 However, the requirement in Section 4.3 is worded as a "SHOULD" 2399 because it is believed that the existence of a minimum window is not 2400 all bad. When competing with an unresponsive flow, a minimum window 2401 naturally protects the flow from starvation by at least keeping some 2402 data flowing. 2404 By stating the requirement to go lower than 1 SMSS as a "SHOULD", 2405 while the requirement in RFC 3168 still stands as well, we shall be 2406 able to watch the choices of minimum window evolve in different 2407 scalable congestion controllers. 2409 A.1.8. Measuring Reordering Tolerance in Time Units 2411 Description: When detecting loss, a scalable congestion control needs 2412 to be tolerant to reordering over an adaptive time interval, which 2413 scales with throughput, rather than counting only in fixed units of 2414 packets, which does not scale (see the precise normative requirement 2415 wording in Section 4.3). 2417 Motivation: A primary purpose of L4S is scalable throughput (it's in 2418 the name).
Scalability in all dimensions is, of course, also a goal 2419 of all IETF technology. The inverse linear congestion response in 2420 Section 4.3 is necessary, but not sufficient, to solve the congestion 2421 control scalability problem identified in [RFC3649]. As well as 2422 maintaining frequent ECN signals as rate scales, it is also important 2423 to ensure that a potentially false perception of loss does not limit 2424 throughput scaling. 2426 End-systems cannot know whether a missing packet is due to loss or 2427 reordering, except in hindsight - if it appears later. So they can 2428 only deem that there has been a loss if a gap in the sequence space 2429 has not been filled, either after a certain number of subsequent 2430 packets have arrived (e.g. the 3 DupACK rule of standard TCP 2431 congestion control [RFC5681]) or after a certain amount of time 2432 (e.g. the RACK approach [RFC8985]). 2434 As we attempt to scale packet rate over the years: 2436 * Even if only _some_ sending hosts still deem that loss has 2437 occurred by counting reordered packets, _all_ networks will have 2438 to keep reducing the time over which they keep packets in order. 2439 If some link technologies keep the time within which reordering 2440 occurs roughly unchanged, then loss over these links, as perceived 2441 by these hosts, will appear to continually rise over the years. 2443 * In contrast, if all senders detect loss in units of time, the time 2444 over which the network has to keep packets in order stays roughly 2445 invariant. 2447 Therefore hosts have an incentive to detect loss in time units (so as 2448 not to fool themselves too often into detecting losses when there are 2449 none). And for hosts that are changing their congestion control 2450 implementation to L4S, there is no downside to including time-based 2451 loss detection code in the change (loss recovery implemented in 2452 hardware is an exception, covered later). Therefore requiring L4S 2453 hosts to detect loss in time-based units would not be a burden. 2455 If the requirement in Section 4.3 were not placed on L4S hosts, even 2456 though it would be no burden on hosts to comply, all networks would 2457 face unnecessary uncertainty over whether some L4S hosts might be 2458 detecting loss by counting packets. Then _all_ link technologies 2459 would have to unnecessarily keep reducing the time within which 2460 reordering occurs. That is not a problem for some link technologies, 2461 but it becomes increasingly challenging for other link technologies 2462 to continue to scale, particularly those relying on channel bonding 2463 for scaling, such as LTE, 5G and DOCSIS. 2465 Given Internet paths traverse many link technologies, any scaling 2466 limit for these more challenging access link technologies would 2467 become a scaling limit for the Internet as a whole. 2469 It might be asked how it helps to place this loss detection 2470 requirement only on L4S hosts, because networks will still face 2471 uncertainty over whether non-L4S flows are detecting loss by counting 2472 DupACKs. The answer is that those link technologies for which it is 2473 challenging to keep squeezing the reordering time will only need to 2474 do so for non-L4S traffic (which they can do because the L4S 2475 identifier is visible at the IP layer). Therefore, they can focus 2476 their processing and memory resources on scaling non-L4S (Classic) 2477 traffic. Then, the higher the proportion of L4S traffic, the less of 2478 a scaling challenge they will have.
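As an illustration of the difference between the two approaches discussed above, the minimal sketch below (in Python) contrasts a fixed packet-count rule with a time-based reordering window. It is not the RACK algorithm of [RFC8985]; the function names and the example window fraction are hypothetical assumptions made purely for illustration.

   DUPACK_THRESHOLD = 3  # fixed packet-count rule of [RFC5681]

   def deemed_lost_by_count(dupacks):
       # Deems loss after a fixed number of out-of-order arrivals,
       # however little time those arrivals span as packet rates scale.
       return dupacks >= DUPACK_THRESHOLD

   def deemed_lost_by_time(gap_age, srtt, window_fraction=0.25):
       # Deems loss only once a sequence gap has remained unfilled for
       # a duration that scales with the RTT, so the reordering
       # tolerance granted to the network stays roughly invariant as
       # packet rates increase.
       return gap_age >= window_fraction * srtt

   # At 10 Gb/s with 1500 B packets, 3 back-to-back arrivals span only
   # about 3.6 microseconds, whereas a quarter of a 10 ms RTT still
   # allows the network 2.5 ms to restore order.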
2480 To summarize, there is no reason for L4S hosts not to be part of the 2481 solution instead of part of the problem. 2483 Requirement ("MUST") or recommendation ("SHOULD")? As explained 2484 above, this is a subtle interoperability issue between hosts and 2485 networks, which seems to need a "MUST". Unless networks can be 2486 certain that all L4S hosts follow the time-based approach, they still 2487 have to cater for the worst case - continually squeeze reordering 2488 into a smaller and smaller duration - just for hosts that might be 2489 using the counting approach. However, it was decided to express this 2490 as a recommendation, using "SHOULD". The main justification was that 2491 networks can still be fairly certain that L4S hosts will follow this 2492 recommendation, because following it offers only gain and no pain. 2494 Details: 2496 The speed of loss recovery is much more significant for short flows 2497 than long, therefore a good compromise is to adapt the reordering 2498 window; from a small fraction of the RTT at the start of a flow, to a 2499 larger fraction of the RTT for flows that continue for many round 2500 trips. 2502 This is broadly the approach adopted by TCP RACK (Recent 2503 ACKnowledgements) [RFC8985]. However, RACK starts with the 3 DupACK 2504 approach, because the RTT estimate is not necessarily stable. As 2505 long as the initial window is paced, such initial use of 3 DupACK 2506 counting would amount to time-based loss detection and therefore 2507 would satisfy the time-based loss detection recommendation of 2508 Section 4.3. This is because pacing of the initial window would 2509 ensure that 3 DupACKs early in the connection would be spread over a 2510 small fraction of the round trip. 2512 As mentioned above, hardware implementations of loss recovery using 2513 DupACK counting exist (e.g. some implementations of RoCEv2 for RDMA). 2514 For low latency, these implementations can change their congestion 2515 control to implement L4S, because the congestion control (as distinct 2516 from loss recovery) is implemented in software. But they cannot 2517 easily satisfy this loss recovery requirement. However, it is 2518 believed they do not need to, because such implementations are 2519 believed to solely exist in controlled environments, where the 2520 network technology keeps reordering extremely low anyway. This is 2521 why controlled environments with hardly any reordering are excluded 2522 from the scope of the normative recommendation in Section 4.3. 2524 Detecting loss in time units also prevents the ACK-splitting attacks 2525 described in [Savage-TCP]. 2527 A.2. Scalable Transport Protocol Optimizations 2529 A.2.1. Setting ECT in Control Packets and Retransmissions 2531 Description: This item concerns TCP and its derivatives (e.g. SCTP) 2532 as well as RTP/RTCP [RFC6679]. The original specification of ECN for 2533 TCP precluded the use of ECN on control packets and retransmissions. 2534 Similarly [RFC6679] precludes the use of ECT on RTCP datagrams, in 2535 case the path changes after it has been checked for ECN traversal. 2536 To improve performance, scalable transport protocols ought to enable 2537 ECN at the IP layer in TCP control packets (SYN, SYN-ACK, pure ACKs, 2538 etc.) and in retransmitted packets. The same is true for other 2539 transports, e.g. SCTP, RTCP. 2541 Motivation (TCP): RFC 3168 prohibits the use of ECN on these types of 2542 TCP packet, based on a number of arguments. 
This means these packets 2543 are not protected from congestion loss by ECN, which considerably 2544 harms performance, particularly for short flows. 2545 [I-D.ietf-tcpm-generalized-ecn] proposes experimental use of ECN on 2546 all types of TCP packet as long as AccECN feedback 2547 [I-D.ietf-tcpm-accurate-ecn] is available (which itself satisfies the 2548 accurate feedback requirement in Section 4.2 for using a scalable 2549 congestion control). 2551 Motivation (RTCP): L4S experiments in general will need to observe 2552 the rule in [RFC6679] that precludes ECT on RTCP datagrams. 2553 Nonetheless, as ECN usage becomes more widespread, it would be useful 2554 to conduct specific experiments with ECN-capable RTCP to gather data 2555 on whether such caution is necessary. 2557 A.2.2. Faster than Additive Increase 2559 Description: It would improve performance if scalable congestion 2560 controls did not limit their congestion window increase to the 2561 standard additive increase of 1 SMSS per round trip [RFC5681] during 2562 congestion avoidance. The same is true for derivatives of TCP 2563 congestion control, including similar approaches used for real-time 2564 media. 2566 Motivation: As currently defined [RFC8257], DCTCP uses the 2567 traditional Reno additive increase in congestion avoidance phase. 2568 When the available capacity suddenly increases (e.g. when another 2569 flow finishes, or if radio capacity increases) it can take very many 2570 round trips to take advantage of the new capacity. TCP Cubic 2571 [RFC8312] was designed to solve this problem, but as flow rates have 2572 continued to increase, the delay accelerating into available capacity 2573 has become prohibitive. See, for instance, the examples in 2574 Section 5.1 of [I-D.ietf-tsvwg-l4s-arch]. Even when out of its Reno- 2575 compatibility mode, every 8x scaling of Cubic's flow rate leads to 2x 2576 more acceleration delay. 2578 In the steady state, DCTCP induces about 2 ECN marks per round trip, 2579 so it is possible to quickly detect when these signals have 2580 disappeared and seek available capacity more rapidly, while 2581 minimizing the impact on other flows (Classic and scalable) 2582 [LinuxPacedChirping]. Alternatively, approaches such as Adaptive 2583 Acceleration (A2DTCP [A2DTCP]) have been proposed to address this 2584 problem in data centres, which might be deployable over the public 2585 Internet. 2587 A.2.3. Faster Convergence at Flow Start 2589 Description: It would improve performance if scalable congestion 2590 controls converged (reached their steady-state share of the capacity) 2591 faster than Classic congestion controls or at least no slower. This 2592 affects the flow start behaviour of any L4S congestion control 2593 derived from a Classic transport that uses TCP slow start, including 2594 those for real-time media. 2596 Motivation: As an example, a new DCTCP flow takes longer than a 2597 Classic congestion control to obtain its share of the capacity of the 2598 bottleneck when there are already ongoing flows using the bottleneck 2599 capacity. In a data centre environment DCTCP takes about a factor of 2600 1.5 to 2 longer to converge due to the much higher typical level of 2601 ECN marking that DCTCP background traffic induces, which causes new 2602 flows to exit slow start early [Alizadeh-stability]. 
In testing for 2603 use over the public Internet the convergence time of DCTCP relative 2604 to a regular loss-based TCP slow start is even less favourable 2606 [Paced-Chirping] due to the shallow ECN marking threshold needed for 2607 L4S. It is exacerbated by the typically greater mismatch between the 2608 link rate of the sending host and typical Internet access 2609 bottlenecks. This problem is detrimental in general, but would 2610 particularly harm the performance of short flows relative to Classic 2611 congestion controls. 2613 Appendix B. Compromises in the Choice of L4S Identifier 2615 This appendix is informative, not normative. As explained in 2616 Section 2, there is insufficient space in the IP header (v4 or v6) to 2617 fully accommodate every requirement. So the choice of L4S identifier 2618 involves tradeoffs. This appendix records the pros and cons of the 2619 choice that was made. 2621 Non-normative recap of the chosen codepoint scheme: 2623 Packets with ECT(1) and conditionally packets with CE signify L4S 2624 semantics as an alternative to the semantics of Classic ECN 2625 [RFC3168], specifically: 2627 - The ECT(1) codepoint signifies that the packet was sent by an 2628 L4S-capable sender. 2630 - Given shortage of codepoints, both L4S and Classic ECN sides of 2631 an AQM have to use the same CE codepoint to indicate that a 2632 packet has experienced congestion. If a packet that had 2633 already been marked CE in an upstream buffer arrived at a 2634 subsequent AQM, this AQM would then have to guess whether to 2635 classify CE packets as L4S or Classic ECN. Choosing the L4S 2636 treatment is a safer choice, because then a few Classic packets 2637 might arrive early, rather than a few L4S packets arriving 2638 late. 2640 - Additional information might be available if the classifier 2641 were transport-aware. Then it could classify a CE packet for 2642 Classic ECN treatment if the most recent ECT packet in the same 2643 flow had been marked ECT(0). However, the L4S service ought 2644 not to need transport-layer awareness. 2646 Cons: 2648 Consumes the last ECN codepoint: The L4S service could potentially 2649 supersede the service provided by Classic ECN, therefore using 2650 ECT(1) to identify L4S packets could ultimately mean that the 2651 ECT(0) codepoint was 'wasted' purely to distinguish one form of 2652 ECN from its successor. 2654 ECN hard in some lower layers: It is not always possible to support 2655 the equivalent of an IP-ECN field in an AQM acting in a buffer 2656 below the IP layer [I-D.ietf-tsvwg-ecn-encap-guidelines]. Then, 2657 depending on the lower layer scheme, the L4S service might have to 2658 drop rather than mark frames even though they might encapsulate an 2659 ECN-capable packet. 2661 Risk of reordering Classic CE packets within a flow: Classifying all 2662 CE packets into the L4S queue risks any CE packets that were 2663 originally ECT(0) being incorrectly classified as L4S. If there 2664 were delay in the Classic queue, these incorrectly classified CE 2665 packets would arrive early, which is a form of reordering. 2666 Reordering within a microflow can cause TCP senders (and senders 2667 of similar transports) to retransmit spuriously. However, the 2668 risk of spurious retransmissions would be extremely low for the 2669 following reasons: 2671 1. It is quite unusual to experience queuing at more than one 2672 bottleneck on the same path (the available capacities have to 2673 be identical). 2675 2. 
In only a subset of these unusual cases would the first 2676 bottleneck support Classic ECN marking while the second 2677 supported L4S ECN marking, which would be the only scenario 2678 where some ECT(0) packets could be CE marked by an AQM 2679 supporting Classic ECN, and then the remainder experienced further 2680 delay through the Classic side of a subsequent L4S DualQ AQM. 2682 3. Even then, when a few packets are delivered early, it takes 2683 very unusual conditions to cause a spurious retransmission, in 2684 contrast to when some packets are delivered late. The first 2685 bottleneck has to apply CE-marks to at least N contiguous 2686 packets and the second bottleneck has to inject an 2687 uninterrupted sequence of at least N of these packets between 2688 two packets earlier in the stream (where N is the reordering 2689 window that the transport protocol allows before it considers 2690 a packet is lost). 2692 For example, consider N=3, and consider the sequence of 2693 packets 100, 101, 102, 103,... and imagine that packets 2694 150, 151, 152 from later in the flow are injected as follows: 2695 100, 150, 151, 101, 152, 102, 103... If this were late 2696 reordering, even one packet arriving N or more packets late would 2697 trigger a spurious retransmission, but there is no spurious 2698 retransmission here with early reordering, because packet 2699 101 moves the cumulative ACK counter forward before 3 2700 packets have arrived out of order. Later, when packets 2701 148, 149, 153... arrive, even though there is a 3-packet 2702 hole, there will be no problem, because the packets to fill 2703 the hole are already in the receive buffer. 2705 4. Even with the current TCP recommendation of N=3 [RFC5681], 2706 spurious retransmissions will be unlikely for all the above 2707 reasons. As RACK [RFC8985] is becoming widely deployed, it 2708 tends to adapt its reordering window to a larger value of N, 2709 which will make the chance of a contiguous sequence of N early 2710 arrivals vanishingly small. 2712 5. Even a run of 2 CE marks within a Classic ECN flow is 2713 unlikely, given that FQ-CoDel is the only known widely deployed AQM 2714 that supports Classic ECN marking, and it takes great care to 2715 separate out flows and to space any markings evenly along each 2716 flow. 2718 It is extremely unlikely that the above set of 5 eventualities, 2719 each of which is unusual in itself, would all happen 2720 simultaneously. But, even if they did, the consequences would 2721 hardly be dire: the odd spurious fast retransmission. Whenever 2722 the traffic source (a Classic congestion control) mistakes the 2723 reordering of a string of CE marks for a loss, one might think 2724 that it will reduce its congestion window as well as emit a 2725 spurious retransmission. However, it would have already reduced 2726 its congestion window when the CE markings arrived early. If it 2727 is using ABE [RFC8511], it might reduce cwnd a little more for a 2728 loss than for a CE mark. But it will revert that reduction once 2729 it detects that the retransmission was spurious. 2731 In conclusion, the impact of early reordering on spurious 2732 retransmissions due to CE being ambiguous will generally be 2733 vanishingly small. 2735 Insufficient anti-replay window in some pre-existing VPNs: If delay 2736 is reduced for a subset of the flows within a VPN, the anti-replay 2737 feature of some VPNs is known to potentially mistake the 2738 difference in delay for a replay attack.
Section 6.2 recommends 2739 that the anti-replay window at the VPN egress be sufficiently 2740 sized, as required by the relevant specifications. However, in 2741 some VPN implementations the maximum anti-replay window is 2742 insufficient to cater for a large delay difference at prevailing 2743 packet rates. Section 6.2 suggests alternative work-rounds for 2744 such cases, but end-users using L4S over a VPN will need to be 2745 able to recognize the symptoms of this problem, in order to seek 2746 out these work-rounds. 2748 Hard to distinguish Classic ECN AQM: With this scheme, when a source 2749 receives ECN feedback, it is not explicitly clear which type of 2750 AQM generated the CE markings. This is not a problem for Classic 2751 ECN sources that send ECT(0) packets, because an L4S AQM will 2752 recognize the ECT(0) packets as Classic and apply the appropriate 2753 Classic ECN marking behaviour. 2755 However, in the absence of explicit disambiguation of the CE 2756 markings, an L4S source needs to use heuristic techniques to work 2757 out which type of congestion response to apply (see 2758 Appendix A.1.5). Otherwise, if long-running Classic flow(s) are 2759 sharing a Classic ECN AQM bottleneck with long-running L4S 2760 flow(s), which then apply an L4S response to Classic CE signals, 2761 the L4S flows would outcompete the Classic flow(s). Experiments 2762 have shown that L4S flows can take about 20 times more capacity 2763 share than equivalent Classic flows. Nonetheless, as link 2764 capacity reduces (e.g. to 4 Mb/s), the inequality reduces. So 2765 Classic flows always make progress and are not starved. 2767 When L4S was first proposed (in 2015, 14 years after [RFC3168] was 2768 published), it was believed that Classic ECN AQMs had failed to be 2769 deployed, because research measurements had found little or no 2770 evidence of CE marking. In subsequent years Classic ECN was 2771 included in per-flow-queuing (FQ) deployments; however, an FQ 2772 scheduler stops an L4S flow outcompeting Classic, because it 2773 enforces equality between flow rates. It is not known whether 2774 there have been any non-FQ deployments of Classic ECN AQMs in the 2775 subsequent years, or whether there will be in future. 2777 An algorithm for detecting a Classic ECN AQM as soon as a flow 2778 stabilizes after start-up has been proposed [ecn-fallback] (see 2779 Appendix A.1.5 for a brief summary). Testbed evaluations of v2 of 2780 the algorithm have shown detection is reasonably good for Classic 2781 ECN AQMs in a wide range of circumstances. However, although it 2782 can correctly detect an L4S ECN AQM in many circumstances, it is 2783 often incorrect at low link capacities and/or high RTTs. Although 2784 this is the safe way round, there is a danger that it will 2785 discourage use of the algorithm. 2787 Non-L4S service for control packets: Solely for the case of TCP, the 2788 Classic ECN RFCs [RFC3168] and [RFC5562] require a sender to clear 2789 the ECN field to Not-ECT on retransmissions and on certain control 2790 packets, specifically pure ACKs, window probes and SYNs. When L4S 2791 packets are classified by the ECN field, these TCP control packets 2792 would not be classified into an L4S queue, and could therefore be 2793 delayed relative to the other packets in the flow. This would not 2794 cause reordering (because retransmissions are already out of 2795 order, and these control packets typically carry no data).
2796 However, it would make critical TCP control packets more 2797 vulnerable to loss and delay. To address this problem, 2798 [I-D.ietf-tcpm-generalized-ecn] proposes an experiment in which 2799 all TCP control packets and retransmissions are ECN-capable as 2800 long as appropriate ECN feedback is available in each case. 2802 Pros: 2804 Should work e2e: The ECN field generally propagates end-to-end 2805 across the Internet without being wiped or mangled, at least over 2806 fixed networks. Unlike the DSCP, the setting of the ECN field is 2807 at least meant to be forwarded unchanged by networks that do not 2808 support ECN. 2810 Should work in tunnels: The L4S identifiers work across and within 2811 any tunnel that propagates the ECN field in any of the variant 2812 ways it has been defined since ECN-tunneling was first specified 2813 in the year 2001 [RFC3168]. However, it is likely that some 2814 tunnels still do not implement ECN propagation at all. 2816 Should work for many link technologies: At most, but not all, path 2817 bottlenecks there is IP-awareness, so that L4S AQMs can be located 2818 where the IP-ECN field can be manipulated. Bottlenecks at lower 2819 layer nodes without IP-awareness either have to use drop to signal 2820 congestion or a specific congestion notification facility has to 2821 be defined for that link technology, including propagation to and 2822 from IP-ECN. The programme to define these is progressing and in 2823 each case so far the scheme already defined for ECN inherently 2824 supports L4S as well (see Section 6.1). 2826 Could migrate to one codepoint: If all Classic ECN senders 2827 eventually evolve to use the L4S service, the ECT(0) codepoint 2828 could be reused for some future purpose, but only once use of 2829 ECT(0) packets had reduced to zero, or near-zero, which might 2830 never happen. 2832 L4 not required: Being based on the ECN field, this scheme does not 2833 need the network to access transport layer flow identifiers. 2834 Nonetheless, it does not preclude solutions that do. 2836 Appendix C. Potential Competing Uses for the ECT(1) Codepoint 2838 The ECT(1) codepoint of the ECN field has already been assigned once 2839 for the ECN nonce [RFC3540], which has now been categorized as 2840 historic [RFC8311]. ECN is probably the only remaining field in the 2841 Internet Protocol that is common to IPv4 and IPv6 and still has 2842 potential to work end-to-end, with tunnels and with lower layers. 2843 Therefore, ECT(1) should not be reassigned to a different 2844 experimental use (L4S) without carefully assessing competing 2845 potential uses. These fall into the following categories: 2847 C.1. Integrity of Congestion Feedback 2849 Receiving hosts can fool a sender into downloading faster by 2850 suppressing feedback of ECN marks (or of losses if retransmissions 2851 are not necessary or available otherwise). 2853 The historic ECN nonce protocol [RFC3540] proposed that a TCP sender 2854 could set either of ECT(0) or ECT(1) in each packet of a flow and 2855 remember the sequence it had set. If any packet was lost or 2856 congestion marked, the receiver would miss that bit of the sequence. 2857 An ECN Nonce receiver had to feed back the least significant bit of 2858 the sum, so it could not suppress feedback of a loss or mark without 2859 a 50-50 chance of guessing the sum incorrectly. 2861 It is highly unlikely that ECT(1) will be needed for integrity 2862 protection in future. 
The ECN Nonce RFC [RFC3540] has been 2863 reclassified as historic, partly because other ways have been 2864 developed to protect feedback integrity of TCP and other transports 2865 [RFC8311] that do not consume a codepoint in the IP header. For 2866 instance: 2868 * The sender can test the integrity of the receiver's feedback by 2869 occasionally setting the IP-ECN field to a value normally only set 2870 by the network. Then it can test whether the receiver's feedback 2871 faithfully reports what it expects (see para 2 of Section 20.2 of 2872 [RFC3168]). This works for loss and it will work for the accurate 2873 ECN feedback [RFC7560] intended for L4S. 2875 * A network can enforce a congestion response to its ECN markings 2876 (or packet losses) by auditing congestion exposure (ConEx) 2877 [RFC7713]. Whether the receiver or a downstream network is 2878 suppressing congestion feedback or the sender is unresponsive to 2879 the feedback, or both, ConEx audit can neutralise any advantage 2880 that any of these three parties would otherwise gain. 2882 * The TCP authentication option (TCP-AO [RFC5925]) can be used to 2883 detect any tampering with TCP congestion feedback (whether 2884 malicious or accidental). TCP's congestion feedback fields are 2885 immutable end-to-end, so they are amenable to TCP-AO protection, 2886 which covers the main TCP header and TCP options by default. 2887 However, TCP-AO is often too brittle to use on many end-to-end 2888 paths, where middleboxes can make verification fail in their 2889 attempts to improve performance or security, e.g. by 2890 resegmentation or shifting the sequence space. 2892 C.2. Notification of Less Severe Congestion than CE 2894 Various researchers have proposed to use ECT(1) as a less severe 2895 congestion notification than CE, particularly to enable flows to fill 2896 available capacity more quickly after an idle period, when another 2897 flow departs or when a flow starts, e.g. VCP [VCP], Queue View (QV) 2898 [QV]. 2900 Before assigning ECT(1) as an identifier for L4S, we must carefully 2901 consider whether it might be better to hold ECT(1) in reserve for 2902 future standardisation of rapid flow acceleration, which is an 2903 important and enduring problem [RFC6077]. 2905 Pre-Congestion Notification (PCN) is another scheme that assigns 2906 alternative semantics to the ECN field. It uses ECT(1) to signify a 2907 less severe level of pre-congestion notification than CE [RFC6660]. 2908 However, the ECN field only takes on the PCN semantics if packets 2909 carry a Diffserv codepoint defined to indicate PCN marking within a 2910 controlled environment. PCN is required to be applied solely to the 2911 outer header of a tunnel across the controlled region in order not to 2912 interfere with any end-to-end use of the ECN field. Therefore a PCN 2913 region on the path would not interfere with the L4S service 2914 identifier defined in Section 3. 2916 Authors' Addresses 2918 Koen De Schepper 2919 Nokia Bell Labs 2920 Antwerp 2921 Belgium 2923 Email: koen.de_schepper@nokia.com 2924 URI: https://www.bell-labs.com/usr/koen.de_schepper 2925 Bob Briscoe (editor) 2926 Independent 2927 United Kingdom 2929 Email: ietf@bobbriscoe.net 2930 URI: http://bobbriscoe.net/