2 Transport Area Working Group B. Briscoe, Ed. 3 Internet-Draft Simula Research Lab 4 Intended status: Informational K. De Schepper 5 Expires: May 4, 2017 Nokia Bell Labs 6 M.
Bagnulo Braun 7 Universidad Carlos III de Madrid 8 October 31, 2016 10 Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: 11 Architecture 12 draft-briscoe-tsvwg-l4s-arch-00 14 Abstract 16 This document describes the L4S architecture for the provision of a 17 new service that the Internet could provide to eventually replace 18 best efforts for all traffic: Low Latency, Low Loss, Scalable 19 throughput (L4S). It is becoming common for _all_ (or most) 20 applications run by a user at any one time to require low 21 latency. However, the only solution the IETF can offer for ultra-low 22 queuing delay is Diffserv, which only favours a minority of packets 23 at the expense of others. In extensive testing the new L4S service 24 keeps average queuing delay under a millisecond for _all_ 25 applications even under very heavy load, without sacrificing 26 utilization; and it keeps congestion loss to zero. It is becoming 27 widely recognized that adding more access capacity gives diminishing 28 returns, because latency is becoming the critical problem. Even with 29 a high capacity broadband access, the reduced latency of L4S 30 remarkably and consistently improves performance under load for 31 applications such as interactive video, conversational video, voice, 32 Web, gaming, instant messaging, remote desktop and cloud-based apps 33 (even when all are in use at once over the same access link). The 34 insight is that the root cause of queuing delay is in TCP, not in the 35 queue. By fixing the sending TCP (and other transports), queuing 36 latency becomes so much better than today that operators will want to 37 deploy the network part of L4S to enable new products and services. 38 Further, the network part is simple to deploy - incrementally with 39 zero-config. Both parts, sender and network, ensure coexistence with 40 other legacy traffic. At the same time L4S solves the long- 41 recognized problem with the future scalability of TCP throughput.
43 This document describes the L4S architecture, briefly describing the 44 different components and how they work together to provide the 45 aforementioned enhanced Internet service. 47 Status of This Memo 49 This Internet-Draft is submitted in full conformance with the 50 provisions of BCP 78 and BCP 79. 52 Internet-Drafts are working documents of the Internet Engineering 53 Task Force (IETF). Note that other groups may also distribute 54 working documents as Internet-Drafts. The list of current Internet- 55 Drafts is at http://datatracker.ietf.org/drafts/current/. 57 Internet-Drafts are draft documents valid for a maximum of six months 58 and may be updated, replaced, or obsoleted by other documents at any 59 time. It is inappropriate to use Internet-Drafts as reference 60 material or to cite them other than as "work in progress." 62 This Internet-Draft will expire on May 4, 2017. 64 Copyright Notice 66 Copyright (c) 2016 IETF Trust and the persons identified as the 67 document authors. All rights reserved. 69 This document is subject to BCP 78 and the IETF Trust's Legal 70 Provisions Relating to IETF Documents 71 (http://trustee.ietf.org/license-info) in effect on the date of 72 publication of this document. Please review these documents 73 carefully, as they describe your rights and restrictions with respect 74 to this document. Code Components extracted from this document must 75 include Simplified BSD License text as described in Section 4.e of 76 the Trust Legal Provisions and are provided without warranty as 77 described in the Simplified BSD License. 79 Table of Contents 81 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 82 2. L4S architecture overview . . . . . . . . . . . . . . . . . . 4 83 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 5 84 4. L4S architecture components . . . . . . . . . . . . . . . . . 7 85 5. Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . 8 86 5.1. Why These Primary Components? . . .
. . . . . . . . . . . 8 87 5.2. Why Not Alternative Approaches? . . . . . . . . . . . . . 10 88 6. Applicability statement . . . . . . . . . . . . . . . . . . . 11 89 6.1. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . 12 90 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13 91 8. Security Considerations . . . . . . . . . . . . . . . . . . . 13 92 8.1. Traffic (Non-)Policing . . . . . . . . . . . . . . . . . 13 93 8.2. 'Latency Friendliness' . . . . . . . . . . . . . . . . . 14 94 8.3. ECN Integrity . . . . . . . . . . . . . . . . . . . . . . 14 96 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 15 97 10. References . . . . . . . . . . . . . . . . . . . . . . . . . 15 98 10.1. Normative References . . . . . . . . . . . . . . . . . . 15 99 10.2. Informative References . . . . . . . . . . . . . . . . . 16 100 Appendix A. Required features for scalable transport protocols 101 to be safely deployable in the Internet (a.k.a. TCP 102 Prague requirements) . . . . . . . . . . . . . . . . 19 103 Appendix B. Standardization items . . . . . . . . . . . . . . . 23 104 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 25 106 1. Introduction 108 It is increasingly common for _all_ of a user's applications at any 109 one time to require low delay: interactive Web, Web services, voice, 110 conversational video, interactive video, instant messaging, online 111 gaming, remote desktop and cloud-based applications. In the last 112 decade or so, much has been done to reduce propagation delay by 113 placing caches or servers closer to users. However, queuing remains 114 a major, albeit intermittent, component of latency. When present, it 115 typically doubles the path delay relative to the base speed-of- 116 light delay. Low loss is also important because, for interactive 117 applications, losses translate into even longer retransmission 118 delays.
120 It has been demonstrated that, once access network bit rates reach 121 levels now common in the developed world, increasing capacity offers 122 diminishing returns if latency (delay) is not addressed. 123 Differentiated services (Diffserv) offers Expedited Forwarding 124 [RFC3246] for some packets at the expense of others, but this is not 125 applicable when all (or most) of a user's applications require low 126 latency. 128 Therefore, the goal is an Internet service with ultra-Low queueing 129 Latency, ultra-Low Loss and Scalable throughput (L4S) - for _all_ 130 traffic. This document describes the L4S architecture for achieving 131 that goal. 133 It must be said that queuing delay only degrades performance 134 infrequently [Hohlfeld14]. It only occurs when a large enough 135 capacity-seeking (e.g. TCP) flow is running alongside the user's 136 traffic in the bottleneck link, which is typically in the access 137 network, or when the low latency application is itself a large 138 capacity-seeking flow (e.g. interactive video). At these times, the 139 performance improvement must be so remarkable that network operators 140 will be motivated to deploy it. 142 Active Queue Management (AQM) is part of the solution to queuing 143 under load. AQM improves performance for all traffic, but there is a 144 limit to how much queuing delay can be reduced solely by changing the 145 network, without addressing the root of the problem. 147 The root of the problem is the presence of standard TCP congestion 148 control (Reno [RFC5681]) or compatible variants (e.g. TCP Cubic 149 [I-D.ietf-tcpm-cubic]). We shall call this family of congestion 150 controls 'Classic' TCP. It has been demonstrated that if the sending 151 host replaces Classic TCP with a 'Scalable' alternative, and a 152 suitable AQM is deployed in the network, the performance under load of 153 all the above interactive applications can be stunningly improved.
154 For instance, queuing delay under heavy load with the example DCTCP/ 155 DualQ solution cited below is roughly 1 millisecond (1 ms) at the 156 99th percentile without losing link utilization. This compares with 157 5 to 20 ms on _average_ with a Classic TCP and current state-of-the- 158 art AQMs such as fq_CoDel [I-D.ietf-aqm-fq-codel] or 159 PIE [I-D.ietf-aqm-pie]. Also, with a Classic TCP, 5 ms of queuing is 160 usually only possible by losing some utilization. 162 It has been convincingly demonstrated [DCttH15] that it is possible 163 to deploy such an L4S service alongside the existing best efforts 164 service so that all of a user's applications can shift to it when 165 their stack is updated. Access networks are typically designed with 166 one link as the bottleneck for each site (which might be a home, 167 small enterprise or mobile device), so deployment at a single node 168 should give nearly all the benefit. The L4S approach requires a 169 number of mechanisms in different parts of the Internet to fulfill 170 its goal. This document presents the L4S architecture by describing 171 the different components and how they interact to provide the 172 scalable low-latency, low-loss Internet service. 174 2. L4S architecture overview 176 There are three main components to the L4S architecture (illustrated 177 in Figure 1): 179 1) Network: The L4S service traffic needs to be isolated from the 180 queuing latency of the Classic service traffic. However, the two 181 should be able to freely share a common pool of capacity. This is 182 because there is no way to predict how many flows at any one time 183 might use each service and capacity in access networks is too 184 scarce to partition into two. So a 'semi-permeable' membrane is 185 needed that partitions latency but not bandwidth. The Dual Queue 186 Coupled AQM [I-D.briscoe-aqm-dualq-coupled] is an example of such 187 a semi-permeable membrane. 189 Per-flow queuing such as in [I-D.ietf-aqm-fq-codel] could be used, 190 but it partitions both latency and bandwidth between every end-to- 191 end flow. So it is rather overkill, which brings disadvantages 192 (see Section 5.2), not least that thousands of queues are needed 193 when two are sufficient. 195 2) Protocol: A host needs to distinguish L4S and Classic packets 196 with an identifier so that the network can classify them into 197 their separate treatments. [I-D.briscoe-tsvwg-ecn-l4s-id] 198 considers various alternative identifiers, and concludes that all 199 alternatives involve compromises, but the ECT(1) codepoint of the 200 ECN field is a workable solution. 202 3) Host: Scalable congestion controls already exist. They solve the 203 scaling problem with TCP first pointed out in [RFC3649]. The one 204 used most widely (in controlled environments) is Data Centre TCP 205 (DCTCP [I-D.ietf-tcpm-dctcp]), which has been implemented and 206 deployed in Windows Server Editions (since 2012), in Linux and in 207 FreeBSD. Although DCTCP as-is 'works' well over the public 208 Internet, most implementations lack certain safety features that 209 will be necessary once it is used outside controlled environments 210 like data centres (see later). A similar scalable congestion 211 control will also need to be transplanted into protocols other 212 than TCP (SCTP, RTP/RTCP, RMCAT, etc.). 214 (1) (2) 215 .-------^------. .--------------^-------------------. 216 ,-(3)-----. ______ 217 ; ________ : L4S --------. | | 218 :|Scalable| : _\ ||___\_| mark | 219 :| sender | : __________ / / || / |______|\ _________ 220 :|________|\; | |/ --------' ^ \1| | 221 `---------'\__| IP-ECN | Coupling : \|priority |_\ 222 ________ / |Classifier| : /|scheduler| / 223 |Classic |/ |__________|\ --------.
___:__ / |_________| 224 | sender | \_\ || | |||___\_| mark/|/ 225 |________| / || | ||| / | drop | 226 Classic --------' |______| 228 Figure 1: Components of an L4S Solution: 1) Isolation in separate 229 network queues; 2) Packet Identification Protocol; and 3) Scalable 230 Sending Host 232 3. Terminology 234 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 235 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 236 document are to be interpreted as described in [RFC2119]. In this 237 document, these words will appear with that interpretation only when 238 in ALL CAPS. Lower case uses of these words are not to be 239 interpreted as carrying RFC-2119 significance. COMMENT: Since this 240 will be an Informational document, this boilerplate should be removed. 242 Classic service: The 'Classic' service is intended for all the 243 congestion control behaviours that currently co-exist with TCP 244 Reno (e.g. TCP Cubic, Compound, SCTP, etc.). 246 Low-Latency, Low-Loss and Scalable (L4S) service: The 'L4S' service 247 is intended for traffic from scalable TCP algorithms such as Data 248 Centre TCP. But it is also more general--it will allow a set of 249 congestion controls with similar scaling properties to DCTCP (e.g. 250 Relentless [Mathis09]) to evolve. 252 Both Classic and L4S services can cope with a proportion of 253 unresponsive or less-responsive traffic as well (e.g. DNS, VoIP, 254 etc.). 256 Scalable Congestion Control: A congestion control where flow rate is 257 inversely proportional to the level of congestion signals. Then, 258 as flow rate scales, the number of congestion signals per round 259 trip remains invariant, maintaining the same degree of control. 260 For instance, DCTCP averages 2 congestion signals per round-trip 261 whatever the flow rate. 263 Classic Congestion Control: A congestion control with a flow rate 264 compatible with standard TCP Reno [RFC5681].
With Classic 265 congestion controls, as capacity increases enabling higher flow 266 rates, the number of round trips between congestion signals 267 (losses or ECN marks) rises in proportion to the flow rate. So 268 control of queuing and/or utilization becomes very slack. For 269 instance, with 1500 B packets and an RTT of 18 ms, as TCP Reno 270 flow rate increases from 2 to 100 Mb/s the number of round trips 271 between congestion signals rises proportionately, from 2 to 100. 273 The default congestion control in Linux (TCP Cubic) is Reno- 274 compatible for most scenarios expected for some years. For 275 instance, with a typical domestic round-trip time (RTT) of 18 ms, 276 TCP Cubic only switches out of Reno-compatibility mode once the 277 flow rate approaches 1 Gb/s. For a typical data centre RTT of 1 278 ms, the switch-over point is theoretically 1.3 Tb/s. However, 279 with a less common transcontinental RTT of 100 ms, it only remains 280 Reno-compatible up to 13 Mb/s. All examples assume 1,500 B 281 packets. 283 Classic ECN: The original proposed standard Explicit Congestion 284 Notification (ECN) protocol [RFC3168], which requires ECN signals 285 to be treated the same as drops, both when generated in the 286 network and when responded to by the sender. 288 Site: A home, mobile device, small enterprise or campus, where the 289 network bottleneck is typically the access link to the site. Not 290 all network arrangements fit this model but it is a useful, widely 291 applicable generalisation. 293 4. L4S architecture components 295 The L4S architecture is composed of the following elements. 297 Protocols: The L4S architecture encompasses the two protocols we 298 describe next: 300 a. [I-D.briscoe-tsvwg-ecn-l4s-id] recommends that ECT(1) be used as the 301 identifier to classify L4S and Classic packets into their 302 separate treatments, as required by [RFC4774].
The draft also 303 points out that the original experimental assignment of this 304 codepoint as an ECN nonce [RFC3540] needs to be made obsolete (it 305 was never deployed, and it offers no security benefit now that 306 deployment is optional). 308 b. An essential aspect of a scalable congestion control is the use 309 of explicit congestion signals rather than losses, because the 310 signals need to be sent immediately and frequently--too often to 311 use drops. 'Classic' ECN [RFC3168] requires an ECN signal to be 312 treated the same as a drop, both when it is generated in the 313 network and when it is responded to by hosts. L4S allows 314 networks and hosts to support two separate meanings for ECN. So 315 the standards track [RFC3168] will need to be updated to allow 316 ECT(1) packets to depart from the 'same as drop' constraint. 318 Network components: The Dual Queue Coupled AQM has been specified as 319 generically as possible [I-D.briscoe-aqm-dualq-coupled] as a 'semi- 320 permeable' membrane without specifying the particular AQMs to use in 321 the two queues. An informational appendix of the draft provides 322 pseudocode examples of different possible AQM approaches. 323 Initially, a zero-config variant of RED called Curvy RED was 324 implemented, tested and documented. A variant of PIE has been 325 implemented and tested and is about to be documented. The aim is for 326 designers to be free to implement diverse ideas. So the brief 327 normative body of the draft only specifies the minimum constraints an 328 AQM needs to comply with to ensure that the L4S and Classic services 329 will coexist. 331 Host mechanisms: The L4S architecture includes a number of mechanisms 332 in the end host that we enumerate next: 334 a. Data Centre TCP is the most widely used example of a scalable 335 congestion control. It is being documented in the TCPM WG as an 336 informational record of the protocol currently in use 337 [I-D.ietf-tcpm-dctcp].
It will be necessary to define a number 338 of safety features for a variant usable on the public Internet. 339 A draft list of these, known as the TCP Prague requirements, has 340 been drawn up (see Appendix A). 342 b. Transport protocols other than TCP use various congestion 343 controls designed to be friendly with Classic TCP. It will be 344 necessary to implement scalable variants of each of these 345 transport behaviours before they can use the L4S service. The 346 following standards track RFCs currently define these protocols, 347 and they will need to be updated to allow a different congestion 348 response, which they will have to indicate by using the ECT(1) 349 codepoint: ECN in TCP [RFC3168], in SCTP [RFC4960], in RTP 350 [RFC6679], and in DCCP [RFC4340]. 352 c. ECN feedback is sufficient for L4S in some transport protocols 353 (RTCP, DCCP) but not others: 355 * For the case of TCP, the feedback protocol for ECN embeds the 356 assumption from Classic ECN that it is the same as drop, 357 making it unusable for a scalable TCP. Therefore, the 358 implementation of TCP receivers will have to be upgraded 359 [RFC7560]. Work to standardize more accurate ECN feedback for 360 TCP (AccECN [I-D.ietf-tcpm-accurate-ecn]) is already in 361 progress. 363 * ECN feedback is only roughly sketched in an appendix of the 364 SCTP specification. A fuller specification has been proposed 365 [I-D.stewart-tsvwg-sctpecn], which would need to be 366 implemented and deployed. 368 5. Rationale 370 5.1. Why These Primary Components? 372 Explicit congestion signalling (protocol): Explicit congestion 373 signalling is a key part of the L4S approach. In contrast, use of 374 drop as a congestion signal creates a tension because drop is both 375 a useful signal (more drops would reduce queuing delay) and an 376 impairment (fewer drops would reduce retransmission delay). Explicit 377 congestion signals can be used many times per round trip, to keep 378 tight control, without any impairment.
Under heavy load, even more explicit signals can be 379 applied so the queue can be kept short whatever the load, whereas 380 state-of-the-art AQMs have to introduce very high packet drop at 381 high load to keep the queue short. Further, TCP's sawtooth 382 reductions can be smaller, so the flow returns to its operating 383 point more often, without worrying that this causes more signals 384 (one at the top of each smaller sawtooth). The consequent smaller 385 amplitude sawteeth fit between a very shallow marking threshold 386 and an empty queue, so delay variation can be very low, without 387 risk of under-utilization. 389 All the above makes it clear that explicit congestion signalling 390 is only advantageous for latency if it does not have to be 391 considered 'the same as' drop (as required with Classic ECN 392 [RFC3168]). Before Classic ECN was standardized, there were 393 various proposals to give an ECN mark a different meaning from 394 drop. However, there was no particular reason to agree on any one 395 of the alternative meanings, so 'the same as drop' was the only 396 compromise that could be reached. RFC 3168 contains a statement 397 that: 399 "An environment where all end nodes were ECN-Capable could 400 allow new criteria to be developed for setting the CE 401 codepoint, and new congestion control mechanisms for end-node 402 reaction to CE packets. However, this is a research issue, and 403 as such is not addressed in this document." 405 Latency isolation with coupled congestion notification (network): 406 Using just two queues is not essential to L4S (more would be 407 possible), but it is the simplest way to isolate all the L4S 408 traffic that keeps latency low from all the legacy Classic traffic 409 that does not.
411 Similarly, coupling the congestion notification between the queues 412 is not necessarily essential, but it is a clever and simple way to 413 allow senders to determine their rate, packet-by-packet, rather 414 than be overridden by a network scheduler. Otherwise, a 415 network scheduler would have to inspect at least transport layer 416 headers, and it would have to continually assign a rate to each 417 flow without any easy way to understand application intent. 419 L4S packet identifier (protocol): Once there are at least two 420 separate treatments in the network, hosts need an identifier at 421 the IP layer to distinguish which treatment they intend to use. 423 Scalable congestion notification (host): A scalable congestion 424 control keeps the signalling frequency high so that rate 425 variations can be small when signalling is stable, and rate can 426 track variations in available capacity as rapidly as possible 427 otherwise. 429 5.2. Why Not Alternative Approaches? 431 All the following approaches address some part of the same problem 432 space as L4S. In each case, it is shown that L4S complements them or 433 improves on them, rather than being a mutually exclusive alternative: 435 Diffserv: Diffserv addresses the problem of bandwidth apportionment 436 for important traffic as well as queuing latency for delay- 437 sensitive traffic. L4S solely addresses the problem of queuing 438 latency. Diffserv will still be necessary where important traffic 439 requires priority (e.g. for commercial reasons, or for protection 440 of critical infrastructure traffic). Nonetheless, if there are 441 Diffserv classes for important traffic, the L4S approach can 442 provide low latency for _all_ traffic within each Diffserv class 443 (including the case where there is only one Diffserv class). 445 Also, as already explained, Diffserv only works for a small subset 446 of the traffic on a link.
It is not applicable when all the 447 applications in use at one time at a single site (home, small 448 business or mobile device) require low latency. Also, because L4S 449 is for all traffic, it needs none of the management baggage 450 (traffic policing, traffic contracts) associated with favouring 451 some packets over others. This baggage has held Diffserv back 452 from widespread end-to-end deployment. 454 State-of-the-art AQMs: AQMs such as PIE and fq_CoDel give a 455 significant reduction in queuing delay relative to no AQM at all. 456 The L4S work is intended to complement these AQMs, and we 457 definitely do not want to distract from the need to deploy them as 458 widely as possible. Nonetheless, without addressing the large 459 saw-toothing rate variations of Classic congestion controls, AQMs 460 alone cannot reduce queuing delay too far without significantly 461 reducing link utilization. The L4S approach resolves this tension 462 by ensuring hosts can minimize the size of their sawteeth without 463 appearing so aggressive to legacy flows that the legacy flows starve. 465 Per-flow queuing: Similarly, per-flow queuing is not incompatible 466 with the L4S approach. However, one queue for every flow can be 467 thought of as overkill compared to the minimum of two queues for 468 all traffic needed for the L4S approach. The overkill of per-flow 469 queuing has side-effects: 471 A. fq makes high performance networking equipment costly 472 (processing and memory) - in contrast dual queue code can be 473 very simple; 475 B. fq requires packet inspection into the end-to-end transport 476 layer, which doesn't sit well alongside encryption for privacy 477 - in contrast a dual queue only operates at the IP layer; 479 C. fq decides packet-by-packet which flow to schedule without 480 knowing application intent. In contrast, in the L4S approach 481 the sender still controls the relative rate of each flow 482 dependent on the needs of each application.
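The contrast in items A and B above can be made concrete with a short sketch. This is illustrative only, not the normative pseudocode of [I-D.briscoe-aqm-dualq-coupled]: the function names are invented here, and the coupling law (L4S marked at k * p' while Classic is dropped at p' squared, with an example coupling factor k = 2) is an assumption based on the approach described in that draft and in [I-D.briscoe-tsvwg-ecn-l4s-id].

```python
# Illustrative sketch (not normative pseudocode) of how simple dual
# queue code can be.  Classification uses only the 2-bit IP-ECN field,
# and the coupled AQM derives both signals from one congestion level.

NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11  # IP-ECN codepoints

def classify(ecn_bits: int) -> str:
    """Pick a queue from the IP header alone -- no ports, no flow
    hash, no transport-layer inspection (contrast with fq)."""
    # ECT(1) identifies L4S; CE is classified to L4S too, since it may
    # have started as ECT(1) and been marked inside the network.
    return "L4S" if ecn_bits in (ECT1, CE) else "Classic"

K = 2.0  # example coupling factor between the two queues (assumed)

def coupled_probabilities(p_base: float) -> tuple[float, float]:
    """From one base congestion level p', derive the L4S ECN-marking
    probability and the Classic drop probability.  A Classic flow's
    rate varies roughly as 1/sqrt(p_drop) and a scalable flow's as
    1/p_mark, so marking linearly while dropping with the square lets
    the two families share capacity roughly equally."""
    p_mark = min(K * p_base, 1.0)   # L4S queue: frequent, cheap signals
    p_drop = min(p_base ** 2, 1.0)  # Classic queue: rare, costly signals
    return p_mark, p_drop
```

For example, at a base congestion level p' of 0.1, L4S packets would be marked with probability 0.2 while Classic packets are dropped with probability only 0.01, illustrating how the L4S queue can signal early and often without imposing loss.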
484 Alternative Back-off ECN (ABE): Yet again, L4S is not an alternative 485 to ABE but a complement that introduces much lower queuing delay. 486 ABE [I-D.khademi-tcpm-alternativebackoff-ecn] alters the host 487 behaviour in response to ECN marking to utilize a link better and 488 give ECN flows faster throughput, but it assumes the network 489 still treats ECN and drop the same. Therefore ABE exploits any 490 lower queuing delay that AQMs can provide. But as explained 491 above, AQMs still cannot reduce queuing delay too far without 492 losing link utilization (for other non-ABE flows). 494 6. Applicability statement 496 A transport layer that solves the current latency issues will provide 497 new service, product and application opportunities. 499 With the L4S approach, the following existing applications will 500 immediately experience significantly better quality of experience 501 under load in the best effort class: 503 o Gaming 505 o VoIP 507 o Video conferencing 509 o Web browsing 511 o (Adaptive) video streaming 513 o Instant messaging 515 The significantly lower queuing latency also enables some interactive 516 application functions that would hardly even be usable today to be 517 offloaded to the cloud: 519 o Cloud based interactive video 521 o Cloud based virtual and augmented reality 522 The above two applications have been successfully demonstrated with 523 L4S, both running together over a 40 Mb/s broadband access link 524 loaded up with the numerous other latency sensitive applications in 525 the previous list as well as numerous downloads. A panoramic video 526 of a football stadium can be swiped and pinched so that on the fly a 527 proxy in the cloud generates a sub-window of the match video under 528 the finger-gesture control of each user. At the same time, a virtual 529 reality headset fed from a 360 degree camera in a racing car has been 530 demonstrated, where the user's head movements control the scene 531 generated in the cloud.
In both cases, with 7 ms end-to-end base 532 delay, the additional queuing delay of roughly 1 ms is so low that it 533 seems the video is generated locally. See https://riteproject.eu/ 534 dctth/ for videos of these demonstrations. 536 Using a swiping finger gesture or head movement to pan a video is 537 extremely demanding--far more demanding than VoIP--538 because human vision can detect extremely low delays of the order of 539 single milliseconds when delay is translated into a visual lag 540 between a video and a reference point (the finger or the orientation 541 of the head). 543 If low network delay is not available, all fine interaction has to be 544 done locally and therefore much more redundant data has to be 545 downloaded. When all interactive processing can be done in the 546 cloud, only the data to be rendered for the end user needs to be 547 sent. Moreover, once applications can rely on minimal queues in the 548 network, they can focus on reducing their own latency by 549 minimizing only the application send queue. 551 6.1. Use Cases 553 The following use-cases for L4S are being considered by various 554 interested parties: 556 o Where the bottleneck is one of various types of access network: 557 DSL, cable, mobile, satellite 559 * Radio links (cellular, WiFi) that are distant from the source 560 are particularly challenging. The radio link capacity can vary 561 rapidly by orders of magnitude, so it is often desirable to 562 hold a buffer to utilise sudden increases of capacity; 564 * Cellular networks are further complicated by a perceived need 565 to buffer in order to make hand-overs imperceptible; 567 * Satellite networks generally have a very large base RTT, so 568 even with minimal queuing, overall delay can never be extremely 569 low; 571 * Nonetheless, it is certainly desirable not to hold a buffer 572 purely because of the sawteeth of Classic TCP, when it is more 573 than is needed for all the above reasons.
575 o Private networks of heterogeneous data centres, where there is no 576 single administrator that can arrange for all the simultaneous 577 changes to senders, receivers and network needed to deploy DCTCP: 579 * a set of private data centres interconnected over a wide area 580 with separate administrations, but within the same company 582 * a set of data centres operated by separate companies 583 interconnected by a community of interest network (e.g. for the 584 finance sector) 586 * multi-tenant (cloud) data centres where tenants choose their 587 operating system stack (Infrastructure as a Service - IaaS) 589 o Different types of transport (or application) congestion control: 591 * elastic (TCP/SCTP); 593 * real-time (RTP, RMCAT); 595 * query (DNS/LDAP). 597 o Where low delay quality of service is required, but without 598 inspecting or intervening above the IP layer 599 [I-D.you-encrypted-traffic-management]: 601 * mobile and other networks have tended to inspect higher layers 602 in order to guess application QoS requirements. However, with 603 growing demand for support of privacy and encryption, L4S 604 offers an alternative. There is no need to select which 605 traffic to favour for queuing, when L4S gives favourable 606 queuing to all traffic. 608 7. IANA Considerations 610 This specification contains no IANA considerations. 612 8. Security Considerations 614 8.1. Traffic (Non-)Policing 616 Because the L4S service can serve all traffic that is using the 617 capacity of a link, it should not be necessary to police access to 618 the L4S service. In contrast, Diffserv only works if some packets 619 get less favourable treatment than others. So it has to use traffic 620 policers to limit how much traffic can be favoured. In turn, traffic 621 policers require traffic contracts between users and networks as well 622 as pairwise between networks. Because L4S will lack all this 623 management complexity, it is more likely to work end-to-end.
   During early deployment (and perhaps always), some networks will
   not offer the L4S service.  These networks do not need to police or
   re-mark L4S traffic - they can simply forward it unchanged as best
   efforts traffic, as they would already forward traffic with ECT(1)
   today.  At a bottleneck, such networks will introduce some queuing
   and dropping.  When a scalable congestion control detects a drop,
   it will have to respond as if it were a Classic congestion control
   (see item 4-1 in Appendix A).  This will ensure safe interworking
   with other traffic at the 'legacy' bottleneck.

   Certain network operators might choose to restrict access to the
   L4S class, perhaps only to customers who have paid a premium.  In
   the packet classifier (item 2 in Figure 1), they could identify
   such customers using a field other than ECN (e.g. source address
   range), and simply ignore the L4S identifier for non-paying
   customers.  This would ensure that the L4S identifier survives
   end-to-end even though the service does not have to be supported at
   every hop.  Such arrangements would only require simple
   registered/not-registered packet classification, rather than the
   managed, application-specific traffic policing against
   customer-specific traffic contracts that Diffserv requires.

8.2.  'Latency Friendliness'

   The L4S service does rely on self-constraint - not in terms of
   limiting capacity usage, but in terms of limiting burstiness.  It
   is believed that standardisation of dynamic behaviour (cf. TCP
   slow-start) and self-interest will be sufficient to prevent
   transports from sending excessive bursts of L4S traffic, given that
   the application's own latency will suffer most from such behaviour.

   Whether burst policing becomes necessary remains to be seen.
   Without it, there will be potential for attacks on the low latency
   of the L4S service.
   However, it may only be necessary to apply such policing
   reactively, e.g. punitively targeted at any deployments of new
   bursty malware.

8.3.  ECN Integrity

   Receiving hosts can fool a sender into downloading faster by
   suppressing feedback of ECN marks (or of losses, if retransmissions
   are not necessary or are otherwise available).  [RFC3540] proposes
   that a TCP sender could pseudorandomly set either ECT(0) or ECT(1)
   in each packet of a flow and remember the sequence it had set,
   termed the ECN nonce.  If the receiver supports the nonce, it can
   prove that it is not suppressing feedback by reflecting its
   knowledge of the sequence back to the sender.  The nonce was
   proposed on the assumption that receivers might be more likely to
   cheat congestion control than senders (although senders also have a
   motive to cheat).

   If L4S uses the ECT(1) codepoint of ECN for packet classification,
   it will have to obsolete the experimental nonce.  As far as is
   known, the ECN nonce has never been deployed, and it was only
   implemented for a couple of testbed evaluations.  It would be
   nearly impossible to deploy now, because any misbehaving receiver
   can simply opt out, which would be unremarkable given that all
   receivers currently opt out.

   Other ways to protect TCP feedback integrity have since been
   developed.  For instance:

   o  the sender can test the integrity of the receiver's feedback by
      occasionally setting the IP-ECN field to a value normally only
      set by the network, and then testing whether the receiver's
      feedback faithfully reports what it expects
      [I-D.moncaster-tcpm-rcv-cheat].  This method consumes no extra
      codepoints.  It works for loss, and it will work for ECN
      feedback in any transport protocol suitable for L4S.
      However, it shares the same assumption as the nonce: that the
      sender is not cheating and is motivated to prevent the receiver
      from cheating;

   o  a network can enforce a congestion response to its ECN markings
      (or packet losses) by auditing congestion exposure (ConEx)
      [RFC7713].  Whether the receiver or a downstream network is
      suppressing congestion feedback, or the sender is unresponsive
      to the feedback, or both, ConEx audit can neutralise any
      advantage that any of these three parties would otherwise gain.
      ConEx is currently only defined for IPv6 and consumes a
      destination option header.  It has been implemented, but not
      deployed as far as is known.

9.  Acknowledgements

10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997.

10.2.  Informative References

   [Alizadeh-stability]
              Alizadeh, M., Javanmard, A., and B. Prabhakar, "Analysis
              of DCTCP: Stability, Convergence, and Fairness", ACM
              SIGMETRICS 2011, June 2011.

   [DCttH15]  De Schepper, K., Bondarenko, O., Briscoe, B., and I.
              Tsang, "'Data Centre to the Home': Ultra-Low Latency for
              All", 2015.  (Under submission)

   [Hohlfeld14]
              Hohlfeld, O., Pujol, E., Ciucu, F., Feldmann, A., and P.
              Barford, "A QoE Perspective on Sizing Network Buffers",
              Proc. ACM Internet Measurement Conf (IMC'14), November
              2014.

   [I-D.briscoe-aqm-dualq-coupled]
              De Schepper, K., Briscoe, B., Bondarenko, O., and I.
              Tsang, "DualQ Coupled AQM for Low Latency, Low Loss and
              Scalable Throughput", draft-briscoe-aqm-dualq-coupled-01
              (work in progress), March 2016.

   [I-D.briscoe-tsvwg-ecn-l4s-id]
              De Schepper, K., Briscoe, B., and I.
              Tsang, "Identifying Modified Explicit Congestion
              Notification (ECN) Semantics for Ultra-Low Queuing
              Delay", draft-briscoe-tsvwg-ecn-l4s-id-02 (work in
              progress), October 2016.

   [I-D.ietf-aqm-fq-codel]
              Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys,
              J., and E. Dumazet, "The FlowQueue-CoDel Packet
              Scheduler and Active Queue Management Algorithm",
              draft-ietf-aqm-fq-codel-06 (work in progress), March
              2016.

   [I-D.ietf-aqm-pie]
              Pan, R., Natarajan, P., Baker, F., and G. White, "PIE: A
              Lightweight Control Scheme To Address the Bufferbloat
              Problem", draft-ietf-aqm-pie-10 (work in progress),
              September 2016.

   [I-D.ietf-tcpm-accurate-ecn]
              Briscoe, B., Kuehlewind, M., and R. Scheffenegger, "More
              Accurate ECN Feedback in TCP", draft-ietf-tcpm-accurate-
              ecn-02 (work in progress), October 2016.

   [I-D.ietf-tcpm-cubic]
              Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L.,
              and R. Scheffenegger, "CUBIC for Fast Long-Distance
              Networks", draft-ietf-tcpm-cubic-02 (work in progress),
              August 2016.

   [I-D.ietf-tcpm-dctcp]
              Bensley, S., Eggert, L., Thaler, D., Balasubramanian,
              P., and G. Judd, "Datacenter TCP (DCTCP): TCP Congestion
              Control for Datacenters", draft-ietf-tcpm-dctcp-02 (work
              in progress), July 2016.

   [I-D.khademi-tcpm-alternativebackoff-ecn]
              Khademi, N., Welzl, M., Armitage, G., and G. Fairhurst,
              "TCP Alternative Backoff with ECN (ABE)", draft-khademi-
              tcpm-alternativebackoff-ecn-01 (work in progress),
              October 2016.

   [I-D.moncaster-tcpm-rcv-cheat]
              Moncaster, T., Briscoe, B., and A. Jacquet, "A TCP Test
              to Allow Senders to Identify Receiver Non-Compliance",
              draft-moncaster-tcpm-rcv-cheat-03 (work in progress),
              July 2014.

   [I-D.stewart-tsvwg-sctpecn]
              Stewart, R., Tuexen, M., and X. Dong, "ECN for Stream
              Control Transmission Protocol (SCTP)", draft-stewart-
              tsvwg-sctpecn-05 (work in progress), January 2014.

   [I-D.you-encrypted-traffic-management]
              You, J. and C. Xiong, "The Effect of Encrypted Traffic
              on the QoS Mechanisms in Cellular Networks", draft-you-
              encrypted-traffic-management-00 (work in progress),
              October 2015.

   [Mathis09] Mathis, M., "Relentless Congestion Control", PFLDNeT'09,
              May 2009.

   [NewCC_Proc]
              Eggert, L., "Experimental Specification of New
              Congestion Control Algorithms", IETF Operational Note
              ion-tsv-alt-cc, July 2007.

   [RFC3168]  Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
              of Explicit Congestion Notification (ECN) to IP",
              RFC 3168, DOI 10.17487/RFC3168, September 2001.

   [RFC3246]  Davie, B., Charny, A., Bennet, J., Benson, K., Le
              Boudec, J., Courtney, W., Davari, S., Firoiu, V., and D.
              Stiliadis, "An Expedited Forwarding PHB (Per-Hop
              Behavior)", RFC 3246, DOI 10.17487/RFC3246, March 2002.

   [RFC3540]  Spring, N., Wetherall, D., and D. Ely, "Robust Explicit
              Congestion Notification (ECN) Signaling with Nonces",
              RFC 3540, DOI 10.17487/RFC3540, June 2003.

   [RFC3649]  Floyd, S., "HighSpeed TCP for Large Congestion Windows",
              RFC 3649, DOI 10.17487/RFC3649, December 2003.

   [RFC4340]  Kohler, E., Handley, M., and S. Floyd, "Datagram
              Congestion Control Protocol (DCCP)", RFC 4340,
              DOI 10.17487/RFC4340, March 2006.

   [RFC4774]  Floyd, S., "Specifying Alternate Semantics for the
              Explicit Congestion Notification (ECN) Field", BCP 124,
              RFC 4774, DOI 10.17487/RFC4774, November 2006.

   [RFC4960]  Stewart, R., Ed., "Stream Control Transmission
              Protocol", RFC 4960, DOI 10.17487/RFC4960, September
              2007.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, DOI 10.17487/RFC5681, September
              2009.

   [RFC6679]  Westerlund, M., Johansson, I., Perkins, C., O'Hanlon,
              P., and K.
              Carlberg, "Explicit Congestion Notification (ECN) for
              RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August
              2012.

   [RFC7560]  Kuehlewind, M., Ed., Scheffenegger, R., and B. Briscoe,
              "Problem Statement and Requirements for Increased
              Accuracy in Explicit Congestion Notification (ECN)
              Feedback", RFC 7560, DOI 10.17487/RFC7560, August 2015.

   [RFC7713]  Mathis, M. and B. Briscoe, "Congestion Exposure (ConEx)
              Concepts, Abstract Mechanism, and Requirements",
              RFC 7713, DOI 10.17487/RFC7713, December 2015.

   [TCP-sub-mss-w]
              Briscoe, B. and K. De Schepper, "Scaling TCP's
              Congestion Window for Small Round Trip Times", BT
              Technical Report TR-TUB8-2015-002, May 2015.

   [TCPPrague]
              Briscoe, B., "Notes: DCTCP evolution 'bar BoF': Tue 21
              Jul 2015, 17:40, Prague", tcpprague mailing list
              archive, July 2015.

Appendix A.  Required features for scalable transport protocols to be
             safely deployable in the Internet (a.k.a. TCP Prague
             requirements)

   This appendix lists the features, mechanisms and modifications to
   currently defined behaviour that scalable transport protocols
   require so that they can be safely deployed over the public
   Internet.  This list of requirements was produced at an ad hoc
   meeting during IETF-94 in Prague [TCPPrague].

   One such scalable transport protocol is DCTCP, currently specified
   in [I-D.ietf-tcpm-dctcp].  In its current form, DCTCP is specified
   to be deployable only in controlled environments, and deploying it
   in the public Internet would lead to a number of issues, from both
   the safety and the performance perspectives.  In this appendix, we
   describe the modifications and additional mechanisms that are
   required for its deployment over the global Internet.  We use DCTCP
   as a base, but it is likely that most of these requirements apply
   equally to other scalable transport protocols.
   We next provide a brief description of each required feature.

   Requirement #4.1: Fall back to Reno/Cubic congestion control on
   packet loss.

   Description: In case of packet loss, the scalable transport MUST
   react as Classic TCP would (whichever Classic version of TCP is
   running on the host, e.g. Reno or Cubic).

   Motivation: One of the safety conditions for deploying a scalable
   transport over the public Internet is to make sure that it behaves
   properly when some or all of the network devices connecting the two
   endpoints that implement the scalable transport have not been
   upgraded.  In particular, some of the switches along the path
   between the two endpoints may only react to congestion by dropping
   packets (i.e. no ECN marking).  It is important that in these cases
   the scalable transport reacts to the congestion signal in the form
   of a packet drop similarly to Classic TCP.

   In the particular case of DCTCP, the current DCTCP specification
   states that "It is RECOMMENDED that an implementation deal with
   loss episodes in the same way as conventional TCP."  For safe
   deployment of a scalable transport in the public Internet, the
   above requirement needs to be defined as a MUST.

   Packet loss, while rare, may also occur when the bottleneck is L4S
   capable.  In this case, the sender may receive a high number of
   packets marked with the CE bit set and also experience a loss.
   Current DCTCP implementations react differently to this situation.
   At least one implementation reacts only to the drop signal (e.g. by
   halving the CWND), and at least one other reacts to both signals
   (e.g. by halving the CWND due to the drop and also further reducing
   the CWND based on the proportion of marked packets).
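   The two observed reactions can be sketched as follows.  This is an
   illustrative sketch, not part of any specification; `cwnd` and the
   marking estimate `alpha` stand for the usual DCTCP state variables:

```python
# Illustrative sketch (not from any specification) of the two loss
# reactions observed in DCTCP implementations.

def react_drop_only(cwnd, alpha):
    """React only to the drop signal: halve the CWND, Reno-style."""
    return cwnd / 2.0

def react_drop_and_marks(cwnd, alpha):
    """React to both signals: halve for the drop, then also apply
    the DCTCP reduction (1 - alpha/2) for the marked proportion."""
    return (cwnd / 2.0) * (1.0 - alpha / 2.0)

# With cwnd = 20 segments and alpha = 0.5 (half the packets marked),
# the first reaction leaves 10 segments, the second only 7.5.
```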
   We believe that further experimentation is needed to understand the
   best behaviour for the public Internet, which may or may not be one
   of the existing implementations.

   Requirement #4.2: Fall back to Reno/Cubic congestion control on
   classic ECN bottlenecks.

   Description: The scalable transport protocol SHOULD/MAY? behave as
   Classic TCP with classic ECN if the path contains a legacy
   bottleneck that marks both ECT(0) and ECT(1) in the same way as
   drop (a non-L4S but ECN-capable bottleneck).

   Motivation: Similarly to Requirement #4.1, this requirement is a
   safety condition for the case where L4S-capable endpoints are
   communicating over a path that contains one or more non-L4S but
   ECN-capable switches, and one of them happens to be the bottleneck.
   In this case, the scalable transport will attempt to fill the
   buffer of the bottleneck switch up to the marking threshold and
   produce a small sawtooth around that operating point.  The result
   is that the switch will settle at an operating point with the
   buffer full, and all other non-scalable transports will be starved
   (as they will react by reducing their CWND more aggressively than
   the scalable transport).

   Scalable transports therefore MUST be able to detect the presence
   of a classic ECN bottleneck and fall back to Classic TCP/classic
   ECN behaviour in this case.

   Discussion: It is not clear at this point whether it is possible to
   design a mechanism that always detects the aforementioned cases.
   One possibility is to base the detection on an increase on top of a
   minimum RTT, but it is not yet clear which value should trigger it.
   A delay-based fallback response in L4S might also be beneficial for
   preserving low latency even in the absence of legacy network nodes.
   Even if it is possible to design such a mechanism, it may well
   involve additional complexity that implementers consider
   unnecessary.
   The need for this mechanism depends on the extent of classic ECN
   deployment.

   Requirement #4.3: Reduce RTT dependence.

   Description: Scalable transport congestion control algorithms MUST
   reduce or eliminate RTT bias within the range of RTTs available.

   Motivation: Classic TCP's throughput is known to be inversely
   proportional to RTT, so one would expect flows over very low RTT
   paths to nearly starve flows over larger RTTs.  However, because
   Classic TCP induces a large queue, it has never allowed a very low
   RTT path to exist, so far.  For instance, consider two paths with
   base RTTs of 1 ms and 100 ms.  If Classic TCP induces a 20 ms
   queue, it turns these RTTs into 21 ms and 120 ms, leading to a
   throughput ratio of about 1:6.  Whereas if a scalable TCP induces
   only a 1 ms queue, the RTT ratio becomes 2:101.  Therefore, with
   small queues, long-RTT flows would essentially starve.

   Scalable transport protocols MUST therefore accommodate flows
   across the range of RTTs enabled by the deployment of the L4S
   service over the public Internet.

   Requirement #4.4: Scaling down the congestion window.

   Description: Scalable transports MUST remain responsive to
   congestion when RTTs are significantly smaller than in the current
   public Internet.

   Motivation: As currently specified, the minimum CWND of TCP (and of
   scalable extensions such as DCTCP) is 2 MSS.  Once this minimum
   CWND is reached, the transport protocol ceases to react to
   congestion signals (the CWND is not reduced below this minimum
   size).

   L4S mechanisms significantly reduce queuing delay, achieving
   smaller RTTs over the Internet.  For the same CWND, smaller RTTs
   imply higher transmission rates.  The result is that when scalable
   transports are used and small RTTs are achieved, the minimum CWND
   of 2 MSS may still result in a high transmission rate in a large
   number of common scenarios.
   For example, as described in [TCP-sub-mss-w], consider a
   residential setting with a broadband Internet access of 40 Mb/s.
   Suppose a number of equal TCP flows run in parallel, with the
   Internet access link being the bottleneck, and that for these flows
   the RTT is 6 ms and the MSS is 1500 B.  The minimum transmission
   rate supported by TCP in this scenario is reached when the CWND is
   at 2 MSS, which results in 4 Mb/s for each flow.  This means that,
   in this scenario, if the number of flows is higher than 10, the
   congestion control ceases to be responsive and starts to build up a
   queue in the network.

   In order to address this issue, the congestion control mechanism
   for scalable transports MUST remain responsive across the new range
   of RTTs resulting from the decrease in queuing delay.

   There are several ways this can be achieved.  One possible sub-MSS
   window mechanism is described in [TCP-sub-mss-w].

   In addition to the safety requirements described above, there are
   some optimizations that, while not required for the safe deployment
   of scalable transports over the public Internet, would result in
   improved performance.  We describe them next.

   Optimization #5.1: Setting ECT in SYN, SYN/ACK and pure ACK
   packets.

   Description: Scalable transports SHOULD set ECT in SYN, SYN/ACK and
   pure ACK packets.

   Motivation: Failing to set ECT in SYN, SYN/ACK or pure ACK packets
   makes these packets more likely to be dropped during congestion
   events.  Dropping SYN and SYN/ACK packets is particularly bad for
   performance, because the retransmission timers for these packets
   are large.  [RFC3168] prohibits marking these packets as
   ECN-capable for security reasons.  The arguments it provides should
   be revisited in the context of L4S, to evaluate whether avoiding
   ECT on these packets is still the best approach.
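   The arithmetic in the examples under Requirements #4.3 and #4.4 can
   be checked with a short script, using only the numbers given above:

```python
# Requirement #4.3: RTT ratio with a 20 ms Classic queue vs a 1 ms
# scalable queue, for base RTTs of 1 ms and 100 ms.
classic_ratio = (100 + 20) / (1 + 20)   # 120/21, about 6:1
scalable_ratio = (100 + 1) / (1 + 1)    # 101/2 = 50.5, about 50:1

# Requirement #4.4: minimum rate at the minimum CWND of 2 MSS, with
# MSS = 1500 B and RTT = 6 ms (example from [TCP-sub-mss-w]).
min_cwnd_bits = 2 * 1500 * 8            # 2 MSS, in bits
min_rate_bps = min_cwnd_bits / 0.006    # ~4 Mb/s per flow

# On a 40 Mb/s access link, with more than 40/4 = 10 such flows the
# senders can no longer reduce their aggregate rate to fit the link.
max_responsive_flows = 40e6 / min_rate_bps
```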
   Optimization #5.2: Faster than additive increase.

   Description: Scalable transports MAY support faster-than-additive
   increase in the congestion avoidance phase.

   Motivation: As currently defined, DCTCP uses additive increase in
   the congestion avoidance phase.  It would be beneficial for
   performance to update the congestion control algorithm to increase
   the CWND by more than 1 MSS per RTT during the congestion avoidance
   phase.  In the context of L4S, such a mechanism must also provide
   fairness with other classes of traffic, including Classic TCP and
   possibly scalable TCPs that use additive increase.

   Optimization #5.3: Faster convergence to fairness.

   Description: Scalable transports SHOULD converge to a fair-share
   allocation of the available capacity as fast as Classic TCP, or
   faster.

   Motivation: The time required for a new flow to obtain its fair
   share of the capacity of the bottleneck, when there are already
   ongoing flows using up all the bottleneck capacity, is higher for
   DCTCP than for Classic TCP (by a factor of about 1.5 to 2 according
   to [Alizadeh-stability]).  This is detrimental in general, but it
   is particularly harmful for short flows, whose performance can be
   worse than that obtained with Classic TCP.  For this reason, it is
   desirable that scalable transports provide convergence times no
   larger than Classic TCP's.

Appendix B.  Standardization items

   The following table includes all the items that should be
   standardized to provide a full L4S architecture.

   The table is too wide for the ASCII draft format, so it has been
   split into two, with a common column of row index numbers on the
   left.

   The columns in the second part of the table have the following
   meanings:

   WG:  The IETF WG most relevant to this requirement.
      The "tcpm/iccrg" combination refers to the procedure typically
      used for congestion control changes, where tcpm owns the
      approval decision, but uses the iccrg for expert review
      [NewCC_Proc];

   TCP:  Applicable to all forms of TCP congestion control;

   DCTCP:  Applicable to Data Centre TCP as currently used (in
      controlled environments);

   DCTCP-bis:  Applicable to a future Data Centre TCP congestion
      control intended for controlled environments;

   XXX Prague:  Applicable to a Scalable variant of XXX
      (TCP/SCTP/RMCAT) congestion control.

   +-----+-----------------------+-------------------------------------+
   | Req | Requirement           | Reference                           |
   | #   |                       |                                     |
   +-----+-----------------------+-------------------------------------+
   | 0   | ARCHITECTURE          |                                     |
   | 1   | L4S IDENTIFIER        | [I-D.briscoe-tsvwg-ecn-l4s-id]      |
   | 2   | DUAL QUEUE AQM        | [I-D.briscoe-aqm-dualq-coupled]     |
   | 3   | Suitable ECN Feedback | [I-D.ietf-tcpm-accurate-ecn],       |
   |     |                       | [I-D.stewart-tsvwg-sctpecn].        |
   |     |                       |                                     |
   |     | SCALABLE TRANSPORT -  |                                     |
   |     | SAFETY ADDITIONS      |                                     |
   | 4-1 | Fall back to          | [I-D.ietf-tcpm-dctcp]               |
   |     | Reno/Cubic on loss    |                                     |
   | 4-2 | Fall back to          |                                     |
   |     | Reno/Cubic if classic |                                     |
   |     | ECN bottleneck        |                                     |
   |     | detected              |                                     |
   | 4-3 | Reduce RTT-dependence |                                     |
   | 4-4 | Scaling TCP's         | [TCP-sub-mss-w]                     |
   |     | Congestion Window for |                                     |
   |     | Small Round Trip      |                                     |
   |     | Times                 |                                     |
   |     | SCALABLE TRANSPORT -  |                                     |
   |     | PERFORMANCE           |                                     |
   |     | ENHANCEMENTS          |                                     |
   | 5-1 | Setting ECT in SYN,   | draft-bagnulo-tsvwg-generalized-ECN |
   |     | SYN/ACK and pure ACK  |                                     |
   |     | packets               |                                     |
   | 5-2 | Faster-than-additive  |                                     |
   |     | increase              |                                     |
   | 5-3 | Less drastic exit     |                                     |
   |     | from slow-start       |                                     |
   +-----+-----------------------+-------------------------------------+

   +-----+--------+-----+-------+-----------+--------+--------+--------+
   | #   | WG     | TCP | DCTCP | DCTCP-bis | TCP    | SCTP   | RMCAT  |
   |     |        |     |       |           | Prague | Prague | Prague |
   +-----+--------+-----+-------+-----------+--------+--------+--------+
   | 0   | tsvwg? | Y   | Y     | Y         | Y      | Y      | Y      |
   | 1   | tsvwg? |     |       | Y         | Y      | Y      | Y      |
   | 2   | aqm?   | n/a | n/a   | n/a       | n/a    | n/a    | n/a    |
   | 3   | tcpm   | Y   | Y     | Y         | Y      | n/a    | n/a    |
   | 4-1 | tcpm   |     | Y     | Y         | Y      | Y      | Y      |
   | 4-2 | tcpm/  |     |       |           | Y      | Y      | ?      |
   |     | iccrg? |     |       |           |        |        |        |
   | 4-3 | tcpm/  |     |       | Y         | Y      | Y      | ?      |
   |     | iccrg? |     |       |           |        |        |        |
   | 4-4 | tcpm   | Y   | Y     | Y         | Y      | Y      | ?      |
   | 5-1 | tsvwg  | Y   | Y     | Y         | Y      | n/a    | n/a    |
   | 5-2 | tcpm/  |     |       | Y         | Y      | Y      | ?      |
   |     | iccrg? |     |       |           |        |        |        |
   | 5-3 | tcpm/  |     |       | Y         | Y      | Y      | ?      |
   |     | iccrg? |     |       |           |        |        |        |
   +-----+--------+-----+-------+-----------+--------+--------+--------+

Authors' Addresses

   Bob Briscoe (editor)
   Simula Research Lab

   Email: ietf@bobbriscoe.net
   URI:   http://bobbriscoe.net/

   Koen De Schepper
   Nokia Bell Labs
   Antwerp
   Belgium

   Email: koen.de_schepper@nokia.com
   URI:   https://www.bell-labs.com/usr/koen.de_schepper

   Marcelo Bagnulo
   Universidad Carlos III de Madrid
   Av. Universidad 30
   Leganes, Madrid  28911
   Spain

   Phone: 34 91 6249500
   Email: marcelo@it.uc3m.es
   URI:   http://www.it.uc3m.es