2 Transport Area Working Group B. Briscoe, Ed. 3 Internet-Draft Simula Research Lab 4 Intended status: Informational K. De Schepper 5 Expires: November 6, 2017 Nokia Bell Labs 6 M. Bagnulo Braun 7 Universidad Carlos III de Madrid 8 May 5, 2017 10 Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: 11 Architecture 12 draft-ietf-tsvwg-l4s-arch-00 14 Abstract 16 This document describes the L4S architecture for the provision of a 17 new Internet service that could eventually replace best efforts for 18 all traffic: Low Latency, Low Loss, Scalable throughput (L4S). It is 19 becoming common for _all_ (or most) applications being run by a user 20 at any one time to require low latency. However, the only solution 21 the IETF can offer for ultra-low queuing delay is Diffserv, which 22 only favours a minority of packets at the expense of others.
In 23 extensive testing the new L4S service keeps average queuing delay 24 under a millisecond for _all_ applications even under very heavy 25 load, without sacrificing utilization; and it keeps congestion loss 26 to zero. It is becoming widely recognized that adding more access 27 capacity gives diminishing returns, because latency is becoming the 28 critical problem. Even with a high capacity broadband access, the 29 reduced latency of L4S remarkably and consistently improves 30 performance under load for applications such as interactive video, 31 conversational video, voice, Web, gaming, instant messaging, remote 32 desktop and cloud-based apps (even when all being used at once over 33 the same access link). The insight is that the root cause of queuing 34 delay is in TCP, not in the queue. By fixing the sending TCP (and 35 other transports), queuing latency becomes so much better than today 36 that operators will want to deploy the network part of L4S to enable 37 new products and services. Further, the network part is simple to 38 deploy - incrementally with zero-config. Both parts, sender and 39 network, ensure coexistence with other legacy traffic. At the same 40 time L4S solves the long-recognized problem with the future 41 scalability of TCP throughput. 43 This document describes the L4S architecture, briefly explaining the 44 different components and how they work together to provide the 45 aforementioned enhanced Internet service. 47 Status of This Memo 49 This Internet-Draft is submitted in full conformance with the 50 provisions of BCP 78 and BCP 79. 52 Internet-Drafts are working documents of the Internet Engineering 53 Task Force (IETF). Note that other groups may also distribute 54 working documents as Internet-Drafts. The list of current Internet- 55 Drafts is at http://datatracker.ietf.org/drafts/current/. 57 Internet-Drafts are draft documents valid for a maximum of six months 58 and may be updated, replaced, or obsoleted by other documents at any 59 time. It is inappropriate to use Internet-Drafts as reference 60 material or to cite them other than as "work in progress." 62 This Internet-Draft will expire on November 6, 2017. 64 Copyright Notice 66 Copyright (c) 2017 IETF Trust and the persons identified as the 67 document authors. All rights reserved. 69 This document is subject to BCP 78 and the IETF Trust's Legal 70 Provisions Relating to IETF Documents 71 (http://trustee.ietf.org/license-info) in effect on the date of 72 publication of this document. Please review these documents 73 carefully, as they describe your rights and restrictions with respect 74 to this document. Code Components extracted from this document must 75 include Simplified BSD License text as described in Section 4.e of 76 the Trust Legal Provisions and are provided without warranty as 77 described in the Simplified BSD License. 79 Table of Contents 81 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 82 2. L4S Architecture Overview . . . . . . . . . . . . . . . . . . 4 83 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 84 4. L4S Architecture Components . . . . . . . . . . . . . . . . . 7 85 5. Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . 9 86 5.1. Why These Primary Components? . . . . . . . . . . . . . . 9 87 5.2. Why Not Alternative Approaches? . . . . . . . . . . . . . 11 88 6. Applicability . . . . . . . . . . . . . . . . . . . . . . . . 13 89 6.1. Applications . . . . . . . . . . . . . . . . . . . . . . 13 90 6.2. Use Cases . . . . . . . . .
. . . . . . . . . . . . . . . 14 91 6.3. Deployment Considerations . . . . . . . . . . . . . . . . 15 92 6.3.1. Deployment Topology . . . . . . . . . . . . . . . . . 16 93 6.3.2. Deployment Sequences . . . . . . . . . . . . . . . . 17 94 6.3.3. L4S Flow but Non-L4S Bottleneck . . . . . . . . . . . 19 95 6.3.4. Other Potential Deployment Issues . . . . . . . . . . 20 96 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 21 97 8. Security Considerations . . . . . . . . . . . . . . . . . . . 21 98 8.1. Traffic (Non-)Policing . . . . . . . . . . . . . . . . . 21 99 8.2. 'Latency Friendliness' . . . . . . . . . . . . . . . . . 22 100 8.3. Policing Prioritized L4S Bandwidth . . . . . . . . . . . 22 101 8.4. ECN Integrity . . . . . . . . . . . . . . . . . . . . . . 23 102 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 23 103 10. References . . . . . . . . . . . . . . . . . . . . . . . . . 23 104 10.1. Normative References . . . . . . . . . . . . . . . . . . 23 105 10.2. Informative References . . . . . . . . . . . . . . . . . 23 106 Appendix A. Standardization items . . . . . . . . . . . . . . . 28 107 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 30 109 1. Introduction 111 It is increasingly common for _all_ of a user's applications at any 112 one time to require low delay: interactive Web, Web services, voice, 113 conversational video, interactive video, interactive remote presence, 114 instant messaging, online gaming, remote desktop, cloud-based 115 applications and video-assisted remote control of machinery and 116 industrial processes. In the last decade or so, much has been done 117 to reduce propagation delay by placing caches or servers closer to 118 users. However, queuing remains a major, albeit intermittent, 119 component of latency. For instance spikes of hundreds of 120 milliseconds are common. During a long-running flow, even with 121 state-of-the-art active queue management (AQM), the base speed-of- 122 light path delay roughly doubles. Low loss is also important 123 because, for interactive applications, losses translate into even 124 longer retransmission delays. 126 It has been demonstrated that, once access network bit rates reach 127 levels now common in the developed world, increasing capacity offers 128 diminishing returns if latency (delay) is not addressed. 129 Differentiated services (Diffserv) offers Expedited Forwarding 130 [RFC3246] for some packets at the expense of others, but this is not 131 applicable when all (or most) of a user's applications require low 132 latency. 134 Therefore, the goal is an Internet service with ultra-Low queueing 135 Latency, ultra-Low Loss and Scalable throughput (L4S) - for _all_ 136 traffic. A service for all traffic will need none of the 137 configuration or management baggage (traffic policing, traffic 138 contracts) associated with favouring some packets over others. This 139 document describes the L4S architecture for achieving that goal. 141 It must be said that queuing delay only degrades performance 142 infrequently [Hohlfeld14]. It only occurs when a large enough 143 capacity-seeking (e.g. TCP) flow is running alongside the user's 144 traffic in the bottleneck link, which is typically in the access 145 network. Or when the low latency application is itself a large 146 capacity-seeking flow (e.g. interactive video). At these times, the 147 performance improvement from L4S must be so remarkable that network 148 operators will be motivated to deploy it. 
150 Active Queue Management (AQM) is part of the solution to queuing 151 under load. AQM improves performance for all traffic, but there is a 152 limit to how much queuing delay can be reduced by solely changing the 153 network; without addressing the root of the problem. 155 The root of the problem is the presence of standard TCP congestion 156 control (Reno [RFC5681]) or compatible variants (e.g. TCP Cubic 157 [I-D.ietf-tcpm-cubic]). We shall call this family of congestion 158 controls 'Classic' TCP. It has been demonstrated that if the sending 159 host replaces Classic TCP with a 'Scalable' alternative, when a 160 suitable AQM is deployed in the network the performance under load of 161 all the above interactive applications can be stunningly improved. 162 For instance, queuing delay under heavy load with the example DCTCP/ 163 DualQ solution cited below is roughly 1 millisecond (1 ms) at the 164 99th percentile without losing link utilization. This compares with 165 5 to 20 ms on _average_ with a Classic TCP and current state-of-the- 166 art AQMs such as fq_CoDel [I-D.ietf-aqm-fq-codel] or PIE [RFC8033]. 167 Also, with a Classic TCP, 5 ms of queuing is usually only possible by 168 losing some utilization. 170 It has been convincingly demonstrated [DCttH15] that it is possible 171 to deploy such an L4S service alongside the existing best efforts 172 service so that all of a user's applications can shift to it when 173 their stack is updated. Access networks are typically designed with 174 one link as the bottleneck for each site (which might be a home, 175 small enterprise or mobile device), so deployment at a single node 176 should give nearly all the benefit. The L4S approach requires 177 component mechanisms in different parts of an Internet path to 178 fulfill its goal. This document presents the L4S architecture, by 179 describing the different components and how they interact to provide 180 the scalable low-latency, low-loss, Internet service. 182 2. L4S Architecture Overview 184 There are three main components to the L4S architecture (illustrated 185 in Figure 1): 187 1) Network: The L4S service traffic needs to be isolated from the 188 queuing latency of the Classic service traffic. However, the two 189 should be able to freely share a common pool of capacity. This is 190 because there is no way to predict how many flows at any one time 191 might use each service and capacity in access networks is too 192 scarce to partition into two. So a 'semi-permeable' membrane is 193 needed that partitions latency but not bandwidth. The Dual Queue 194 Coupled AQM [I-D.ietf-tsvwg-aqm-dualq-coupled] is an example of 195 such a semi-permeable membrane. 197 Per-flow queuing such as in [I-D.ietf-aqm-fq-codel] could be used, 198 but it partitions both latency and bandwidth between every end-to- 199 end flow. So it is rather overkill, which brings disadvantages 200 (see Section 5.2), not least that thousands of queues are needed 201 when two are sufficient. 203 2) Protocol: A host needs to distinguish L4S and Classic packets 204 with an identifier so that the network can classify them into 205 their separate treatments. [I-D.ietf-tsvwg-ecn-l4s-id] considers 206 various alternative identifiers, and concludes that all 207 alternatives involve compromises, but the ECT(1) codepoint of the 208 ECN field is a workable solution. 210 3) Host: Scalable congestion controls already exist. They solve the 211 scaling problem with TCP first pointed out in [RFC3649]. 
The one 212 used most widely (in controlled environments) is Data Centre TCP 213 (DCTCP [I-D.ietf-tcpm-dctcp]), which has been implemented and 214 deployed in Windows Server Editions (since 2012), in Linux and in 215 FreeBSD. Although DCTCP as-is 'works' well over the public 216 Internet, most implementations lack certain safety features that 217 will be necessary once it is used outside controlled environments 218 like data centres (see later). A similar scalable congestion 219 control will also need to be transplanted into protocols other 220 than TCP (SCTP, RTP/RTCP, RMCAT, etc.) 221 (2) (1) 222 .-------^------. .--------------^-------------------. 223 ,-(3)-----. ______ 224 ; ________ : L4S --------. | | 225 :|Scalable| : _\ ||___\_| mark | 226 :| sender | : __________ / / || / |______|\ _________ 227 :|________|\; | |/ --------' ^ \1| | 228 `---------'\_| IP-ECN | Coupling : \|priority |_\ 229 ________ / |Classifier| : /|scheduler| / 230 |Classic |/ |__________|\ --------. ___:__ / |_________| 231 | sender | \_\ || | |||___\_| mark/|/ 232 |________| / || | ||| / | drop | 233 Classic --------' |______| 235 Figure 1: Components of an L4S Solution: 1) Isolation in separate 236 network queues; 2) Packet Identification Protocol; and 3) Scalable 237 Sending Host 239 3. Terminology 241 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 242 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 243 document are to be interpreted as described in [RFC2119]. In this 244 document, these words will appear with that interpretation only when 245 in ALL CAPS. Lower case uses of these words are not to be 246 interpreted as carrying RFC-2119 significance. COMMENT: Since this 247 will be an Informational document, this boilerplate should be removed. 249 Classic service: The 'Classic' service is intended for all the 250 congestion control behaviours that currently co-exist with TCP 251 Reno (e.g. TCP Cubic, Compound, SCTP, etc). 253 Low-Latency, Low-Loss and Scalable (L4S) service: The 'L4S' service 254 is intended for traffic from scalable TCP algorithms such as Data 255 Centre TCP. But it is also more general--it will allow a set of 256 congestion controls with similar scaling properties to DCTCP (e.g. 257 Relentless [Mathis09]) to evolve. 259 Both Classic and L4S services can cope with a proportion of 260 unresponsive or less-responsive traffic as well (e.g. DNS, VoIP, 261 etc). 263 Scalable Congestion Control: A congestion control where the packet 264 flow rate per round trip (the window) is inversely proportional to 265 the level (probability) of congestion signals. Then, as flow rate 266 scales, the number of congestion signals per round trip remains 267 invariant, maintaining the same degree of control. For instance, 268 DCTCP averages 2 congestion signals per round-trip whatever the 269 flow rate. 271 Classic Congestion Control: A congestion control with a flow rate 272 compatible with standard TCP Reno [RFC5681]. With Classic 273 congestion controls, as capacity increases enabling higher flow 274 rates, the number of round trips between congestion signals 275 (losses or ECN marks) rises in proportion to the flow rate. So 276 control of queuing and/or utilization becomes very slack. For 277 instance, with 1500 B packets and an RTT of 18 ms, as TCP Reno 278 flow rate increases from 2 to 100 Mb/s the number of round trips 279 between congestion signals rises proportionately, from 2 to 100.
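The following rough calculation is not part of the original draft text; it assumes the well-known steady-state Reno relation W = sqrt(1.5/p) between the window W (in packets) and the drop/mark probability p. Under that assumption it reproduces the figures in the definition above: about 2 round trips between congestion signals for Reno at 2 Mb/s, rising to about 100 at 100 Mb/s, whereas a DCTCP-like scalable control sees roughly 2 signals per round trip at any rate.

   # Illustrative only: checks the 'Classic Congestion Control' figures
   # above, assuming Reno's steady-state relation W = sqrt(1.5/p).
   MSS_BITS = 1500 * 8        # packet size used in the example above
   RTT = 0.018                # typical domestic round-trip time (seconds)

   def reno_rounds_between_signals(rate_bps):
       w = rate_bps * RTT / MSS_BITS    # window in packets
       p = 1.5 / w ** 2                 # Reno loss/mark probability
       return 1 / (p * w)               # round trips between signals = w / 1.5

   def dctcp_signals_per_round_trip(rate_bps):
       return 2.0                       # invariant, whatever the flow rate

   for rate_bps in (2e6, 100e6):
       print(f"{rate_bps / 1e6:5.0f} Mb/s: Reno ~"
             f"{reno_rounds_between_signals(rate_bps):.0f} round trips between"
             f" signals; DCTCP ~{dctcp_signals_per_round_trip(rate_bps):.0f}"
             f" signals per round trip")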
281 The default congestion control in Linux (TCP Cubic) is Reno- 282 compatible for most Internet access scenarios expected for some 283 years. For instance, with a typical domestic round-trip time 284 (RTT) of 18 ms, TCP Cubic only switches out of Reno-compatibility 285 mode once the flow rate approaches 1 Gb/s. For a typical data 286 centre RTT of 1 ms, the switch-over point is theoretically 1.3 Tb/ 287 s. However, with a less common transcontinental RTT of 100 ms, it 288 only remains Reno-compatible up to 13 Mb/s. All examples assume 289 1,500 B packets. 291 Classic ECN: The original proposed standard Explicit Congestion 292 Notification (ECN) protocol [RFC3168], which requires ECN signals 293 to be treated the same as drops, both when generated in the 294 network and when responded to by the sender. 296 Site: A home, mobile device, small enterprise or campus, where the 297 network bottleneck is typically the access link to the site. Not 298 all network arrangements fit this model but it is a useful, widely 299 applicable generalisation. 301 4. L4S Architecture Components 303 The L4S architecture is composed of the following elements. 305 Protocols: The L4S architecture encompasses the two protocol changes 306 (an unassignment and an assignment) that we describe next: 308 a. An essential aspect of a scalable congestion control is the use 309 of explicit congestion signals rather than losses, because the 310 signals need to be sent immediately and frequently--too often to 311 use drops. 'Classic' ECN [RFC3168] requires an ECN signal to be 312 treated the same as a drop, both when it is generated in the 313 network and when it is responded to by hosts. L4S needs networks 314 and hosts to support two separate meanings for ECN. So the 315 standards track [RFC3168] needs to be updated to allow L4S 316 packets to depart from the 'same as drop' constraint. 318 [I-D.ietf-tsvwg-ecn-experimentation] has been prepared as a 319 standards track update to relax specific requirements in RFC 3168 320 (and certain other standards track RFCs), which clears the way 321 for the experimental changes proposed for L4S. 322 [I-D.ietf-tsvwg-ecn-experimentation] also explains why the 323 original experimental assignment of the ECT(1) codepoint as an 324 ECN nonce [RFC3540] is being reclassified as historic (it was 325 never deployed, and it offers no security benefit now that 326 deployment is optional). 328 b. [I-D.ietf-tsvwg-ecn-l4s-id] recommends that ECT(1) is used as the 329 identifier to classify L4S packets into a separate treatment from 330 Classic packets. This satisfies the requirements for identifying 331 an alternative ECN treatment in [RFC4774]. 333 Network components: The Dual Queue Coupled AQM has been specified as 334 generically as possible [I-D.ietf-tsvwg-aqm-dualq-coupled] as a 335 'semi-permeable' membrane, without specifying the particular AQMs to 336 use in the two queues. An informational appendix of that draft 337 provides pseudocode examples of different possible AQM approaches. 338 Initially a zero-config variant of RED called Curvy RED 339 was implemented, tested and documented. The aim is for designers to 340 be free to implement diverse ideas. So the brief normative body of 341 the draft only specifies the minimum constraints an AQM needs to 342 comply with to ensure that the L4S and Classic services will coexist. A rough sketch of how such a coupled pair of AQMs might fit together is given below.
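The following sketch is illustrative only: it combines an ECN-based classifier, a shallow-threshold L4S marker, a placeholder Classic AQM and the coupling between the two queues. The square-law coupling follows the approach described in [I-D.ietf-tsvwg-aqm-dualq-coupled]; the names, the coupling factor and the thresholds are assumptions chosen for illustration, not normative values.

   # Illustrative sketch of a Dual Queue Coupled AQM (not a normative
   # algorithm): classify on the ECN field, mark L4S packets against a
   # shallow delay threshold or the coupled probability, and drop or
   # mark Classic packets with the squared base probability.
   import random
   from dataclasses import dataclass

   @dataclass
   class Packet:
       ecn: str                  # 'Not-ECT', 'ECT(0)', 'ECT(1)' or 'CE'

   K = 2.0                       # coupling factor (illustrative)
   L4S_THRESH = 0.001            # ~1 ms L4S marking threshold (illustrative)

   def classic_base_prob(classic_qdelay):
       """Placeholder for a Classic AQM (e.g. a PI controller) returning p'."""
       return min(1.0, classic_qdelay / 0.25)          # illustrative only

   def enqueue(pkt, l4s_qdelay, classic_qdelay):
       p_base = classic_base_prob(classic_qdelay)
       if pkt.ecn in ('ECT(1)', 'CE'):                 # the L4S identifier
           p_coupled = min(1.0, K * p_base)            # coupling from the Classic AQM
           p_native = 1.0 if l4s_qdelay > L4S_THRESH else 0.0
           if random.random() < max(p_native, p_coupled):
               pkt.ecn = 'CE'                          # mark, never drop, L4S packets
           return 'L4S queue (served with priority)'
       p_classic = p_base ** 2                         # square law balances flow rates
       if random.random() < p_classic:
           if pkt.ecn == 'ECT(0)':
               pkt.ecn = 'CE'                          # Classic ECN mark
           else:
               return 'drop'                           # Not-ECT packets are dropped
       return 'Classic queue'

The squaring is what makes the 'membrane' semi-permeable to bandwidth: a scalable flow responding to a marking probability of K*p' and a Classic flow responding to a drop probability of p'^2 end up with broadly similar rates, without a scheduler having to allocate capacity per flow.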
343 For instance, a variant of PIE called Dual PI Squared [PI2] has been 344 implemented and found to perform better than Curvy RED over a wide 345 range of conditions, so it has been documented in a second appendix 346 of [I-D.ietf-tsvwg-aqm-dualq-coupled]. 348 Host mechanisms: The L4S architecture includes a number of mechanisms 349 in the end host that we enumerate next: 351 a. Data Centre TCP is the most widely used example of a scalable 352 congestion control. It is being documented in the TCPM WG as an 353 informational record of the protocol currently in use 354 [I-D.ietf-tcpm-dctcp]. It will be necessary to define a number 355 of safety features for a variant usable on the public Internet. 356 A draft list of these, known as the TCP Prague requirements, has 357 been drawn up (see Appendix A of [I-D.ietf-tsvwg-ecn-l4s-id]). 358 The list also includes some optional performance improvements. 360 b. Transport protocols other than TCP use various congestion 361 controls designed to be friendly with Classic TCP. Before they 362 can use the L4S service, it will be necessary to implement 363 scalable variants of each of these congestion control behaviours. 364 The following standards track RFCs currently define these 365 protocols: ECN in TCP [RFC3168], in SCTP [RFC4960], in RTP 366 [RFC6679], and in DCCP [RFC4340]. Not all are in widespread use, 367 but those that are will eventually need to be updated to allow a 368 different congestion response, which they will have to indicate 369 by using the ECT(1) codepoint. Scalable variants are under 370 consideration for some new transport protocols that are 371 themselves under development, e.g. QUIC [I-D.johansson-quic-ecn] 372 and certain real-time media congestion avoidance techniques 373 (RMCAT) protocols. 375 c. ECN feedback is sufficient for L4S in some transport protocols 376 (RTCP, DCCP) but not others: 378 * For the case of TCP, the feedback protocol for ECN embeds the 379 assumption from Classic ECN that an ECN mark is the same as a 380 drop, making it unusable for a scalable TCP. Therefore, the 381 implementation of TCP receivers will have to be upgraded 382 [RFC7560]. Work to standardize more accurate ECN feedback for 383 TCP (AccECN [I-D.ietf-tcpm-accurate-ecn]) is in progress. 385 * ECN feedback is only roughly sketched in an appendix of the 386 SCTP specification. A fuller specification has been proposed 387 [I-D.stewart-tsvwg-sctpecn], which would need to be 388 implemented and deployed before SCTP could support L4S. 390 5. Rationale 392 5.1. Why These Primary Components? 394 Explicit congestion signalling (protocol): Explicit congestion 395 signalling is a key part of the L4S approach. In contrast, use of 396 drop as a congestion signal creates a tension because drop is both 397 a useful signal (more would reduce delay) and an impairment (less 398 would reduce delay). Explicit congestion signals can be used many 399 times per round trip, to keep tight control, without any 400 impairment. Under heavy load, even more explicit signals can be 401 applied so the queue can be kept short whatever the load, whereas 402 state-of-the-art AQMs have to introduce very high packet drop at 403 high load to keep the queue short. Further, when using ECN, TCP's 404 sawtooth reduction can be smaller, and therefore return to the 405 operating point more often, without worrying that this causes more 406 signals (one at the top of each smaller sawtooth).
The consequent 407 smaller amplitude sawteeth fit between a very shallow marking 408 threshold and an empty queue, so delay variation can be very low, 409 without risk of under-utilization. 411 All the above makes it clear that explicit congestion signalling 412 is only advantageous for latency if it does not have to be 413 considered 'the same as' drop (as required with Classic ECN 414 [RFC3168]). Therefore, in a DualQ AQM, the L4S queue uses a new 415 L4S variant of ECN that is not equivalent to drop 416 [I-D.ietf-tsvwg-ecn-l4s-id], while the Classic queue uses either 417 classic ECN [RFC3168] or drop, which are equivalent. 419 Before Classic ECN was standardized, there were various proposals 420 to give an ECN mark a different meaning from drop. However, there 421 was no particular reason to agree on any one of the alternative 422 meanings, so 'the same as drop' was the only compromise that could 423 be reached. RFC 3168 contains a statement that: 425 "An environment where all end nodes were ECN-Capable could 426 allow new criteria to be developed for setting the CE 427 codepoint, and new congestion control mechanisms for end-node 428 reaction to CE packets. However, this is a research issue, and 429 as such is not addressed in this document." 431 Latency isolation with coupled congestion notification (network): 432 Using just two queues is not essential to L4S (more would be 433 possible), but it is the simplest way to isolate all the L4S 434 traffic that keeps latency low from all the legacy Classic traffic 435 that does not. 437 Similarly, coupling the congestion notification between the queues 438 is not necessarily essential, but it is a clever and simple way to 439 allow senders to determine their rate, packet-by-packet, rather 440 than be overridden by a network scheduler. Because otherwise a 441 network scheduler would have to inspect at least transport layer 442 headers, and it would have to continually assign a rate to each 443 flow without any easy way to understand application intent. 445 L4S packet identifier (protocol): Once there are at least two 446 separate treatments in the network, hosts need an identifier at 447 the IP layer to distinguish which treatment they intend to use. 449 Scalable congestion notification (host): A scalable congestion 450 control keeps the signalling frequency high so that rate 451 variations can be small when signalling is stable, and rate can 452 track variations in available capacity as rapidly as possible 453 otherwise. 455 5.2. Why Not Alternative Approaches? 457 All the following approaches address some part of the same problem 458 space as L4S. In each case, it is shown that L4S complements them or 459 improves on them, rather than being a mutually exclusive alternative: 461 Diffserv: Diffserv addresses the problem of bandwidth apportionment 462 for important traffic as well as queuing latency for delay- 463 sensitive traffic. L4S solely addresses the problem of queuing 464 latency (as well as loss and throughput scaling). Diffserv will 465 still be necessary where important traffic requires priority (e.g. 466 for commercial reasons, or for protection of critical 467 infrastructure traffic). Nonetheless, if there are Diffserv 468 classes for important traffic, the L4S approach can provide low 469 latency for _all_ traffic within each Diffserv class (including 470 the case where there is only one Diffserv class). 472 Also, as already explained, Diffserv only works for a small subset 473 of the traffic on a link. 
It is not applicable when all the 474 applications in use at one time at a single site (home, small 475 business or mobile device) require low latency. Also, because L4S 476 is for all traffic, it needs none of the management baggage 477 (traffic policing, traffic contracts) associated with favouring 478 some packets over others. This baggage has held Diffserv back 479 from widespread end-to-end deployment. 481 State-of-the-art AQMs: AQMs such as PIE and fq_CoDel give a 482 significant reduction in queuing delay relative to no AQM at all. 483 The L4S work is intended to complement these AQMs, and we 484 definitely do not want to distract from the need to deploy them as 485 widely as possible. Nonetheless, without addressing the large 486 saw-toothing rate variations of Classic congestion controls, AQMs 487 alone cannot reduce queuing delay too far without significantly 488 reducing link utilization. The L4S approach resolves this tension 489 by ensuring hosts can minimize the size of their sawteeth without 490 appearing so aggressive to legacy flows that they starve them. 492 Per-flow queuing: Similarly per-flow queuing is not incompatible 493 with the L4S approach. However, one queue for every flow can be 494 thought of as overkill compared to the minimum of two queues for 495 all traffic needed for the L4S approach. The overkill of per-flow 496 queuing has side-effects: 498 A. fq makes high performance networking equipment costly 499 (processing and memory) - in contrast dual queue code can be 500 very simple; 502 B. fq requires packet inspection into the end-to-end transport 503 layer, which doesn't sit well alongside encryption for privacy 504 - in contrast the use of ECN as the classifier for L4S 505 requires no deeper inspection than the IP layer; 507 C. fq isolates the queuing of each flow from the others but not 508 from itself so, unlike L4S, it does not support applications 509 that need both capacity-seeking behaviour and very low 510 latency. 512 It might seem that self-inflicted queuing delay should not 513 count, because if the delay wasn't in the network it would 514 just shift to the sender. However, modern adaptive 515 applications, e.g. HTTP/2 [RFC7540] or the interactive media 516 applications described in Section 6, can keep low latency 517 objects at the front of their local send queue by shuffling 518 priorities of other objects dependent on the progress of other 519 transfers. They cannot shuffle packets once they have 520 released them into the network. 522 D. fq prevents any one flow from consuming more than 1/N of the 523 capacity at any instant, where N is the number of flows. This 524 is fine if all flows are elastic, but it does not sit well 525 with a variable bit rate real-time multimedia flow, which 526 requires wriggle room to sometimes take more and other times 527 less than a 1/N share. 529 It might seem that an fq scheduler offers the benefit that it 530 prevents individual flows from hogging all the bandwidth. 531 However, L4S has been deliberately designed so that policing 532 of individual flows can be added as a policy choice, rather 533 than requiring one specific policy choice as the mechanism 534 itself. A scheduler (like fq) has to decide packet-by-packet 535 which flow to schedule without knowing application intent. 536 Whereas a separate policing function can be configured less 537 strictly, so that senders can still control the instantaneous 538 rate of each flow dependent on the needs of each application 539 (e.g. 
variable rate video), giving more wriggle-room before a 540 flow is deemed non-compliant. Also policing of queuing and of 541 flow-rates can be applied independently. 543 Alternative Back-off ECN (ABE): Yet again, L4S is not an alternative 544 to ABE but a complement that introduces much lower queuing delay. 545 ABE [I-D.ietf-tcpm-alternativebackoff-ecn] alters the host 546 behaviour in response to ECN marking to utilize a link better and 547 give ECN flows a faster throughput, but it assumes the network 548 still treats ECN and drop the same. Therefore ABE exploits any 549 lower queuing delay that AQMs can provide. But as explained 550 above, AQMs still cannot reduce queuing delay too far without 551 losing link utilization (to allow for other, non-ABE, flows). 553 6. Applicability 555 6.1. Applications 557 A transport layer that solves the current latency issues will provide 558 new service, product and application opportunities. 560 With the L4S approach, the following existing applications will 561 immediately experience significantly better quality of experience 562 under load in the best effort class: 564 o Gaming; 566 o VoIP; 568 o Video conferencing; 570 o Web browsing; 572 o (Adaptive) video streaming; 574 o Instant messaging. 576 The significantly lower queuing latency also enables some interactive 577 application functions to be offloaded to the cloud that would hardly 578 even be usable today: 580 o Cloud based interactive video; 582 o Cloud based virtual and augmented reality. 584 The above two applications have been successfully demonstrated with 585 L4S, both running together over a 40 Mb/s broadband access link 586 loaded up with the numerous other latency sensitive applications in 587 the previous list as well as numerous downloads - all sharing the 588 same bottleneck queue simultaneously [L4Sdemo16]. For the former, a 589 panoramic video of a football stadium could be swiped and pinched so 590 that, on the fly, a proxy in the cloud could generate a sub-window of 591 the match video under the finger-gesture control of each user. For 592 the latter, a virtual reality headset displayed a viewport taken from 593 a 360 degree camera in a racing car. The user's head movements 594 controlled the viewport extracted by a cloud-based proxy. In both 595 cases, with 7 ms end-to-end base delay, the additional queuing delay 596 of roughly 1 ms was so low that it seemed the video was generated 597 locally. 599 Using a swiping finger gesture or head movement to pan a video is an 600 extremely latency-demanding action--far more demanding than VoIP-- 601 because human vision can detect extremely low delays of the order of 602 single milliseconds when delay is translated into a visual lag 603 between a video and a reference point (the finger, or the orientation 604 of the head sensed by the balance system in the inner ear, the 605 vestibular system). 607 Without the low queuing delay of L4S, cloud-based applications like 608 these would not be credible without significantly more access 609 bandwidth (to deliver all possible video that might be viewed) and 610 more local processing, which would increase the weight and power 611 consumption of head-mounted displays. When all interactive 612 processing can be done in the cloud, only the data to be rendered for 613 the end user needs to be sent. 615 Other low latency high bandwidth applications such as: 617 o Interactive remote presence; 619 o Video-assisted remote control of machinery or industrial 620 processes.
622 are not credible at all without very low queuing delay. No amount of 623 extra access bandwidth or local processing can make up for lost time. 625 6.2. Use Cases 627 The following use-cases for L4S are being considered by various 628 interested parties: 630 o Where the bottleneck is one of various types of access network: 631 DSL, cable, mobile, satellite 633 * Radio links (cellular, WiFi, satellite) that are distant from 634 the source are particularly challenging. The radio link 635 capacity can vary rapidly by orders of magnitude, so it is 636 often desirable to hold a buffer to utilise sudden increases of 637 capacity; 639 * cellular networks are further complicated by a perceived need 640 to buffer in order to make hand-overs imperceptible; 642 * Satellite networks generally have a very large base RTT, so 643 even with minimal queuing, overall delay can never be extremely 644 low; 646 * Nonetheless, it is certainly desirable not to hold a buffer 647 purely because of the sawteeth of Classic TCP, when it is more 648 than is needed for all the above reasons. 650 o Private networks of heterogeneous data centres, where there is no 651 single administrator that can arrange for all the simultaneous 652 changes to senders, receivers and network needed to deploy DCTCP: 654 * a set of private data centres interconnected over a wide area 655 with separate administrations, but within the same company 657 * a set of data centres operated by separate companies 658 interconnected by a community of interest network (e.g. for the 659 finance sector) 661 * multi-tenant (cloud) data centres where tenants choose their 662 operating system stack (Infrastructure as a Service - IaaS) 664 o Different types of transport (or application) congestion control: 666 * elastic (TCP/SCTP); 668 * real-time (RTP, RMCAT); 670 * query (DNS/LDAP). 672 o Where low delay quality of service is required, but without 673 inspecting or intervening above the IP layer 674 [I-D.you-encrypted-traffic-management]: 676 * mobile and other networks have tended to inspect higher layers 677 in order to guess application QoS requirements. However, with 678 growing demand for support of privacy and encryption, L4S 679 offers an alternative. There is no need to select which 680 traffic to favour for queuing, when L4S gives favourable 681 queuing to all traffic. 683 o If queuing delay is minimized, applications with a fixed delay 684 budget can communicate over longer distances, or via a longer 685 chain of service functions [RFC7665] or onion routers. 687 6.3. Deployment Considerations 689 The DualQ is, in itself, an incremental deployment framework for L4S 690 AQMs so that L4S traffic can coexist with existing Classic "TCP- 691 friendly" traffic. Section 6.3.1 explains why only deploying a DualQ 692 AQM [I-D.ietf-tsvwg-aqm-dualq-coupled] in one node at each end of the 693 access link will realize nearly all the benefit of L4S. 695 L4S involves both end systems and the network, so Section 6.3.2 696 suggests some typical sequences to deploy each part, and why there 697 will be an immediate and significant benefit after deploying just one 698 part. 700 If an ECN-enabled DualQ AQM has not been deployed at a bottleneck, an 701 L4S flow is required to include a fall-back strategy to Classic 702 behaviour. Section 6.3.3 describes how an L4S flow detects this, and 703 how to minimize the effect of false negative detection. 705 6.3.1. 
Deployment Topology 707 DualQ AQMs will not have to be deployed throughout the Internet 708 before L4S will work for anyone. Operators of public Internet access 709 networks typically design their networks so that the bottleneck will 710 nearly always occur at one known (logical) link. This confines the 711 cost of queue management technology to one place. 713 The case of mesh networks is different and will be discussed later. 714 But the known bottleneck case is generally true for Internet access 715 to all sorts of different 'sites', where the word 'site' includes 716 home networks, small-to-medium sized campus or enterprise networks 717 and even cellular devices (Figure 2). Also, this known-bottleneck 718 case tends to be true whatever the access link technology; whether 719 xDSL, cable, cellular, line-of-sight wireless or satellite. 721 Therefore, the full benefit of the L4S service should be available in 722 the downstream direction when the DualQ AQM is deployed at the 723 ingress to this bottleneck link (or links for multihomed sites). And 724 similarly, the full upstream service will be available once the DualQ 725 is deployed at the upstream ingress. 727 ______ 728 ( ) 729 __ __ ( ) 730 |DQ\________/DQ|( enterprise ) 731 ___ |__/ \__| ( /campus ) 732 ( ) (______) 733 ( ) ___||_ 734 +----+ ( ) __ __ / \ 735 | DC |-----( Core )|DQ\_______________/DQ|| home | 736 +----+ ( ) |__/ \__||______| 737 (_____) __ 738 |DQ\__/\ __ ,===. 739 |__/ \ ____/DQ||| ||mobile 740 \/ \__|||_||device 741 | o | 742 `---' 744 Figure 2: Likely location of DualQ (DQ) Deployments in common access 745 topologies 747 Deployment in mesh topologies depends on how over-booked the core is. 748 If the core is non-blocking, or at least generously provisioned so 749 that the edges are nearly always the bottlenecks, it would only be 750 necessary to deploy the DualQ AQM at the edge bottlenecks. For 751 example, some datacentre networks are designed with the bottleneck in 752 the hypervisor or host NICs, while others bottleneck at the top-of- 753 rack switch (both the output ports facing hosts and those facing the 754 core). 756 The DualQ would eventually also need to be deployed at any other 757 persistent bottlenecks such as network interconnections, e.g. some 758 public Internet exchange points and the ingress and egress to WAN 759 links interconnecting datacentres. 761 6.3.2. Deployment Sequences 763 For any one L4S flow to work, it requires 3 parts to have been 764 deployed. This was the same deployment problem that ECN faced 765 [I-D.iab-protocol-transitions] so we have learned from this. 767 Firstly, L4S deployment exploits the fact that DCTCP already exists 768 on many Internet hosts (Windows, FreeBSD and Linux); both servers and 769 clients. Therefore, just deploying DualQ AQM at a network bottleneck 770 immediately gives a working deployment of all the L4S parts. DCTCP 771 needs some safety concerns to be fixed for general use over the 772 public Internet (see Section 2.3 of [I-D.ietf-tsvwg-ecn-l4s-id]), but 773 DCTCP is not on by default, so these issues can be managed within 774 controlled deployments or controlled trials. 776 Secondly, the performance improvement with L4S is so significant that 777 it enables new interactive services and products that were not 778 previously possible. It is much easier for companies to initiate new 779 work on deployment if there is budget for a new product trial. 
If, 780 in contrast, there were only an incremental performance improvement 781 (as with Classic ECN), spending on deployment tends to be much harder 782 to justify. 784 Thirdly, the L4S identifier is defined so that intially network 785 operators can enable L4S exclusively for certain customers or certain 786 applications. But this is carefully defined so that it does not 787 compromise future evolution towards L4S as an Internet-wide service. 788 This is because the L4S identifier is defined not only as the end-to- 789 end ECN field, but it can also optionally be combined with any other 790 packet header or some status of a customer or their access link 791 [I-D.ietf-tsvwg-ecn-l4s-id]. Operators could do this anyway, even if 792 it were not blessed by the IETF. However, it is best for the IETF to 793 specify that they must use their own local identifier in combination 794 with the IETF's identifier. Then, if an operator enables the 795 optional local-use approach, they only have to remove this extra rule 796 to make the service work Internet-wide - it will already traverse 797 middleboxes, peerings, etc. 799 +-+--------------------+----------------------+---------------------+ 800 | | Servers or proxies | Access link | Clients | 801 +-+--------------------+----------------------+---------------------+ 802 |1| DCTCP (existing) | | DCTCP (existing) | 803 | | | DualQ AQM downstream | | 804 | | WORKS DOWNSTREAM FOR CONTROLLED DEPLOYMENTS/TRIALS | 805 +-+--------------------+----------------------+---------------------+ 806 |2| TCP Prague | | AccECN (already in | 807 | | | | progress:DCTCP/BBR) | 808 | | FULLY WORKS DOWNSTREAM | 809 +-+--------------------+----------------------+---------------------+ 810 |3| | DualQ AQM upstream | TCP Prague | 811 | | | | | 812 | | FULLY WORKS UPSTREAM AND DOWNSTREAM | 813 +-+--------------------+----------------------+---------------------+ 815 Figure 3: Example L4S Deployment Sequences 817 Figure 3 illustrates some example sequences in which the parts of L4S 818 might be deployed. It consists of the following stages: 820 1. Here, the immediate benefit of a single AQM deployment can be 821 seen, but limited to a controlled trial or controlled deployment. 822 In this example downstream deployment is first, but in other 823 scenarios the upstream might be deployed first. If no AQM at all 824 was previously deployed for the downstream access, the DualQ AQM 825 greatly improves the Classic service (as well as adding the L4S 826 service). If an AQM was already deployed, the Classic service 827 will be unchanged (and L4S will still be added). 829 2. In this stage, the name 'TCP Prague' is used to represent a 830 variant of DCTCP that is safe to use in a production environment. 831 If the application is primarily unidirectional, 'TCP Prague' at 832 one end will provide all the benefit needed. Accurate ECN 833 feedback (AccECN) [I-D.ietf-tcpm-accurate-ecn] is needed at the 834 other end, but it is a generic ECN feedback facility that is 835 already planned to be deployed for other purposes, e.g. DCTCP, 836 BBR [BBR]. The two ends can be deployed in either order, because 837 TCP Prague only enables itself if it has negotiated the use of 838 AccECN feedback with the other end during the connection 839 handshake. Thus, deployment of TCP Prague on a server enables 840 L4S trials to move to a production service in one direction, 841 wherever AccECN is deployed at the other end. 
This stage might 842 be further motivated by performance improvements between DCTCP 843 and TCP Prague (see Appendix A.2 of [I-D.ietf-tsvwg-ecn-l4s-id]). 845 3. This is a two-move stage to enable L4S upstream. The DualQ or 846 TCP Prague can be deployed in either order as already explained. 847 To motivate the first of two independent moves, the deferred 848 benefit of enabling new services after the second move has to be 849 worth it to cover the first mover's investment risk. As 850 explained already, the potential for new interactive services 851 provides this motivation. The DualQ AQM also greatly improves 852 the upstream Classic service, assuming no other AQM has already 853 been deployed. 855 Note that other deployment sequences might occur. For instance: the 856 upstream might be deployed first; a non-TCP protocol might be used 857 end-to-end, e.g. QUIC, RMCAT; a body such as the 3GPP might require 858 L4S to be implemented in 5G user equipment, or other random acts of 859 kindness. 861 6.3.3. L4S Flow but Non-L4S Bottleneck 863 If L4S is enabled between two hosts but there is no L4S AQM at the 864 bottleneck, any drop from the bottleneck will trigger the L4S sender 865 to fall back to a classic ('TCP-Friendly') behaviour (see 866 Appendix A.1.3 of [I-D.ietf-tsvwg-ecn-l4s-id]). 868 Unfortunately, as well as protecting legacy traffic, this rule 869 degrades the L4S service whenever there is a loss, even if the loss 870 was not from a non-DualQ bottleneck (false negative). And 871 unfortunately, prevalent drop can be due to other causes, e.g.: 873 o congestion loss at other transient bottlenecks, e.g. due to bursts 874 in shallower queues; 876 o transmission errors, e.g. due to electrical interference; 878 o rate policing. 880 Three complementary approaches are in progress to address this issue, 881 but they are all currently research: 883 o In TCP Prague, ignore certain losses deemed unlikely to be due to 884 congestion (using some ideas from BBR [BBR] but with no need to 885 ignore nearly all losses). This could mask any of the above types 886 of loss (requires consensus on how to safely interoperate with 887 drop-based congestion controls). 889 o A combination of RACK, reconfigured link retransmission and L4S 890 could address transmission errors (no reference yet); 892 o Hybrid ECN/drop policers (see Section 8.3). 894 L4S deployment scenarios that minimize these issues (e.g. over 895 wireline networks) can proceed in parallel to this research, in the 896 expectation that research success will continually widen L4S 897 applicability. 899 Classic ECN support is starting to materialize (in the upstream of 900 some home routers as of early 2017), so an L4S sender will have to 901 fall back to a classic ('TCP-Friendly') behaviour if it detects that 902 ECN marking is accompanied by greater queuing delay or greater delay 903 variation than would be expected with L4S (see Appendix A.1.4 of 904 [I-D.ietf-tsvwg-ecn-l4s-id]). 906 6.3.4. Other Potential Deployment Issues 908 An L4S AQM uses the ECN field to signal congestion. So, in common 909 with Classic ECN, if the AQM is within a tunnel or at a lower layer, 910 correct functioning of ECN signalling requires correct propagation of 911 the ECN field up the layers [I-D.ietf-tsvwg-ecn-encap-guidelines]. 913 7. IANA Considerations 915 This specification contains no IANA considerations. 917 8. Security Considerations 919 8.1. 
Traffic (Non-)Policing 921 Because the L4S service can serve all traffic that is using the 922 capacity of a link, it should not be necessary to police access to 923 the L4S service. In contrast, Diffserv only works if some packets 924 get less favourable treatment than others. So Diffserv has to use 925 traffic policers to limit how much traffic can be favoured, In turn, 926 traffic policers require traffic contracts between users and networks 927 as well as pairwise between networks. Because L4S will lack all this 928 management complexity, it is more likely to work end-to-end. 930 During early deployment (and perhaps always), some networks will not 931 offer the L4S service. These networks do not need to police or re- 932 mark L4S traffic - they just forward it unchanged as best efforts 933 traffic, as they already forward traffic with ECT(1) today. At a 934 bottleneck, such networks will introduce some queuing and dropping. 935 When a scalable congestion control detects a drop it will have to 936 respond as if it is a Classic congestion control (as required in 937 Section 2.3 of [I-D.ietf-tsvwg-ecn-l4s-id]). This will ensure safe 938 interworking with other traffic at the 'legacy' bottleneck, but it 939 will degrade the L4S service to no better (but never worse) than 940 classic best efforts, whenever a legacy (non-L4S) bottleneck is 941 encountered on a path. 943 Certain network operators might choose to restrict access to the L4S 944 class, perhaps only to customers who have paid a premium. Their 945 packet classifier (item 2 in Figure 1) could identify such customers 946 against some other field (e.g. source address range) as well as ECN. 947 If only the ECN L4S identifier matched, but not the source address 948 (say), the classifier could direct these packets (from non-paying 949 customers) into the Classic queue. Allowing operators to use an 950 additional local classifier is intended to remove any incentive to 951 bleach the L4S identifier. Then at least the L4S ECN identifier will 952 be more likely to survive end-to-end even though the service may not 953 be supported at every hop. Such arrangements would only require 954 simple registered/not-registered packet classification, rather than 955 the managed application-specific traffic policing against customer- 956 specific traffic contracts that Diffserv requires. 958 8.2. 'Latency Friendliness' 960 The L4S service does rely on self-constraint - not in terms of 961 limiting capacity usage, but in terms of limiting burstiness. It is 962 hoped that standardisation of dynamic behaviour (cf. TCP slow-start) 963 and self-interest will be sufficient to prevent transports from 964 sending excessive bursts of L4S traffic, given the application's own 965 latency will suffer most from such behaviour. 967 Whether burst policing becomes necessary remains to be seen. Without 968 it, there will be potential for attacks on the low latency of the L4S 969 service. However it may only be necessary to apply such policing 970 reactively, e.g. punitively targeted at any deployments of new bursty 971 malware. 973 8.3. Policing Prioritized L4S Bandwidth 975 As mentioned in Section 5.2, L4S should remove the need for low 976 latency Diffserv classes. However, those Diffserv classes that give 977 certain applications or users priority over capacity, would still be 978 applicable. Then, within such Diffserv classes, L4S would often be 979 applicable to give traffic low latency and low loss. 
Within such a 980 class, the bandwidth available to a user or application is often 981 limited by a rate policer. Similarly, in the default Diffserv class, 982 rate policers are used to partition shared capacity. 984 A classic rate policer drops any packets exceeding a set rate, 985 usually also giving a burst allowance (variants exist where the 986 policer re-marks non-compliant traffic to a discard-eligible Diffserv 987 codepoint, so they may be dropped elsewhere during contention). In 988 networks that deploy L4S and use rate policers, it will be preferable 989 to deploy a policer designed to be more friendly to the L4S service. 991 This is currently a research area. It might be achieved by setting a 992 threshold where ECN marking is introduced, such that it is just under 993 the policed rate or just under the burst allowance where drop is 994 introduced. This could be applied to various types of policer, e.g. 995 [RFC2697], [RFC2698] or the 'local' (non-ConEx) variant of the ConEx 996 congestion policer [I-D.briscoe-conex-policing]. Otherwise, whenever 997 L4S traffic encounters a rate policer, it will experience drops and 998 the source will fall back to a Classic congestion control, thus 999 losing the benefits of L4S. 1001 Further discussion of the applicability of L4S to the various 1002 Diffserv classes, and the design of suitable L4S rate policers will 1003 require a separate dedicated document. 1005 8.4. ECN Integrity 1007 Receiving hosts can fool a sender into downloading faster by 1008 suppressing feedback of ECN marks (or of losses if retransmissions 1009 are not necessary or available otherwise). Various ways to protect 1010 TCP feedback integrity have been developed. For instance: 1012 o The sender can test the integrity of the receiver's feedback by 1013 occasionally setting the IP-ECN field to the congestion 1014 experienced (CE) codepoint, which is normally only set by a 1015 congested link. Then the sender can test whether the receiver's 1016 feedback faithfully reports what it expects 1017 [I-D.moncaster-tcpm-rcv-cheat]. 1019 o A network can enforce a congestion response to its ECN markings 1020 (or packet losses) by auditing congestion exposure (ConEx) 1021 [RFC7713]. 1023 o The TCP authentication option (TCP-AO [RFC5925]) can be used to 1024 detect tampering with TCP congestion feedback. 1026 o The ECN Nonce [RFC3540] was proposed to detect tampering with 1027 congestion feedback, but it is being reclassified as historic. 1029 Appendix C.1 of [I-D.ietf-tsvwg-ecn-l4s-id] gives more details of 1030 these techniques including their applicability and pros and cons. 1032 9. Acknowledgements 1034 Thanks to Wes Eddy, Karen Nielsen and David Black for their useful 1035 review comments. 1037 10. References 1039 10.1. Normative References 1041 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1042 Requirement Levels", BCP 14, RFC 2119, 1043 DOI 10.17487/RFC2119, March 1997, 1044 . 1046 10.2. Informative References 1048 [BBR] Cardwell, N., Cheng, Y., Gunn, C., Yeganeh, S., and V. 1049 Jacobson, "BBR: Congestion-Based Congestion Control; 1050 Measuring bottleneck bandwidth and round-trip propagation 1051 time", ACM Queue (14)5, December 2016. 1053 [DCttH15] De Schepper, K., Bondarenko, O., Tsang, I., and B. 1054 Briscoe, "'Data Centre to the Home': Ultra-Low Latency for 1055 All", 2015, . 1058 (Under submission) 1060 [Hohlfeld14] 1061 Hohlfeld, O., Pujol, E., Ciucu, F., Feldmann, A., and P. 1062 Barford, "A QoE Perspective on Sizing Network Buffers", 1063 Proc.
ACM Internet Measurement Conf (IMC'14) hmm, November 1064 2014. 1066 [I-D.bagnulo-tcpm-generalized-ecn] 1067 Bagnulo, M. and B. Briscoe, "Adding Explicit Congestion 1068 Notification (ECN) to TCP control packets and TCP 1069 retransmissions", draft-bagnulo-tcpm-generalized-ecn-03 1070 (work in progress), April 2017. 1072 [I-D.briscoe-conex-policing] 1073 Briscoe, B., "Network Performance Isolation using 1074 Congestion Policing", draft-briscoe-conex-policing-01 1075 (work in progress), February 2014. 1077 [I-D.iab-protocol-transitions] 1078 Thaler, D., "Planning for Protocol Adoption and Subsequent 1079 Transitions", draft-iab-protocol-transitions-08 (work in 1080 progress), March 2017. 1082 [I-D.ietf-aqm-fq-codel] 1083 Hoeiland-Joergensen, T., McKenney, P., 1084 dave.taht@gmail.com, d., Gettys, J., and E. Dumazet, "The 1085 FlowQueue-CoDel Packet Scheduler and Active Queue 1086 Management Algorithm", draft-ietf-aqm-fq-codel-06 (work in 1087 progress), March 2016. 1089 [I-D.ietf-tcpm-accurate-ecn] 1090 Briscoe, B., Kuehlewind, M., and R. Scheffenegger, "More 1091 Accurate ECN Feedback in TCP", draft-ietf-tcpm-accurate- 1092 ecn-02 (work in progress), October 2016. 1094 [I-D.ietf-tcpm-alternativebackoff-ecn] 1095 Khademi, N., Welzl, M., Armitage, G., and G. Fairhurst, 1096 "TCP Alternative Backoff with ECN (ABE)", draft-ietf-tcpm- 1097 alternativebackoff-ecn-01 (work in progress), May 2017. 1099 [I-D.ietf-tcpm-cubic] 1100 Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and 1101 R. Scheffenegger, "CUBIC for Fast Long-Distance Networks", 1102 draft-ietf-tcpm-cubic-04 (work in progress), February 1103 2017. 1105 [I-D.ietf-tcpm-dctcp] 1106 Bensley, S., Thaler, D., Balasubramanian, P., Eggert, L., 1107 and G. Judd, "Datacenter TCP (DCTCP): TCP Congestion 1108 Control for Datacenters", draft-ietf-tcpm-dctcp-05 (work 1109 in progress), March 2017. 1111 [I-D.ietf-tsvwg-aqm-dualq-coupled] 1112 Schepper, K., Briscoe, B., Bondarenko, O., and I. Tsang, 1113 "DualQ Coupled AQM for Low Latency, Low Loss and Scalable 1114 Throughput", draft-ietf-tsvwg-aqm-dualq-coupled-00 (work 1115 in progress), April 2017. 1117 [I-D.ietf-tsvwg-ecn-encap-guidelines] 1118 Briscoe, B., Kaippallimalil, J., and P. Thaler, 1119 "Guidelines for Adding Congestion Notification to 1120 Protocols that Encapsulate IP", draft-ietf-tsvwg-ecn- 1121 encap-guidelines-08 (work in progress), March 2017. 1123 [I-D.ietf-tsvwg-ecn-experimentation] 1124 Black, D., "Explicit Congestion Notification (ECN) 1125 Experimentation", draft-ietf-tsvwg-ecn-experimentation-02 1126 (work in progress), April 2017. 1128 [I-D.ietf-tsvwg-ecn-l4s-id] 1129 Schepper, K., Briscoe, B., and I. Tsang, "Identifying 1130 Modified Explicit Congestion Notification (ECN) Semantics 1131 for Ultra-Low Queuing Delay", draft-ietf-tsvwg-ecn-l4s- 1132 id-00 (work in progress), April 2017. 1134 [I-D.johansson-quic-ecn] 1135 Johansson, I., "ECN support in QUIC", draft-johansson- 1136 quic-ecn-02 (work in progress), April 2017. 1138 [I-D.moncaster-tcpm-rcv-cheat] 1139 Moncaster, T., Briscoe, B., and A. Jacquet, "A TCP Test to 1140 Allow Senders to Identify Receiver Non-Compliance", draft- 1141 moncaster-tcpm-rcv-cheat-03 (work in progress), July 2014. 1143 [I-D.stewart-tsvwg-sctpecn] 1144 Stewart, R., Tuexen, M., and X. Dong, "ECN for Stream 1145 Control Transmission Protocol (SCTP)", draft-stewart- 1146 tsvwg-sctpecn-05 (work in progress), January 2014. 1148 [I-D.you-encrypted-traffic-management] 1149 You, J. and C. 
Xiong, "The Effect of Encrypted Traffic on 1150 the QoS Mechanisms in Cellular Networks", draft-you- 1151 encrypted-traffic-management-00 (work in progress), 1152 October 2015. 1154 [L4Sdemo16] 1155 Bondarenko, O., De Schepper, K., Tsang, I., and B. 1156 Briscoe, "Ultra-Low Delay for All: Live Experience, Live 1157 Analysis", Proc. MMSYS'16 pp33:1--33:4, May 2016, 1158 . 1162 [Mathis09] 1163 Mathis, M., "Relentless Congestion Control", PFLDNeT'09 , 1164 May 2009, . 1167 [NewCC_Proc] 1168 Eggert, L., "Experimental Specification of New Congestion 1169 Control Algorithms", IETF Operational Note ion-tsv-alt-cc, 1170 July 2007. 1172 [PI2] De Schepper, K., Bondarenko, O., Tsang, I., and B. 1173 Briscoe, "PI^2 : A Linearized AQM for both Classic and 1174 Scalable TCP", Proc. ACM CoNEXT 2016 pp.105-119, December 1175 2016, 1176 . 1178 [RFC2697] Heinanen, J. and R. Guerin, "A Single Rate Three Color 1179 Marker", RFC 2697, DOI 10.17487/RFC2697, September 1999, 1180 . 1182 [RFC2698] Heinanen, J. and R. Guerin, "A Two Rate Three Color 1183 Marker", RFC 2698, DOI 10.17487/RFC2698, September 1999, 1184 . 1186 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition 1187 of Explicit Congestion Notification (ECN) to IP", 1188 RFC 3168, DOI 10.17487/RFC3168, September 2001, 1189 . 1191 [RFC3246] Davie, B., Charny, A., Bennet, J., Benson, K., Le Boudec, 1192 J., Courtney, W., Davari, S., Firoiu, V., and D. 1193 Stiliadis, "An Expedited Forwarding PHB (Per-Hop 1194 Behavior)", RFC 3246, DOI 10.17487/RFC3246, March 2002, 1195 . 1197 [RFC3540] Spring, N., Wetherall, D., and D. Ely, "Robust Explicit 1198 Congestion Notification (ECN) Signaling with Nonces", 1199 RFC 3540, DOI 10.17487/RFC3540, June 2003, 1200 . 1202 [RFC3649] Floyd, S., "HighSpeed TCP for Large Congestion Windows", 1203 RFC 3649, DOI 10.17487/RFC3649, December 2003, 1204 . 1206 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1207 Congestion Control Protocol (DCCP)", RFC 4340, 1208 DOI 10.17487/RFC4340, March 2006, 1209 . 1211 [RFC4774] Floyd, S., "Specifying Alternate Semantics for the 1212 Explicit Congestion Notification (ECN) Field", BCP 124, 1213 RFC 4774, DOI 10.17487/RFC4774, November 2006, 1214 . 1216 [RFC4960] Stewart, R., Ed., "Stream Control Transmission Protocol", 1217 RFC 4960, DOI 10.17487/RFC4960, September 2007, 1218 . 1220 [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion 1221 Control", RFC 5681, DOI 10.17487/RFC5681, September 2009, 1222 . 1224 [RFC5925] Touch, J., Mankin, A., and R. Bonica, "The TCP 1225 Authentication Option", RFC 5925, DOI 10.17487/RFC5925, 1226 June 2010, . 1228 [RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P., 1229 and K. Carlberg, "Explicit Congestion Notification (ECN) 1230 for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August 1231 2012, . 1233 [RFC7540] Belshe, M., Peon, R., and M. Thomson, Ed., "Hypertext 1234 Transfer Protocol Version 2 (HTTP/2)", RFC 7540, 1235 DOI 10.17487/RFC7540, May 2015, 1236 . 1238 [RFC7560] Kuehlewind, M., Ed., Scheffenegger, R., and B. Briscoe, 1239 "Problem Statement and Requirements for Increased Accuracy 1240 in Explicit Congestion Notification (ECN) Feedback", 1241 RFC 7560, DOI 10.17487/RFC7560, August 2015, 1242 . 1244 [RFC7665] Halpern, J., Ed. and C. Pignataro, Ed., "Service Function 1245 Chaining (SFC) Architecture", RFC 7665, 1246 DOI 10.17487/RFC7665, October 2015, 1247 . 1249 [RFC7713] Mathis, M. and B. 
Briscoe, "Congestion Exposure (ConEx) 1250 Concepts, Abstract Mechanism, and Requirements", RFC 7713, 1251 DOI 10.17487/RFC7713, December 2015, 1252 . 1254 [RFC8033] Pan, R., Natarajan, P., Baker, F., and G. White, 1255 "Proportional Integral Controller Enhanced (PIE): A 1256 Lightweight Control Scheme to Address the Bufferbloat 1257 Problem", RFC 8033, DOI 10.17487/RFC8033, February 2017, 1258 . 1260 [TCP-sub-mss-w] 1261 Briscoe, B. and K. De Schepper, "Scaling TCP's Congestion 1262 Window for Small Round Trip Times", BT Technical Report 1263 TR-TUB8-2015-002, May 2015, 1264 . 1267 Appendix A. Standardization items 1269 The following table includes all the items that will need to be 1270 standardized to provide a full L4S architecture. 1272 The table is too wide for the ASCII draft format, so it has been 1273 split into two, with a common column of row index numbers on the 1274 left. 1276 The columns in the second part of the table have the following 1277 meanings: 1279 WG: The IETF WG most relevant to this requirement. The "tcpm/iccrg" 1280 combination refers to the procedure typically used for congestion 1281 control changes, where tcpm owns the approval decision, but uses 1282 the iccrg for expert review [NewCC_Proc]; 1284 TCP: Applicable to all forms of TCP congestion control; 1286 DCTCP: Applicable to Data Centre TCP as currently used (in 1287 controlled environments); 1289 DCTCP bis: Applicable to an future Data Centre TCP congestion 1290 control intended for controlled environments; 1292 XXX Prague: Applicable to a Scalable variant of XXX (TCP/SCTP/RMCAT) 1293 congestion control. 1295 +-----+------------------------+------------------------------------+ 1296 | Req | Requirement | Reference | 1297 | # | | | 1298 +-----+------------------------+------------------------------------+ 1299 | 0 | ARCHITECTURE | | 1300 | 1 | L4S IDENTIFIER | [I-D.ietf-tsvwg-ecn-l4s-id] | 1301 | 2 | DUAL QUEUE AQM | [I-D.ietf-tsvwg-aqm-dualq-coupled] | 1302 | 3 | Suitable ECN Feedback | [I-D.ietf-tcpm-accurate-ecn], | 1303 | | | [I-D.stewart-tsvwg-sctpecn]. 
1304 |     |                        |                                    |
1305 |     | SCALABLE TRANSPORT -   |                                    |
1306 |     | SAFETY ADDITIONS       |                                    |
1307 | 4-1 | Fall back to           | [I-D.ietf-tsvwg-ecn-l4s-id] S.2.3, |
1308 |     | Reno/Cubic on loss     | [I-D.ietf-tcpm-dctcp]              |
1309 | 4-2 | Fall back to           | [I-D.ietf-tsvwg-ecn-l4s-id] S.2.3  |
1310 |     | Reno/Cubic if classic  |                                    |
1311 |     | ECN bottleneck         |                                    |
1312 |     | detected               |                                    |
1313 |     |                        |                                    |
1314 | 4-3 | Reduce RTT-dependence  | [I-D.ietf-tsvwg-ecn-l4s-id] S.2.3  |
1315 |     |                        |                                    |
1316 | 4-4 | Scaling TCP's          | [I-D.ietf-tsvwg-ecn-l4s-id] S.2.3, |
1317 |     | Congestion Window for  | [TCP-sub-mss-w]                    |
1318 |     | Small Round Trip Times |                                    |
1319 |     | SCALABLE TRANSPORT -   |                                    |
1320 |     | PERFORMANCE            |                                    |
1321 |     | ENHANCEMENTS           |                                    |
1322 | 5-1 | Setting ECT in TCP     | [I-D.bagnulo-tcpm-generalized-ecn] |
1323 |     | Control Packets and    |                                    |
1324 |     | Retransmissions        |                                    |
1325 | 5-2 | Faster-than-additive   | [I-D.ietf-tsvwg-ecn-l4s-id] (Appx  |
1326 |     | increase               | A.2.2)                             |
1327 | 5-3 | Faster Convergence at  | [I-D.ietf-tsvwg-ecn-l4s-id] (Appx  |
1328 |     | Flow Start             | A.2.2)                             |
1329 +-----+------------------------+------------------------------------+
1330 +-----+--------+-----+-------+-----------+--------+--------+--------+
1331 | #   | WG     | TCP | DCTCP | DCTCP-bis | TCP    | SCTP   | RMCAT  |
1332 |     |        |     |       |           | Prague | Prague | Prague |
1333 +-----+--------+-----+-------+-----------+--------+--------+--------+
1334 | 0   | tsvwg  | Y   | Y     | Y         | Y      | Y      | Y      |
1335 | 1   | tsvwg  |     |       | Y         | Y      | Y      | Y      |
1336 | 2   | tsvwg  | n/a | n/a   | n/a       | n/a    | n/a    | n/a    |
1337 |     |        |     |       |           |        |        |        |
1338 |     |        |     |       |           |        |        |        |
1339 |     |        |     |       |           |        |        |        |
1340 | 3   | tcpm   | Y   | Y     | Y         | Y      | n/a    | n/a    |
1341 |     |        |     |       |           |        |        |        |
1342 | 4-1 | tcpm   |     | Y     | Y         | Y      | Y      | Y      |
1343 |     |        |     |       |           |        |        |        |
1344 | 4-2 | tcpm/  |     |       |           | Y      | Y      | ?      |
1345 |     | iccrg? |     |       |           |        |        |        |
1346 |     |        |     |       |           |        |        |        |
1347 |     |        |     |       |           |        |        |        |
1348 |     |        |     |       |           |        |        |        |
1349 |     |        |     |       |           |        |        |        |
1350 | 4-3 | tcpm/  |     |       | Y         | Y      | Y      | ?      |
1351 |     | iccrg? |     |       |           |        |        |        |
1352 | 4-4 | tcpm   | Y   | Y     | Y         | Y      | Y      | ?      |
1353 |     |        |     |       |           |        |        |        |
1354 |     |        |     |       |           |        |        |        |
1355 | 5-1 | tcpm   | Y   | Y     | Y         | Y      | n/a    | n/a    |
1356 |     |        |     |       |           |        |        |        |
1357 | 5-2 | tcpm/  |     |       | Y         | Y      | Y      | ?      |
1358 |     | iccrg? |     |       |           |        |        |        |
1359 | 5-3 | tcpm/  |     |       | Y         | Y      | Y      | ?      |
1360 |     | iccrg? |     |       |           |        |        |        |
1361 +-----+--------+-----+-------+-----------+--------+--------+--------+

1363 Authors' Addresses

1365 Bob Briscoe (editor)
1366 Simula Research Lab

1368 Email: ietf@bobbriscoe.net
1369 URI: http://bobbriscoe.net/

1370 Koen De Schepper
1371 Nokia Bell Labs
1372 Antwerp
1373 Belgium

1375 Email: koen.de_schepper@nokia.com
1376 URI: https://www.bell-labs.com/usr/koen.de_schepper

1378 Marcelo Bagnulo
1379 Universidad Carlos III de Madrid
1380 Av. Universidad 30
1381 Leganes, Madrid 28911
1382 Spain

1384 Phone: 34 91 6249500
1385 Email: marcelo@it.uc3m.es
1386 URI: http://www.it.uc3m.es