Transport Area Working Group                                B. Briscoe, Ed.
Internet-Draft                                          Simula Research Lab
Intended status: Informational                               K. De Schepper
Expires: October 1, 2017                                    Nokia Bell Labs
                                                           M. Bagnulo Braun
                                           Universidad Carlos III de Madrid
                                                             March 30, 2017

    Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service:
                               Architecture
                      draft-briscoe-tsvwg-l4s-arch-02

Abstract

   This document describes the L4S architecture for the provision of a
   new service that the Internet could provide to eventually replace
   best efforts for all traffic: Low Latency, Low Loss, Scalable
   throughput (L4S).  It is becoming common for _all_ (or most)
   applications being run by a user at any one time to require low
   latency.  However, the only solution the IETF can offer for ultra-low
   queuing delay is Diffserv, which only favours a minority of packets
   at the expense of others.  In extensive testing the new L4S service
   keeps average queuing delay under a millisecond for _all_
   applications even under very heavy load, without sacrificing
   utilization; and it keeps congestion loss to zero.  It is becoming
   widely recognized that adding more access capacity gives diminishing
   returns, because latency is becoming the critical problem.  Even with
   a high-capacity broadband access, the reduced latency of L4S
   remarkably and consistently improves performance under load for
   applications such as interactive video, conversational video, voice,
   Web, gaming, instant messaging, remote desktop and cloud-based apps
   (even when all are being used at once over the same access link).
   The insight is that the root cause of queuing delay is in TCP, not in
   the queue.  By fixing the sending TCP (and other transports), queuing
   latency becomes so much better than today that operators will want to
   deploy the network part of L4S to enable new products and services.
   Further, the network part is simple to deploy - incrementally with
   zero-config.
   Both parts, sender and network, ensure coexistence with legacy
   traffic.  At the same time L4S solves the long-recognized problem
   with the future scalability of TCP throughput.

   This document describes the L4S architecture, briefly describing the
   different components and how they work together to provide the
   aforementioned enhanced Internet service.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on October 1, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  L4S architecture overview
   3.  Terminology
   4.  L4S architecture components
   5.  Rationale
     5.1.  Why These Primary Components?
     5.2.  Why Not Alternative Approaches?
   6.  Applicability
     6.1.  Use Cases
     6.2.  Deployment Considerations
       6.2.1.  Deployment Topology
       6.2.2.  Deployment Sequences
       6.2.3.  L4S Flow but Non-L4S Bottleneck
       6.2.4.  Other Potential Deployment Issues
   7.  IANA Considerations
   8.  Security Considerations
     8.1.  Traffic (Non-)Policing
     8.2.  'Latency Friendliness'
     8.3.  Policing Prioritized L4S Bandwidth
     8.4.  ECN Integrity
   9.  Acknowledgements
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Appendix A.  Required features for scalable transport protocols to
                be safely deployable in the Internet (a.k.a. TCP Prague
                requirements)
   Appendix B.  Standardization items
   Authors' Addresses

1.  Introduction

   It is increasingly common for _all_ of a user's applications at any
   one time to require low delay: interactive Web, Web services, voice,
   conversational video, interactive video, instant messaging, online
   gaming, remote desktop and cloud-based applications.  In the last
   decade or so, much has been done to reduce propagation delay by
   placing caches or servers closer to users.  However, queuing remains
   a major, albeit intermittent, component of latency.  When present it
   typically doubles the path delay relative to the base speed-of-light
   delay.  Low loss is also important because, for interactive
   applications, losses translate into even longer retransmission
   delays.

   It has been demonstrated that, once access network bit rates reach
   levels now common in the developed world, increasing capacity offers
   diminishing returns if latency (delay) is not addressed.
   Differentiated services (Diffserv) offers Expedited Forwarding
   [RFC3246] for some packets at the expense of others, but this is not
   applicable when all (or most) of a user's applications require low
   latency.

   Therefore, the goal is an Internet service with ultra-Low queueing
   Latency, ultra-Low Loss and Scalable throughput (L4S) - for _all_
   traffic.  A service for all traffic will need none of the
   configuration or management baggage (traffic policing, traffic
   contracts) associated with favouring some packets over others.  This
   document describes the L4S architecture for achieving that goal.

   It must be said that queuing delay only degrades performance
   infrequently [Hohlfeld14].  It only occurs when a large enough
   capacity-seeking (e.g. TCP) flow is running alongside the user's
   traffic in the bottleneck link, which is typically in the access
   network, or when the low-latency application is itself a large
   capacity-seeking flow (e.g. interactive video).
   At these times, the performance improvement must be so remarkable
   that network operators will be motivated to deploy it.

   Active Queue Management (AQM) is part of the solution to queuing
   under load.  AQM improves performance for all traffic, but there is a
   limit to how much queuing delay can be reduced by solely changing the
   network, without addressing the root of the problem.

   The root of the problem is the presence of standard TCP congestion
   control (Reno [RFC5681]) or compatible variants (e.g. TCP Cubic
   [I-D.ietf-tcpm-cubic]).  We shall call this family of congestion
   controls 'Classic' TCP.  It has been demonstrated that if the sending
   host replaces Classic TCP with a 'Scalable' alternative, and a
   suitable AQM is deployed in the network, the performance under load
   of all the above interactive applications can be stunningly improved.
   For instance, queuing delay under heavy load with the example DCTCP/
   DualQ solution cited below is roughly 1 millisecond (1 ms) at the
   99th percentile, without losing link utilization.  This compares with
   5 to 20 ms on _average_ with a Classic TCP and current state-of-the-
   art AQMs such as fq_CoDel [I-D.ietf-aqm-fq-codel] or PIE [RFC8033].
   Also, with a Classic TCP, 5 ms of queuing is usually only possible by
   losing some utilization.

   It has been convincingly demonstrated [DCttH15] that it is possible
   to deploy such an L4S service alongside the existing best efforts
   service so that all of a user's applications can shift to it when
   their stack is updated.  Access networks are typically designed with
   one link as the bottleneck for each site (which might be a home,
   small enterprise or mobile device), so deployment at a single node
   should give nearly all the benefit.  The L4S approach requires a
   number of mechanisms in different parts of the Internet to fulfill
   its goal.
   This document presents the L4S architecture by describing the
   different components and how they interact to provide the scalable,
   low-latency, low-loss Internet service.

2.  L4S architecture overview

   There are three main components to the L4S architecture (illustrated
   in Figure 1):

   1) Network: The L4S service traffic needs to be isolated from the
      queuing latency of the Classic service traffic.  However, the two
      should be able to freely share a common pool of capacity.  This is
      because there is no way to predict how many flows at any one time
      might use each service, and capacity in access networks is too
      scarce to partition into two.  So a 'semi-permeable' membrane is
      needed that partitions latency but not bandwidth.  The Dual Queue
      Coupled AQM [I-D.briscoe-aqm-dualq-coupled] is an example of such
      a semi-permeable membrane.

      Per-flow queuing such as in [I-D.ietf-aqm-fq-codel] could be used,
      but it partitions both latency and bandwidth between every end-to-
      end flow.  So it is rather overkill, which brings disadvantages
      (see Section 5.2), not least that thousands of queues are needed
      when two are sufficient.

   2) Protocol: A host needs to distinguish L4S and Classic packets with
      an identifier so that the network can classify them into their
      separate treatments.  [I-D.briscoe-tsvwg-ecn-l4s-id] considers
      various alternative identifiers, and concludes that all
      alternatives involve compromises, but the ECT(1) codepoint of the
      ECN field is a workable solution.

   3) Host: Scalable congestion controls already exist.  They solve the
      scaling problem with TCP first pointed out in [RFC3649].  The one
      used most widely (in controlled environments) is Data Centre TCP
      (DCTCP [I-D.ietf-tcpm-dctcp]), which has been implemented and
      deployed in Windows Server Editions (since 2012), in Linux and in
      FreeBSD.
      Although DCTCP as-is 'works' well over the public Internet, most
      implementations lack certain safety features that will be
      necessary once it is used outside controlled environments like
      data centres (see later).  A similar scalable congestion control
      will also need to be transplanted into protocols other than TCP
      (SCTP, RTP/RTCP, RMCAT, etc.).

                    (2)                     (1)
           .-------^------.  .--------------^-------------------.
    ,-(3)-----.                                ______
   ; ________  :            L4S  --------.    |      |
   :|Scalable| :               _\        ||___\_ mark |
   :| sender | :  __________   / /       ||  / |______|\   _________
   :|________|\; |          |/ --------'       ^        \1|         |
    `---------'\_|  IP-ECN  |      Coupling :            \|priority |_\
     ________  / |Classifier|               :            /|scheduler| /
    |Classic |/  |__________|\  --------.  ___:__       / |_________|
    | sender |               \_\        || ||___\_ mark/|
    |________|                 /        || ||  /  drop |
                      Classic  --------'   |______|

   Figure 1: Components of an L4S Solution: 1) Isolation in separate
   network queues; 2) Packet Identification Protocol; and 3) Scalable
   Sending Host

3.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].  In this
   document, these words will appear with that interpretation only when
   in ALL CAPS.  Lower case uses of these words are not to be
   interpreted as carrying RFC-2119 significance.  COMMENT: Since this
   will be an Informational document, this paragraph should be removed.

   Classic service:  The 'Classic' service is intended for all the
      congestion control behaviours that currently co-exist with TCP
      Reno (e.g. TCP Cubic, Compound, SCTP, etc.).

   Low-Latency, Low-Loss and Scalable (L4S) service:  The 'L4S' service
      is intended for traffic from scalable TCP algorithms such as Data
      Centre TCP.
      But it is also more general--it will allow a set of congestion
      controls with similar scaling properties to DCTCP (e.g.
      Relentless [Mathis09]) to evolve.

      Both Classic and L4S services can cope with a proportion of
      unresponsive or less-responsive traffic as well (e.g. DNS, VoIP,
      etc.).

   Scalable Congestion Control:  A congestion control where the flow
      rate is inversely proportional to the level of congestion
      signals.  Then, as the flow rate scales up, the number of
      congestion signals per round trip remains invariant, maintaining
      the same degree of control.  For instance, DCTCP averages 2
      congestion signals per round trip whatever the flow rate.

   Classic Congestion Control:  A congestion control with a flow rate
      compatible with standard TCP Reno [RFC5681].  With Classic
      congestion controls, as capacity increases enabling higher flow
      rates, the number of round trips between congestion signals
      (losses or ECN marks) rises in proportion to the flow rate.  So
      control of queuing and/or utilization becomes very slack.  For
      instance, with 1500 B packets and an RTT of 18 ms, as the TCP
      Reno flow rate increases from 2 to 100 Mb/s, the number of round
      trips between congestion signals rises proportionately, from 2 to
      100.

      The default congestion control in Linux (TCP Cubic) is Reno-
      compatible for most scenarios expected for some years.  For
      instance, with a typical domestic round-trip time (RTT) of 18 ms,
      TCP Cubic only switches out of its Reno-compatibility mode once
      the flow rate approaches 1 Gb/s.  For a typical data centre RTT
      of 1 ms, the switch-over point is theoretically 1.3 Tb/s.
      However, with a less common transcontinental RTT of 100 ms, it
      only remains Reno-compatible up to 13 Mb/s.  All examples assume
      1,500 B packets.
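   The scaling contrast between the two definitions above can be
   sanity-checked with a rough back-of-envelope model.  The sketch
   below is illustrative only (it is not taken from this draft): it
   assumes the standard Reno sawtooth approximation, in which there is
   one congestion signal per sawtooth cycle and each cycle lasts about
   W/2 round trips, where W is the peak window in packets.  The
   function name is invented for this example.

   ```python
   def reno_rtts_between_signals(rate_bps, rtt_s, pkt_bytes=1500):
       """Approximate round trips between congestion signals for Reno.

       Assumes one signal (loss or ECN mark) per sawtooth, and a
       sawtooth of roughly W/2 round trips, with W the peak window
       in packets.
       """
       w = rate_bps * rtt_s / (8 * pkt_bytes)  # window in packets
       return w / 2

   for mbps in (2, 100):
       rtts = reno_rtts_between_signals(mbps * 1e6, 0.018)
       print(f"{mbps} Mb/s -> {rtts:.1f} round trips between signals")
   ```

   With the draft's parameters (1500 B packets, 18 ms RTT) this simple
   model gives roughly 1.5 and 75 round trips between signals at 2 and
   100 Mb/s; the absolute numbers depend on the model chosen, but the
   50-fold proportional growth matches the text, whereas a scalable
   control such as DCTCP holds the signal rate invariant at roughly 2
   marks per round trip at any flow rate.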
   Classic ECN:  The original proposed standard Explicit Congestion
      Notification (ECN) protocol [RFC3168], which requires ECN signals
      to be treated the same as drops, both when generated in the
      network and when responded to by the sender.

   Site:  A home, mobile device, small enterprise or campus, where the
      network bottleneck is typically the access link to the site.  Not
      all network arrangements fit this model but it is a useful,
      widely applicable generalisation.

4.  L4S architecture components

   The L4S architecture is composed of the following elements.

   Protocols:  The L4S architecture encompasses the two protocol
   changes described next:

   a.  [I-D.briscoe-tsvwg-ecn-l4s-id] recommends that ECT(1) is used as
       the identifier to classify L4S and Classic packets into their
       separate treatments, as required by [RFC4774].

   b.  An essential aspect of a scalable congestion control is the use
       of explicit congestion signals rather than losses, because the
       signals need to be sent immediately and frequently--too often to
       use drops.  'Classic' ECN [RFC3168] requires an ECN signal to be
       treated the same as a drop, both when it is generated in the
       network and when it is responded to by hosts.  L4S allows
       networks and hosts to support two separate meanings for ECN.  So
       the standards track [RFC3168] will need to be updated to allow
       ECT(1) packets to depart from the 'same as drop' constraint.

       [I-D.ietf-tsvwg-ecn-experimentation] has been prepared as a
       standards track update to relax specific requirements in RFC
       3168 (and certain other standards track RFCs), which clears the
       way for the above experimental changes proposed for L4S.
       [I-D.ietf-tsvwg-ecn-experimentation] also obsoletes the original
       experimental assignment of the ECT(1) codepoint as an ECN nonce
       [RFC3540] (it was never deployed, and it offers no security
       benefit now that deployment is optional).
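   As a concrete illustration of the classification in (a), a network
   node need only inspect the 2-bit ECN field of the IP header.  The
   sketch below is hypothetical (the function and queue names are
   invented for this example); the codepoint values follow [RFC3168],
   and sending CE-marked packets to the L4S queue follows the choice
   made in the DualQ Coupled AQM draft, since an already-marked packet
   may have started out as ECT(1).

   ```python
   # ECN field codepoints from RFC 3168 (2 LSBs of the IP Traffic
   # Class / TOS byte).
   NOT_ECT = 0b00  # Not ECN-Capable Transport
   ECT0    = 0b10  # ECT(0): ECN-capable, Classic congestion control
   ECT1    = 0b01  # ECT(1): proposed L4S identifier
   CE      = 0b11  # Congestion Experienced (already marked)

   def classify(ecn_field):
       """Return which treatment a packet's 2-bit ECN field selects.

       Hypothetical sketch: ECT(1) selects the L4S treatment per
       [I-D.briscoe-tsvwg-ecn-l4s-id]; CE also goes to the L4S queue,
       as in the DualQ Coupled AQM draft.
       """
       return 'L4S' if ecn_field in (ECT1, CE) else 'Classic'
   ```

   The point of the sketch is that classification stays at the IP
   layer: no transport-layer inspection is needed to separate the two
   treatments.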
   Network components:  The Dual Queue Coupled AQM has been specified
   as generically as possible [I-D.briscoe-aqm-dualq-coupled] as a
   'semi-permeable' membrane, without specifying the particular AQMs to
   use in the two queues.  An informational appendix of the draft
   provides pseudocode examples of different possible AQM approaches.

   Initially a zero-config variant of RED called Curvy RED was
   implemented, tested and documented.  The aim is for designers to be
   free to implement diverse ideas.  So the brief normative body of the
   draft only specifies the minimum constraints an AQM needs to comply
   with to ensure that the L4S and Classic services will coexist.  For
   instance, a variant of PIE called Dual PI Squared [PI2] has been
   implemented and found to perform better over a wide range of
   conditions, so it has been documented in a second appendix of
   [I-D.briscoe-aqm-dualq-coupled].

   Host mechanisms:  The L4S architecture includes a number of
   mechanisms in the end host, enumerated next:

   a.  Data Centre TCP is the most widely used example of a scalable
       congestion control.  It is being documented in the TCPM WG as an
       informational record of the protocol currently in use
       [I-D.ietf-tcpm-dctcp].  It will be necessary to define a number
       of safety features for a variant usable on the public Internet.
       A draft list of these, known as the TCP Prague requirements, has
       been drawn up (see Appendix A).  The list also includes some
       optional performance improvements.

   b.  Transport protocols other than TCP use various congestion
       controls designed to be friendly with Classic TCP.  Before they
       can use the L4S service, it will be necessary to implement
       scalable variants of each of these transport behaviours.  The
       following standards track RFCs currently define these protocols:
       ECN in TCP [RFC3168], in SCTP [RFC4960], in RTP [RFC6679], and
       in DCCP [RFC4340].
       Not all are in widespread use, but those that are will
       eventually need to be updated to allow a different congestion
       response, which they will have to indicate by using the ECT(1)
       codepoint.  Scalable variants are under consideration for some
       new transport protocols that are themselves under development,
       e.g. QUIC [I-D.johansson-quic-ecn] and certain real-time media
       congestion avoidance techniques (RMCAT) protocols.

   c.  ECN feedback is sufficient for L4S in some transport protocols
       (RTCP, DCCP) but not others:

       *  In the case of TCP, the feedback protocol for ECN embeds the
          assumption from Classic ECN that an ECN mark is the same as a
          drop, making it unusable for a scalable TCP.  Therefore,
          implementations of TCP receivers will have to be upgraded
          [RFC7560].  Work to standardize more accurate ECN feedback
          for TCP (AccECN [I-D.ietf-tcpm-accurate-ecn]) is already in
          progress.

       *  ECN feedback is only roughly sketched in an appendix of the
          SCTP specification.  A fuller specification has been proposed
          [I-D.stewart-tsvwg-sctpecn], which would need to be
          implemented and deployed before SCTP could support L4S.

5.  Rationale

5.1.  Why These Primary Components?

   Explicit congestion signalling (protocol):  Explicit congestion
      signalling is a key part of the L4S approach.  In contrast, use
      of drop as a congestion signal creates a tension, because drop is
      both an impairment (less would be better) and a useful signal
      (more would reduce delay).  Explicit congestion signals can be
      used many times per round trip, to keep tight control, without
      any impairment.  Under heavy load, even more explicit signals can
      be applied so that the queue can be kept short whatever the load;
      whereas state-of-the-art AQMs have to introduce very high packet
      drop at high load to keep the queue short.
      Further, TCP's sawtooth reduction can be smaller, and therefore
      return to the operating point more often, without worrying that
      this causes more signals (one at the top of each smaller
      sawtooth).  The consequent smaller-amplitude sawteeth fit between
      a very shallow marking threshold and an empty queue, so delay
      variation can be very low, without risk of under-utilization.

      All the above makes it clear that explicit congestion signalling
      is only advantageous for latency if it does not have to be
      considered 'the same as' drop (as required with Classic ECN
      [RFC3168]).  Therefore, in a DualQ AQM, the L4S queue uses a new
      L4S variant of ECN that is not equivalent to drop
      [I-D.briscoe-tsvwg-ecn-l4s-id], while the Classic queue uses
      either Classic ECN [RFC3168] or drop, which are equivalent.

      Before Classic ECN was standardized, there were various proposals
      to give an ECN mark a different meaning from drop.  However,
      there was no particular reason to agree on any one of the
      alternative meanings, so 'the same as drop' was the only
      compromise that could be reached.  RFC 3168 contains a statement
      that:

         "An environment where all end nodes were ECN-Capable could
         allow new criteria to be developed for setting the CE
         codepoint, and new congestion control mechanisms for end-node
         reaction to CE packets.  However, this is a research issue,
         and as such is not addressed in this document."

   Latency isolation with coupled congestion notification (network):
      Using just two queues is not essential to L4S (more would be
      possible), but it is the simplest way to isolate all the L4S
      traffic that keeps latency low from all the legacy Classic
      traffic that does not.
      Similarly, coupling the congestion notification between the
      queues is not necessarily essential, but it is a clever and
      simple way to allow senders to determine their rate, packet-by-
      packet, rather than be overridden by a network scheduler.
      Otherwise a network scheduler would have to inspect at least
      transport layer headers, and it would have to continually assign
      a rate to each flow without any easy way to understand
      application intent.

   L4S packet identifier (protocol):  Once there are at least two
      separate treatments in the network, hosts need an identifier at
      the IP layer to distinguish which treatment they intend to use.

   Scalable congestion notification (host):  A scalable congestion
      control keeps the signalling frequency high, so that rate
      variations can be small when signalling is stable, and the rate
      can track variations in available capacity as rapidly as
      possible otherwise.

5.2.  Why Not Alternative Approaches?

   All the following approaches address some part of the same problem
   space as L4S.  In each case, it is shown that L4S complements them
   or improves on them, rather than being a mutually exclusive
   alternative:

   Diffserv:  Diffserv addresses the problem of bandwidth apportionment
      for important traffic as well as queuing latency for delay-
      sensitive traffic.  L4S solely addresses the problem of queuing
      latency (as well as loss and throughput scaling).  Diffserv will
      still be necessary where important traffic requires priority
      (e.g. for commercial reasons, or for protection of critical
      infrastructure traffic).  Nonetheless, if there are Diffserv
      classes for important traffic, the L4S approach can provide low
      latency for _all_ traffic within each Diffserv class (including
      the case where there is only one Diffserv class).

      Also, as already explained, Diffserv only works for a small
      subset of the traffic on a link.
      It is not applicable when all the applications in use at one time
      at a single site (home, small business or mobile device) require
      low latency.  Also, because L4S is for all traffic, it needs none
      of the management baggage (traffic policing, traffic contracts)
      associated with favouring some packets over others.  This baggage
      has held Diffserv back from widespread end-to-end deployment.

   State-of-the-art AQMs:  AQMs such as PIE and fq_CoDel give a
      significant reduction in queuing delay relative to no AQM at all.
      The L4S work is intended to complement these AQMs, and we
      definitely do not want to distract from the need to deploy them
      as widely as possible.  Nonetheless, without addressing the large
      sawtoothing rate variations of Classic congestion controls, AQMs
      alone cannot reduce queuing delay very far without significantly
      reducing link utilization.  The L4S approach resolves this
      tension by ensuring hosts can minimize the size of their sawteeth
      without appearing so aggressive to legacy flows that they starve.

   Per-flow queuing:  Similarly, per-flow queuing is not incompatible
      with the L4S approach.  However, one queue for every flow can be
      thought of as overkill compared to the minimum of two queues for
      all traffic needed for the L4S approach.  The overkill of per-
      flow queuing has side-effects:

      A.  fq makes high performance networking equipment costly
          (processing and memory) - in contrast, dual queue code can be
          very simple;

      B.  fq requires packet inspection into the end-to-end transport
          layer, which doesn't sit well alongside encryption for
          privacy - in contrast, a dual queue only operates at the IP
          layer;

      C.  fq isolates the queuing of each flow from the others and it
          prevents any one flow from consuming more than 1/N of the
          capacity.
          In contrast, all L4S flows are expected to keep the queue
          shallow, and policing of individual flows to enforce this may
          be applied separately, as a policy choice.

          An fq scheduler has to decide packet-by-packet which flow to
          schedule without knowing application intent.  Whereas a
          separate policing function can be configured less strictly,
          so that senders can still control the instantaneous rate of
          each flow dependent on the needs of each application (e.g.
          variable rate video), giving more wriggle-room before a flow
          is deemed non-compliant.  Also, policing of queuing and of
          flow rates can be applied independently.

   Alternative Back-off ECN (ABE):  Yet again, L4S is not an
      alternative to ABE but a complement that introduces much lower
      queuing delay.  ABE [I-D.khademi-tcpm-alternativebackoff-ecn]
      alters the host behaviour in response to ECN marking to utilize a
      link better and give ECN flows faster throughput, but it assumes
      the network still treats ECN and drop the same.  Therefore ABE
      exploits any lower queuing delay that AQMs can provide.  But as
      explained above, AQMs still cannot reduce queuing delay very far
      without losing link utilization (for other non-ABE flows).

6.  Applicability

   A transport layer that solves the current latency issues will
   provide new service, product and application opportunities.
   With the L4S approach, the following existing applications will
   immediately experience significantly better quality of experience
   under load in the best effort class:

   o  Gaming

   o  VoIP

   o  Video conferencing

   o  Web browsing

   o  (Adaptive) video streaming

   o  Instant messaging

   The significantly lower queuing latency also enables some
   interactive application functions to be offloaded to the cloud that
   would hardly even be usable today:

   o  Cloud-based interactive video

   o  Cloud-based virtual and augmented reality

   The above two applications have been successfully demonstrated with
   L4S, both running together over a 40 Mb/s broadband access link
   loaded up with the numerous other latency-sensitive applications in
   the previous list as well as numerous downloads.  A panoramic video
   of a football stadium can be swiped and pinched, so that on the fly
   a proxy in the cloud generates a sub-window of the match video under
   the finger-gesture control of each user.  At the same time, a
   virtual reality headset fed from a 360 degree camera in a racing car
   has been demonstrated, where the user's head movements control the
   scene generated in the cloud.  In both cases, with 7 ms end-to-end
   base delay, the additional queuing delay of roughly 1 ms is so low
   that it seems the video is generated locally.  See
   https://riteproject.eu/dctth/ for videos of these demonstrations.

   Using a swiping finger gesture or head movement to pan a video is an
   extremely demanding application--far more demanding than VoIP--
   because human vision can detect extremely low delays of the order of
   single milliseconds when delay is translated into a visual lag
   between a video and a reference point (the finger or the orientation
   of the head).

   If low network delay is not available, all fine interaction has to
   be done locally, and therefore much more redundant data has to be
   downloaded.
When all interactive processing can be done in the 586 cloud, only the data to be rendered for the end user needs to be 587 sent. Whereas, once applications can rely on minimal queues in the 588 network, they can focus on reducing their own latency by only 589 minimizing the application send queue. 591 6.1. Use Cases 593 The following use-cases for L4S are being considered by various 594 interested parties: 596 o Where the bottleneck is one of various types of access network: 597 DSL, cable, mobile, satellite 599 * Radio links (cellular, WiFi) that are distant from the source 600 are particularly challenging. The radio link capacity can vary 601 rapidly by orders of magnitude, so it is often desirable to 602 hold a buffer to utilise sudden increases of capacity; 604 * cellular networks are further complicated by a perceived need 605 to buffer in order to make hand-overs imperceptible; 607 * Satellite networks generally have a very large base RTT, so 608 even with minimal queuing, overall delay can never be extremely 609 low; 611 * Nonetheless, it is certainly desirable not to hold a buffer 612 purely because of the sawteeth of Classic TCP, when it is more 613 than is needed for all the above reasons. 615 o Private networks of heterogeneous data centres, where there is no 616 single administrator that can arrange for all the simultaneous 617 changes to senders, receivers and network needed to deploy DCTCP: 619 * a set of private data centres interconnected over a wide area 620 with separate administrations, but within the same company 622 * a set of data centres operated by separate companies 623 interconnected by a community of interest network (e.g. for the 624 finance sector) 626 * multi-tenant (cloud) data centres where tenants choose their 627 operating system stack (Infrastructure as a Service - IaaS) 629 o Different types of transport (or application) congestion control: 631 * elastic (TCP/SCTP); 633 * real-time (RTP, RMCAT); 635 * query (DNS/LDAP). 
637 o Where low delay quality of service is required, but without 638 inspecting or intervening above the IP layer 639 [I-D.you-encrypted-traffic-management]: 641 * mobile and other networks have tended to inspect higher layers 642 in order to guess application QoS requirements. However, with 643 growing demand for support of privacy and encryption, L4S 644 offers an alternative. There is no need to select which 645 traffic to favour for queuing, when L4S gives favourable 646 queuing to all traffic. 648 o If queuing delay is minimized, applications with a fixed delay 649 budget can communicate over longer distances, or via a longer 650 chain of service functions [RFC7665] or onion routers. 652 6.2. Deployment Considerations 654 The DualQ is, in itself, an incremental deployment framework for L4S 655 AQMs so that L4S traffic can coexist with existing Classic "TCP- 656 friendly" traffic. Section 6.2.1 explains why only deploying AQM in 657 one node at each end of the access link will realize nearly all the 658 benefit. 660 L4S involves both end systems and the network, so Section 6.2.2 661 suggests some typical sequences to deploy each part, and why there 662 will be an immediate and significant benefit after deploying just one 663 part. 665 If an ECN-enabled DualQ AQM has not been deployed at a bottleneck, an 666 L4S flow is required to include a fall-back strategy to Classic 667 behaviour. Section 6.2.3 describes how an L4S flow detects this, and 668 how to minimize the effect of false negative detection. 670 6.2.1. Deployment Topology 672 Nonetheless, DualQ AQMs will not have to be deployed throughout the 673 Internet before L4S will work for anyone. Operators of public 674 Internet access networks typically design their networks so that the 675 bottleneck will nearly always occur at one known (logical) link. 676 This confines the cost of queue management technology to one place. 678 The case of mesh networks is different and will be discussed later. 
679 But the known bottleneck case is generally true for Internet access 680 to all sorts of different 'sites', where the word 'site' includes 681 home networks, small-to-medium sized campus or enterprise networks 682 and even cellular devices (Figure 2). Also, this known-bottleneck 683 case tends to be true whatever the access link technology; whether 684 xDSL, cable, cellular, line-of-sight wireless or satellite. 686 Therefore, the full benefit of the L4S service should be available in 687 the downstream direction when the DualQ AQM is deployed at the 688 ingress to this bottleneck link (or links for multihomed sites). And 689 similarly, the full upstream service will be available once the DualQ 690 is deployed at the upstream ingress.

   [Figure 2 depicts DualQ AQMs ('DQ') deployed in pairs at each end of
   the access links that connect a core network to an enterprise/campus
   network, a home network and a mobile device; a data centre ('DC')
   attaches directly to the core.]

709 Figure 2: Likely location of DualQ Deployments in common access 710 topologies 712 Deployment in mesh topologies depends on how over-booked the core is. 713 If the core is non-blocking, or at least generously provisioned so 714 that the edges are nearly always the bottlenecks, it would only be 715 necessary to deploy the DualQ AQM at the edge bottlenecks. For 716 example, some datacentre networks are designed with the bottleneck in 717 the hypervisor or host NICs, while others bottleneck at the top-of- 718 rack switch (both the output ports facing hosts and those facing the 719 core). 721 The DualQ would eventually also need to be deployed at any other 722 persistent bottlenecks such as network interconnections, e.g. some 723 public Internet exchange points and the ingress and egress to WAN 724 links interconnecting datacentres. 726 6.2.2.
Deployment Sequences 728 For any one L4S flow to work, it requires 3 parts to have been 729 deployed. This was the same deployment problem that ECN faced 730 [I-D.iab-protocol-transitions], so we have learned from this. 732 Firstly, L4S deployment exploits the fact that DCTCP already exists 733 on many Internet hosts (Windows, FreeBSD and Linux); both servers and 734 clients. Therefore, just deploying DualQ AQM at a network bottleneck 735 immediately gives a working deployment of all the L4S parts. DCTCP 736 has some safety concerns that need to be fixed for general use over the 737 public Internet (see Appendix A), but DCTCP is not on by default, so 738 these issues can be managed within controlled deployments or 739 controlled trials. 741 Secondly, the performance improvement with L4S is so significant that 742 it enables new interactive services and products that were not 743 previously possible. It is much easier for companies to initiate new 744 work on deployment if there is budget for a new product trial. If, 745 in contrast, there were only an incremental performance improvement 746 (as with Classic ECN), spending on deployment tends to be much harder 747 to justify. 749 Thirdly, the L4S identifier is defined so that initially network 750 operators can enable L4S exclusively for certain customers or certain 751 applications. But this is carefully defined so that it does not 752 compromise future evolution towards L4S as an Internet-wide service. 753 This is because the L4S identifier is defined not only as the end-to- 754 end ECN field, but it can also optionally be combined with any other 755 packet header or some status of a customer or their access link. 756 Operators could do this anyway, even if it were not blessed by the 757 IETF. However, it is best for the IETF to specify that they must use 758 their own local identifier in combination with the IETF's identifier.
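As a purely illustrative sketch (not part of any L4S specification: the function, the ECN constants and the example prefix are assumptions for illustration), such a combined classification rule might look like:

```python
# Sketch of classifying packets into the L4S or Classic queue using the
# end-to-end ECN identifier, optionally combined with an operator-local
# rule (here an example source-address prefix).  Illustrative only.
from ipaddress import ip_address, ip_network

ECT1, CE = 0b01, 0b11  # IP-ECN codepoints treated as the L4S identifier

def classify(ecn: int, src: str, local_prefix=None) -> str:
    """Return the queue ('L4S' or 'Classic') for a packet.

    If local_prefix is set, the operator additionally requires the
    source address to match it (the optional local-use rule)."""
    is_l4s = ecn in (ECT1, CE)
    if local_prefix is not None:
        is_l4s = is_l4s and ip_address(src) in ip_network(local_prefix)
    return "L4S" if is_l4s else "Classic"
```

Removing the local-prefix rule is then the single configuration change that widens the service from selected customers to Internet-wide use.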
759 Then, if an operator enables the optional local-use approach, they 760 only have to remove this extra rule to make the service work 761 Internet-wide - it will already traverse middleboxes, peerings, etc.

   +-+--------------------+----------------------+---------------------+
   | | Servers or proxies |      Access link     |       Clients       |
   +-+--------------------+----------------------+---------------------+
   |1| DCTCP (existing)   |                      | DCTCP (existing)    |
   | |                    | DualQ AQM downstream |                     |
   | |       WORKS DOWNSTREAM FOR CONTROLLED DEPLOYMENTS/TRIALS        |
   +-+--------------------+----------------------+---------------------+
   |2| TCP Prague         |                      | AccECN (already in  |
   | |                    |                      | progress:DCTCP/BBR) |
   | |                      FULLY WORKS DOWNSTREAM                     |
   +-+--------------------+----------------------+---------------------+
   |3|                    | DualQ AQM upstream   | TCP Prague          |
   | |                    |                      |                     |
   | |              FULLY WORKS UPSTREAM AND DOWNSTREAM                |
   +-+--------------------+----------------------+---------------------+

   Figure 3: Example L4S Deployment Sequences

781 Figure 3 illustrates some example sequences in which the parts of L4S 782 might be deployed. It consists of the following stages: 784 1. Here, the immediate benefit of a single AQM deployment can be 785 seen, but limited to a controlled trial or controlled deployment. 786 In this example downstream deployment is first, but in other 787 scenarios the upstream might go first. The DualQ AQM also 788 greatly improves the downstream Classic service, assuming no 789 other AQM has already been deployed. 791 2. In this stage, the name 'TCP Prague' is used to represent a 792 variant of DCTCP that is safe to use in a production environment. 793 If the application is primarily unidirectional, 'TCP Prague' is 794 only needed at one end. Accurate ECN feedback (AccECN) 795 [I-D.ietf-tcpm-accurate-ecn] is needed at the other end, but it 796 is a generic ECN feedback facility that is already planned to be 797 deployed for other purposes, e.g.
DCTCP, BBR [BBR]. The two 798 ends can be deployed in either order, because TCP Prague only 799 enables itself if it has negotiated the use of AccECN feedback 800 with the other end during the connection handshake. Thus, 801 deployment on both ends (and in some cases only one) enables L4S 802 trials to move to a production service, in one direction. This 803 stage might be further motivated by the performance improvements 804 of TCP Prague over DCTCP (Appendix A). 806 3. This is a two-move stage to enable L4S upstream. The DualQ or 807 TCP Prague can be deployed in either order as already explained. 808 To motivate the first of two independent moves, the deferred 809 benefit of enabling new services after the second move has to be 810 worth it to cover the first mover's investment risk. As 811 explained already, the potential for new services provides this 812 motivation. The DualQ AQM also greatly improves the upstream 813 Classic service, assuming no other AQM has already been deployed. 815 Note that other deployment sequences might occur. For instance: the 816 upstream might be deployed first; a non-TCP protocol might be used 817 end-to-end, e.g. QUIC, RMCAT; a body such as the 3GPP might require 818 L4S to be implemented in 5G user equipment, or other random acts of 819 kindness. 821 6.2.3. L4S Flow but Non-L4S Bottleneck 823 If L4S is enabled between two hosts but there is no L4S AQM at the 824 bottleneck, any drop from the bottleneck will trigger the L4S sender 825 to fall back to a 'TCP-Friendly' behaviour (Requirement #4.1 in 826 Appendix A). 828 Unfortunately, as well as protecting legacy traffic, this rule 829 degrades the L4S service whenever there is a loss, even if the loss 830 was not from a non-DualQ bottleneck (false negative). And 831 unfortunately, prevalent drop can be due to other causes, e.g.: 833 o congestion loss at other transient bottlenecks, e.g. due to bursts 834 in shallower queues; 836 o transmission errors, e.g.
due to electrical interference; 838 o rate policing. 840 Three complementary approaches are in progress, but all are 841 currently at the research stage: 843 o In TCP Prague, use a similar approach to BBR [BBR] to ignore 844 selected losses. This could mask any of the above types of loss 845 (requires consensus on how to safely interoperate with drop-based 846 congestion controls). 848 o A combination of RACK, reconfigured link retransmission and L4S 849 could address transmission errors (no reference yet); 851 o Hybrid ECN/drop policers (see Section 8.3). 853 L4S deployment scenarios that minimize these issues (e.g. over 854 wireline networks) can proceed in parallel to this research, in the 855 expectation that research success will continually widen L4S 856 applicability. 858 In recent studies there has been no evidence of Classic ECN support 859 in AQMs on the Internet. If Classic ECN support does materialize, a 860 way to satisfy Requirement #4.2 in Appendix A will have to be added 861 to TCP Prague. 863 6.2.4. Other Potential Deployment Issues 865 An L4S AQM uses the ECN field to signal congestion. So, in common 866 with Classic ECN, if the AQM is within a tunnel or at a lower layer, 867 correct functioning of ECN signalling requires correct propagation of 868 the ECN field up the layers [I-D.ietf-tsvwg-ecn-encap-guidelines]. 870 7. IANA Considerations 872 This specification contains no IANA considerations. 874 8. Security Considerations 876 8.1. Traffic (Non-)Policing 878 Because the L4S service can serve all traffic that is using the 879 capacity of a link, it should not be necessary to police access to 880 the L4S service. In contrast, Diffserv only works if some packets 881 get less favourable treatment than others. So it has to use traffic 882 policers to limit how much traffic can be favoured. In turn, traffic 883 policers require traffic contracts between users and networks as well 884 as pairwise between networks.
Because L4S will lack all this 885 management complexity, it is more likely to work end-to-end. 887 During early deployment (and perhaps always), some networks will not 888 offer the L4S service. These networks do not need to police or re- 889 mark L4S traffic - they just forward it unchanged as best efforts 890 traffic, as they would already forward traffic with ECT(1) today. At 891 a bottleneck, such networks will introduce some queuing and dropping. 892 When a scalable congestion control detects a drop it will have to 893 respond as if it is a Classic congestion control (see Requirement #4.1 in 894 Appendix A). This will ensure safe interworking with other traffic 895 at the 'legacy' bottleneck, but it will degrade the L4S service to no 896 better (but never worse) than classic best efforts, whenever a legacy 897 (non-L4S) bottleneck is encountered on a path. 899 Certain network operators might choose to restrict access to the L4S 900 class, perhaps only to customers who have paid a premium. Their 901 packet classifier (item 2 in Figure 1) could identify such customers 902 against some other field (e.g. source address range) as well as ECN. 903 If only the ECN L4S identifier matched, but not the source address 904 (say), the classifier could direct these packets (from non-paying 905 customers) into the Classic queue. Allowing operators to use an 906 additional local classifier is intended to remove any incentive to 907 bleach the L4S identifier. Then at least the L4S ECN identifier will 908 be more likely to survive end-to-end even though the service may not 909 be supported at every hop. Such arrangements would only require 910 simple registered/not-registered packet classification, rather than 911 the managed application-specific traffic policing against customer- 912 specific traffic contracts that Diffserv requires. 914 8.2.
'Latency Friendliness' 916 The L4S service does rely on self-constraint - not in terms of 917 limiting capacity usage, but in terms of limiting burstiness. It is 918 hoped that standardisation of dynamic behaviour (cf. TCP slow-start) 919 and self-interest will be sufficient to prevent transports from 920 sending excessive bursts of L4S traffic, given the application's own 921 latency will suffer most from such behaviour. 923 Whether burst policing becomes necessary remains to be seen. Without 924 it, there will be potential for attacks on the low latency of the L4S 925 service. However, it may only be necessary to apply such policing 926 reactively, e.g. punitively targeted at any deployments of new bursty 927 malware. 929 8.3. Policing Prioritized L4S Bandwidth 931 As mentioned in Section 5.2, L4S should remove the need for low 932 latency Diffserv classes. However, those Diffserv classes that give 933 certain applications or users priority over capacity would still be 934 applicable. Then, within such Diffserv classes, L4S would often be 935 applicable to give traffic low latency and low loss. Within such a 936 class, the bandwidth available to a user or application is often 937 limited by a rate policer. Similarly, in the default Diffserv class, 938 rate policers are used to partition shared capacity. 940 A classic rate policer drops any packets exceeding a set rate, 941 usually also giving a burst allowance (variants exist where the 942 policer re-marks non-compliant traffic to a discard-eligible Diffserv 943 codepoint, so they may be dropped elsewhere during contention). In 944 networks that deploy L4S and use rate policers, it will be preferable 945 to deploy a policer designed to be more friendly to the L4S service. 947 This is currently a research area. It might be achieved by setting a 948 threshold where ECN marking is introduced, such that it is just under 949 the policed rate or just under the burst allowance where drop is 950 introduced.
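A minimal sketch of this mark-before-drop idea, assuming a single-rate token bucket (the class name, parameters and thresholds are illustrative assumptions, not a specified design, and a real policer would also need a policy for Classic ECN traffic):

```python
# Illustrative single-rate token-bucket policer that CE-marks L4S
# packets shortly before it would start dropping, rather than dropping
# with no warning.  Purely a sketch of the idea in the text.
class EcnTokenBucketPolicer:
    def __init__(self, rate_bps, burst_bytes, mark_threshold_bytes):
        self.rate = rate_bps / 8.0            # token fill rate, bytes/s
        self.burst = burst_bytes              # bucket depth (burst allowance)
        self.tokens = float(burst_bytes)
        self.mark_threshold = mark_threshold_bytes
        self.last = 0.0

    def police(self, now, pkt_len, l4s):
        # Refill tokens for the elapsed time, capped at the burst depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < pkt_len:
            return "drop"                     # allowance exhausted
        self.tokens -= pkt_len
        if l4s and self.tokens < self.mark_threshold:
            return "mark"                     # early ECN signal to back off
        return "forward"
```

The ECN mark arrives while tokens remain, giving an L4S source time to slow down before the policer resorts to drop.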
This could be applied to various types of policer, e.g. 951 [RFC2697], [RFC2698] or the local (non-ConEx) variant of the ConEx 952 congestion policer [I-D.briscoe-conex-policing]. Otherwise, whenever 953 L4S traffic encounters a rate policer, it will experience drops and 954 the source will fall back to a Classic congestion control, thus 955 losing all the benefits of L4S. 957 Further discussion of the applicability of L4S to the various 958 Diffserv classes, and the design of suitable L4S rate policers, is left for future work. 960 8.4. ECN Integrity 962 Receiving hosts can fool a sender into downloading faster by 963 suppressing feedback of ECN marks (or of losses if retransmissions 964 are not necessary or available otherwise). [RFC3540] proposes that a 965 TCP sender could pseudorandomly set either of ECT(0) or ECT(1) in 966 each packet of a flow and remember the sequence it had set, termed 967 the ECN nonce. If the receiver supports the nonce, it can prove that 968 it is not suppressing feedback by reflecting its knowledge of the 969 sequence back to the sender. The nonce was proposed on the 970 assumption that receivers might be more likely to cheat congestion 971 control than senders (although senders also have a motive to cheat). 973 If L4S uses the ECT(1) codepoint of ECN for packet classification, it 974 will have to obsolete the experimental nonce. As far as is known, 975 the ECN Nonce has never been deployed, and it was only implemented 976 for a couple of testbed evaluations. It would be nearly impossible 977 to deploy now, because any misbehaving receiver can simply opt-out, 978 which would be unremarkable given all receivers currently opt-out. 980 Other ways to protect TCP feedback integrity have since been 981 developed. For instance: 983 o the sender can test the integrity of the receiver's feedback by 984 occasionally setting the IP-ECN field to a value normally only set 985 by the network.
Then it can test whether the receiver's feedback 986 faithfully reports what it expects [I-D.moncaster-tcpm-rcv-cheat]. 987 This method consumes no extra codepoints. It works for loss and 988 it will work for ECN feedback in any transport protocol suitable 989 for L4S. However, it shares the same assumption as the nonce: 990 that the sender is not cheating and is motivated to prevent the 991 receiver cheating; 993 o A network can enforce a congestion response to its ECN markings 994 (or packet losses) by auditing congestion exposure (ConEx) 995 [RFC7713]. Whether the receiver or a downstream network is 996 suppressing congestion feedback or the sender is unresponsive to 997 the feedback, or both, ConEx audit can neutralise any advantage 998 that any of these three parties would otherwise gain. ConEx is 999 currently only defined for IPv6 and consumes a destination option 1000 header. It has been implemented, but not deployed as far as is 1001 known. 1003 9. Acknowledgements 1005 Thanks to Wes Eddy, Karen Nielsen and David Black for their useful 1006 review comments. 1008 10. References 1010 10.1. Normative References 1012 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1013 Requirement Levels", BCP 14, RFC 2119, 1014 DOI 10.17487/RFC2119, March 1997, 1015 . 1017 10.2. Informative References 1019 [Alizadeh-stability] 1020 Alizadeh, M., Javanmard, A., and B. Prabhakar, "Analysis 1021 of DCTCP: Stability, Convergence, and Fairness", ACM 1022 SIGMETRICS 2011, June 2011. 1024 [BBR] Cardwell, N., Cheng, Y., Gunn, C., Yeganeh, S., and V. 1025 Jacobson, "BBR: Congestion-Based Congestion Control; 1026 Measuring bottleneck bandwidth and round-trip propagation 1027 time", ACM Queue (14)5, December 2016. 1029 [DCttH15] De Schepper, K., Bondarenko, O., Tsang, I., and B. 1030 Briscoe, "'Data Centre to the Home': Ultra-Low Latency for 1031 All", 2015, . 1034 (Under submission) 1036 [Hohlfeld14] 1037 Hohlfeld, O., Pujol, E., Ciucu, F., Feldmann, A., and P.
1038 Barford, "A QoE Perspective on Sizing Network Buffers", 1039 Proc. ACM Internet Measurement Conf (IMC'14), November 1040 2014. 1042 [I-D.briscoe-aqm-dualq-coupled] 1043 Schepper, K., Briscoe, B., Bondarenko, O., and I. Tsang, 1044 "DualQ Coupled AQM for Low Latency, Low Loss and Scalable 1045 Throughput", draft-briscoe-aqm-dualq-coupled-01 (work in 1046 progress), March 2016. 1048 [I-D.briscoe-conex-policing] 1049 Briscoe, B., "Network Performance Isolation using 1050 Congestion Policing", draft-briscoe-conex-policing-01 1051 (work in progress), February 2014. 1053 [I-D.briscoe-tsvwg-ecn-l4s-id] 1054 Schepper, K., Briscoe, B., and I. Tsang, "Identifying 1055 Modified Explicit Congestion Notification (ECN) Semantics 1056 for Ultra-Low Queuing Delay", draft-briscoe-tsvwg-ecn-l4s- 1057 id-02 (work in progress), October 2016. 1059 [I-D.iab-protocol-transitions] 1060 Thaler, D., "Planning for Protocol Adoption and Subsequent 1061 Transitions", draft-iab-protocol-transitions-08 (work in 1062 progress), March 2017. 1064 [I-D.ietf-aqm-fq-codel] 1065 Hoeiland-Joergensen, T., McKenney, P., 1066 dave.taht@gmail.com, d., Gettys, J., and E. Dumazet, "The 1067 FlowQueue-CoDel Packet Scheduler and Active Queue 1068 Management Algorithm", draft-ietf-aqm-fq-codel-06 (work in 1069 progress), March 2016. 1071 [I-D.ietf-tcpm-accurate-ecn] 1072 Briscoe, B., Kuehlewind, M., and R. Scheffenegger, "More 1073 Accurate ECN Feedback in TCP", draft-ietf-tcpm-accurate- 1074 ecn-02 (work in progress), October 2016. 1076 [I-D.ietf-tcpm-cubic] 1077 Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and 1078 R. Scheffenegger, "CUBIC for Fast Long-Distance Networks", 1079 draft-ietf-tcpm-cubic-04 (work in progress), February 1080 2017. 1082 [I-D.ietf-tcpm-dctcp] 1083 Bensley, S., Thaler, D., Balasubramanian, P., Eggert, L., 1084 and G. Judd, "Datacenter TCP (DCTCP): TCP Congestion 1085 Control for Datacenters", draft-ietf-tcpm-dctcp-05 (work 1086 in progress), March 2017.
1088 [I-D.ietf-tsvwg-ecn-encap-guidelines] 1089 Briscoe, B., Kaippallimalil, J., and P. Thaler, 1090 "Guidelines for Adding Congestion Notification to 1091 Protocols that Encapsulate IP", draft-ietf-tsvwg-ecn- 1092 encap-guidelines-08 (work in progress), March 2017. 1094 [I-D.ietf-tsvwg-ecn-experimentation] 1095 Black, D., "Explicit Congestion Notification (ECN) 1096 Experimentation", draft-ietf-tsvwg-ecn-experimentation-01 1097 (work in progress), March 2017. 1099 [I-D.johansson-quic-ecn] 1100 Johansson, I., "ECN support in QUIC", draft-johansson- 1101 quic-ecn-01 (work in progress), February 2017. 1103 [I-D.khademi-tcpm-alternativebackoff-ecn] 1104 Khademi, N., Welzl, M., Armitage, G., and G. Fairhurst, 1105 "TCP Alternative Backoff with ECN (ABE)", draft-khademi- 1106 tcpm-alternativebackoff-ecn-01 (work in progress), October 1107 2016. 1109 [I-D.moncaster-tcpm-rcv-cheat] 1110 Moncaster, T., Briscoe, B., and A. Jacquet, "A TCP Test to 1111 Allow Senders to Identify Receiver Non-Compliance", draft- 1112 moncaster-tcpm-rcv-cheat-03 (work in progress), July 2014. 1114 [I-D.stewart-tsvwg-sctpecn] 1115 Stewart, R., Tuexen, M., and X. Dong, "ECN for Stream 1116 Control Transmission Protocol (SCTP)", draft-stewart- 1117 tsvwg-sctpecn-05 (work in progress), January 2014. 1119 [I-D.you-encrypted-traffic-management] 1120 You, J. and C. Xiong, "The Effect of Encrypted Traffic on 1121 the QoS Mechanisms in Cellular Networks", draft-you- 1122 encrypted-traffic-management-00 (work in progress), 1123 October 2015. 1125 [Mathis09] 1126 Mathis, M., "Relentless Congestion Control", PFLDNeT'09 , 1127 May 2009, . 1130 [NewCC_Proc] 1131 Eggert, L., "Experimental Specification of New Congestion 1132 Control Algorithms", IETF Operational Note ion-tsv-alt-cc, 1133 July 2007. 1135 [PI2] De Schepper, K., Bondarenko, O., Tsang, I., and B. 1136 Briscoe, "PI^2 : A Linearized AQM for both Classic and 1137 Scalable TCP", Proc. ACM CoNEXT 2016 pp.105-119, December 1138 2016, 1139 . 
1141 [RFC2697] Heinanen, J. and R. Guerin, "A Single Rate Three Color 1142 Marker", RFC 2697, DOI 10.17487/RFC2697, September 1999, 1143 . 1145 [RFC2698] Heinanen, J. and R. Guerin, "A Two Rate Three Color 1146 Marker", RFC 2698, DOI 10.17487/RFC2698, September 1999, 1147 . 1149 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition 1150 of Explicit Congestion Notification (ECN) to IP", 1151 RFC 3168, DOI 10.17487/RFC3168, September 2001, 1152 . 1154 [RFC3246] Davie, B., Charny, A., Bennet, J., Benson, K., Le Boudec, 1155 J., Courtney, W., Davari, S., Firoiu, V., and D. 1156 Stiliadis, "An Expedited Forwarding PHB (Per-Hop 1157 Behavior)", RFC 3246, DOI 10.17487/RFC3246, March 2002, 1158 . 1160 [RFC3540] Spring, N., Wetherall, D., and D. Ely, "Robust Explicit 1161 Congestion Notification (ECN) Signaling with Nonces", 1162 RFC 3540, DOI 10.17487/RFC3540, June 2003, 1163 . 1165 [RFC3649] Floyd, S., "HighSpeed TCP for Large Congestion Windows", 1166 RFC 3649, DOI 10.17487/RFC3649, December 2003, 1167 . 1169 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1170 Congestion Control Protocol (DCCP)", RFC 4340, 1171 DOI 10.17487/RFC4340, March 2006, 1172 . 1174 [RFC4774] Floyd, S., "Specifying Alternate Semantics for the 1175 Explicit Congestion Notification (ECN) Field", BCP 124, 1176 RFC 4774, DOI 10.17487/RFC4774, November 2006, 1177 . 1179 [RFC4960] Stewart, R., Ed., "Stream Control Transmission Protocol", 1180 RFC 4960, DOI 10.17487/RFC4960, September 2007, 1181 . 1183 [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion 1184 Control", RFC 5681, DOI 10.17487/RFC5681, September 2009, 1185 . 1187 [RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P., 1188 and K. Carlberg, "Explicit Congestion Notification (ECN) 1189 for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August 1190 2012, . 1192 [RFC7560] Kuehlewind, M., Ed., Scheffenegger, R., and B. 
Briscoe, 1193 "Problem Statement and Requirements for Increased Accuracy 1194 in Explicit Congestion Notification (ECN) Feedback", 1195 RFC 7560, DOI 10.17487/RFC7560, August 2015, 1196 . 1198 [RFC7665] Halpern, J., Ed. and C. Pignataro, Ed., "Service Function 1199 Chaining (SFC) Architecture", RFC 7665, 1200 DOI 10.17487/RFC7665, October 2015, 1201 . 1203 [RFC7713] Mathis, M. and B. Briscoe, "Congestion Exposure (ConEx) 1204 Concepts, Abstract Mechanism, and Requirements", RFC 7713, 1205 DOI 10.17487/RFC7713, December 2015, 1206 . 1208 [RFC8033] Pan, R., Natarajan, P., Baker, F., and G. White, 1209 "Proportional Integral Controller Enhanced (PIE): A 1210 Lightweight Control Scheme to Address the Bufferbloat 1211 Problem", RFC 8033, DOI 10.17487/RFC8033, February 2017, 1212 . 1214 [TCP-sub-mss-w] 1215 Briscoe, B. and K. De Schepper, "Scaling TCP's Congestion 1216 Window for Small Round Trip Times", BT Technical Report 1217 TR-TUB8-2015-002, May 2015, 1218 . 1221 [TCPPrague] 1222 Briscoe, B., "Notes: DCTCP evolution 'bar BoF': Tue 21 Jul 1223 2015, 17:40, Prague", tcpprague mailing list archive, 1224 July 2015. 1226 Appendix A. Required features for scalable transport protocols to be 1227 safely deployable in the Internet (a.k.a. TCP Prague 1228 requirements) 1230 This appendix contains a list of features, mechanisms and modifications 1231 to currently defined behaviour for scalable transport protocols so 1232 that they can be safely deployed over the public Internet. This list 1233 of requirements was produced at an ad hoc meeting during IETF-94 in 1234 Prague [TCPPrague]. 1236 One such scalable transport protocol is DCTCP, currently 1237 specified in [I-D.ietf-tcpm-dctcp]. In its current form, DCTCP is 1238 specified to be deployable in controlled environments, and deploying 1239 it in the public Internet would lead to a number of issues, both from 1240 the safety and the performance perspective.
In this section, we 1241 describe the modifications and additional mechanisms that are 1242 required for its deployment over the global Internet. We use DCTCP 1243 as a base, but it is likely that most of these requirements equally 1244 apply to other scalable transport protocols. 1246 We next provide a brief description of each required feature. 1248 Requirement #4.1: Fall back to Reno/Cubic congestion control on 1249 packet loss. 1251 Description: In case of packet loss, the scalable transport MUST 1252 react as classic TCP (whatever the classic version of TCP is running 1253 in the host, e.g. Reno, Cubic). 1255 Motivation: Part of the safety conditions for deploying a scalable 1256 transport over the public Internet is to make sure that it behaves 1257 properly when some or all of the network devices connecting the two 1258 endpoints that implement the scalable transport have not been 1259 upgraded. In particular, it may be the case that some of the 1260 switches along the path between the two endpoints only react to 1261 congestion by dropping packets (i.e. no ECN marking). It is 1262 important that in these cases, the scalable transport reacts to the 1263 congestion signal in the form of a packet drop similarly to classic 1264 TCP. 1266 In the particular case of DCTCP, the current DCTCP specification 1267 states that "It is RECOMMENDED that an implementation deal with loss 1268 episodes in the same way as conventional TCP." For safe deployment 1269 in the public Internet of a scalable transport, the above requirement 1270 needs to be defined as a MUST. 1272 Packet loss, while rare, may also occur in the case that the 1273 bottleneck is L4S capable. In this case, the sender may receive a 1274 high number of packets marked with the CE bit set and also experience 1275 a loss. Current DCTCP implementations react differently to this 1276 situation. At least one implementation reacts only to the drop 1277 signal (e.g.
by halving the CWND) and at least another DCTCP 1278 implementation reacts to both signals (e.g. by halving the CWND due 1279 to the drop and also further reducing the CWND based on the 1280 proportion of marked packets). We believe that further 1281 experimentation is needed to understand what the best behaviour 1282 for the public Internet is, which may or may not be one of the existing 1283 implementations. 1285 Requirement #4.2: Fall back to Reno/Cubic congestion control on 1286 classic ECN bottlenecks. 1288 Description: The scalable transport protocol SHOULD/MAY? behave as 1289 classic TCP with classic ECN if the path contains a legacy bottleneck 1290 which marks both ECT(0) and ECT(1) in the same way as drop (a non-L4S, 1291 but ECN capable, bottleneck). 1293 Motivation: Similarly to Requirement #4.1, this requirement is a 1294 safety condition in case L4S-capable endpoints are communicating over 1295 a path that contains one or more non-L4S but ECN capable switches and 1296 one of them happens to be the bottleneck. In this case, the scalable 1297 transport will attempt to fill the buffer of the bottleneck switch 1298 up to the marking threshold and produce a small sawtooth around that 1299 operation point. The result is that the switch will set its 1300 operation point with the buffer full and all other non-scalable 1301 transports will be starved (as they will react by reducing their CWND 1302 more aggressively than the scalable transport). 1304 Scalable transports then MUST be able to detect the presence of a 1305 classic ECN bottleneck and fall back to classic TCP/classic ECN 1306 behaviour in this case. 1308 Discussion: It is not clear at this point if it is possible to design 1309 a mechanism that always detects the aforementioned cases. One 1310 possibility is to base the detection on an increase on top of a 1311 minimum RTT, but it is not yet clear which value should trigger this.
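One shape such a delay-based detector might take is sketched below (purely illustrative: the 2 ms queuing-delay threshold and the EWMA gain are arbitrary assumptions, not values from this document, which notes that the right trigger value is not yet clear):

```python
# Illustrative sketch of a delay-based heuristic for detecting a
# classic (RFC 3168) ECN bottleneck: CE marks arriving together with a
# persistently deep queue suggest a classic AQM rather than a shallow
# L4S marking threshold.  Threshold and gain are arbitrary assumptions.
class ClassicEcnDetector:
    def __init__(self, queue_delay_threshold=0.002, gain=1 / 16):
        self.min_rtt = float("inf")   # estimate of base (propagation) RTT
        self.srtt = None              # smoothed RTT (EWMA)
        self.gain = gain
        self.threshold = queue_delay_threshold

    def on_ack(self, rtt_sample, ce_marked):
        """Return True if this ACK suggests a classic ECN bottleneck."""
        self.min_rtt = min(self.min_rtt, rtt_sample)
        self.srtt = rtt_sample if self.srtt is None else (
            (1 - self.gain) * self.srtt + self.gain * rtt_sample)
        queuing_delay = self.srtt - self.min_rtt
        return ce_marked and queuing_delay > self.threshold
```

A sender using such a heuristic would fall back to a classic ECN response once the detector fires persistently; how persistently is exactly the open question discussed above.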
Having a delay-based fallback response in L4S may also be beneficial for preserving low latency even in the absence of legacy network nodes. Even if it is possible to design such a mechanism, it may well encompass additional complexity that implementers consider unnecessary. The need for this mechanism depends on the extent of classic ECN deployment.

Requirement #4.3: Reduce RTT dependence.

Description: Scalable transport congestion control algorithms MUST reduce or eliminate the RTT bias within the range of RTTs available.

Motivation: Classic TCP's throughput is known to be inversely proportional to RTT. One would expect flows over very low RTT paths to nearly starve flows over larger RTTs. However, because classic TCP induces a large queue, it has so far never allowed a very low RTT path to exist. For instance, consider two paths with base RTTs of 1ms and 100ms. If classic TCP induces a 20ms queue, it turns these RTTs into 21ms and 120ms, leading to a throughput ratio of about 1:6. Whereas if a scalable TCP induces only a 1ms queue, the ratio is 2:101, i.e. about 1:50. Therefore, with such small queues, long-RTT flows would essentially starve.

Scalable transport protocols MUST therefore accommodate flows across the range of RTTs enabled by the deployment of the L4S service over the public Internet.

Requirement #4.4: Scaling down the congestion window.

Description: Scalable transports MUST be responsive to congestion when RTTs are significantly smaller than in the current public Internet.

Motivation: As currently specified, the minimum CWND of TCP (and of scalable extensions such as DCTCP) is set to 2 MSS. Once this minimum CWND is reached, the transport protocol ceases to react to congestion signals (the CWND is not reduced beyond this minimum size).
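The rate floor implied by this minimum window can be computed directly: once the CWND reaches 2 MSS, the sending rate cannot fall below 2 * MSS * 8 / RTT. The following sketch (the function name is illustrative, not from any specification) reproduces this calculation:

```python
def min_tcp_rate_bps(mss_bytes: int, rtt_s: float,
                     min_cwnd_segments: int = 2) -> float:
    """Lowest rate a TCP sender can run at once its CWND hits the floor.

    Below this rate the congestion control can no longer respond to
    congestion signals by shrinking the window further.
    """
    return min_cwnd_segments * mss_bytes * 8 / rtt_s

# 1500 B MSS and a 6 ms RTT give a floor of about 4 Mb/s per flow,
# matching the residential scenario discussed in the text.
rate = min_tcp_rate_bps(1500, 0.006)
```

With a 40Mbps access link, this floor means no more than ten such flows can share the link before the aggregate becomes unresponsive.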
L4S mechanisms significantly reduce queueing delay, achieving smaller RTTs over the Internet. For the same CWND, smaller RTTs imply higher transmission rates. The result is that when scalable transports are used and small RTTs are achieved, the minimum CWND of 2 MSS may still result in a high transmission rate in a large number of common scenarios. For example, as described in [TCP-sub-mss-w], consider a residential setting with a broadband Internet access of 40Mbps. Suppose a number of equal TCP flows run in parallel, with the Internet access link being the bottleneck, and that for these flows the RTT is 6ms and the MSS is 1500B. The minimum transmission rate supported by TCP in this scenario is reached when the CWND is at 2 MSS, which results in 4Mbps per flow. This means that if the number of flows is higher than 10, the congestion control ceases to be responsive and starts to build up a queue in the network.

In order to address this issue, the congestion control mechanism for scalable transports MUST be responsive across the new range of RTTs resulting from the decrease in queueing delay.

There are several ways this can be achieved. One possible sub-MSS window mechanism is described in [TCP-sub-mss-w].

In addition to the safety requirements described above, there are some optimizations that, while not required for the safe deployment of scalable transports over the public Internet, would result in improved performance. We describe them next.

Optimization #5.1: Setting ECT in SYN, SYN/ACK and pure ACK packets.

Description: Scalable transports SHOULD set the ECT bit in SYN, SYN/ACK and pure ACK packets.
Motivation: Failing to set the ECT bit in SYN, SYN/ACK or pure ACK packets makes these packets more likely to be dropped during congestion events. Dropping SYN and SYN/ACK packets is particularly bad for performance, as the retransmission timers for these packets are large. [RFC3168] prohibits marking these packets for security reasons. The arguments provided there should be revisited in the context of L4S, to evaluate whether avoiding marking these packets is still the best approach.

Optimization #5.2: Faster-than-additive increase.

Description: Scalable transports MAY support faster-than-additive increase in the congestion avoidance phase.

Motivation: As currently defined, DCTCP uses additive increase in the congestion avoidance phase. It would be beneficial for performance to update the congestion control algorithm to increase the CWND by more than 1 MSS per RTT during congestion avoidance. In the context of L4S, such a mechanism must also provide fairness with other classes of traffic, including classic TCP and possibly scalable TCPs that use additive increase.

Optimization #5.3: Faster convergence to fairness.

Description: Scalable transports SHOULD converge to a fair-share allocation of the available capacity as fast as classic TCP, or faster.

Motivation: The time required for a new flow to obtain its fair share of the bottleneck capacity when there are already ongoing flows using up all that capacity is higher for DCTCP than for classic TCP (about a factor of 1.5 to 2 larger according to [Alizadeh-stability]). This is detrimental in general, but it is particularly harmful for short flows, whose performance can be worse than that obtained with classic TCP. For this reason it is desirable that scalable transports provide convergence times no larger than classic TCP's.
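The interaction of the safety requirements above, in particular the loss fallback of Requirement #4.1, can be illustrated with a toy per-round window update. This is a sketch, not an implementation of any specification: the EWMA gain g = 1/16 follows the DCTCP literature, but the class, method names and window floor handling are illustrative only.

```python
class ScalableCC:
    """Toy window update combining a DCTCP-style ECN response with a
    classic (Reno-like) multiplicative decrease on loss (Req #4.1).
    Illustrative sketch only; constants and names are not from any spec.
    """

    def __init__(self, cwnd: float = 10.0, g: float = 1.0 / 16):
        self.cwnd = cwnd   # congestion window, in MSS
        self.alpha = 0.0   # EWMA of the fraction of CE-marked packets
        self.g = g         # EWMA gain, as in the DCTCP literature

    def on_round(self, acked: int, marked: int, loss: bool) -> None:
        # Update the moving average of the marking fraction.
        frac = marked / acked if acked else 0.0
        self.alpha = (1 - self.g) * self.alpha + self.g * frac
        if loss:
            # Requirement #4.1: fall back to the classic reaction
            # (halve the window), never reacting less than classic TCP.
            self.cwnd = max(2.0, self.cwnd / 2)
        elif marked:
            # DCTCP-style reduction, proportional to the marking level.
            self.cwnd = max(2.0, self.cwnd * (1 - self.alpha / 2))
        else:
            self.cwnd += 1.0  # additive increase (cf. Optimization #5.2)
```

Note the `max(2.0, ...)` floor: this is exactly the 2 MSS minimum that Requirement #4.4 argues should be revisited for very small RTTs.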
Appendix B.  Standardization items

The following table includes all the items that should be standardized to provide a full L4S architecture.

The table is too wide for the ASCII draft format, so it has been split into two, with a common column of row index numbers on the left.

The columns in the second part of the table have the following meanings:

WG:  The IETF WG most relevant to this requirement.  The "tcpm/iccrg"
   combination refers to the procedure typically used for congestion
   control changes, where tcpm owns the approval decision, but uses
   the iccrg for expert review [NewCC_Proc];

TCP:  Applicable to all forms of TCP congestion control;

DCTCP:  Applicable to Data Centre TCP as currently used (in
   controlled environments);

DCTCP-bis:  Applicable to a future Data Centre TCP congestion
   control intended for controlled environments;

XXX Prague:  Applicable to a Scalable variant of XXX (TCP/SCTP/RMCAT)
   congestion control.

+-----+-----------------------+-------------------------------------+
| Req | Requirement           | Reference                           |
| #   |                       |                                     |
+-----+-----------------------+-------------------------------------+
| 0   | ARCHITECTURE          |                                     |
| 1   | L4S IDENTIFIER        | [I-D.briscoe-tsvwg-ecn-l4s-id]      |
| 2   | DUAL QUEUE AQM        | [I-D.briscoe-aqm-dualq-coupled]     |
| 3   | Suitable ECN Feedback | [I-D.ietf-tcpm-accurate-ecn],       |
|     |                       | [I-D.stewart-tsvwg-sctpecn].        |
|     |                       |                                     |
|     | SCALABLE TRANSPORT -  |                                     |
|     | SAFETY ADDITIONS      |                                     |
| 4-1 | Fall back to          | [I-D.ietf-tcpm-dctcp]               |
|     | Reno/Cubic on loss    |                                     |
| 4-2 | Fall back to          |                                     |
|     | Reno/Cubic if classic |                                     |
|     | ECN bottleneck        |                                     |
|     | detected              |                                     |
| 4-3 | Reduce RTT-dependence |                                     |
| 4-4 | Scaling TCP's         | [TCP-sub-mss-w]                     |
|     | Congestion Window for |                                     |
|     | Small Round Trip      |                                     |
|     | Times                 |                                     |
|     |                       |                                     |
|     | SCALABLE TRANSPORT -  |                                     |
|     | PERFORMANCE           |                                     |
|     | ENHANCEMENTS          |                                     |
| 5-1 | Setting ECT in SYN,   | draft-bagnulo-tsvwg-generalized-ECN |
|     | SYN/ACK and pure ACK  |                                     |
|     | packets               |                                     |
| 5-2 | Faster-than-additive  |                                     |
|     | increase              |                                     |
| 5-3 | Faster convergence to |                                     |
|     | fairness              |                                     |
+-----+-----------------------+-------------------------------------+

+-----+--------+-----+-------+-----------+--------+--------+--------+
| #   | WG     | TCP | DCTCP | DCTCP-bis | TCP    | SCTP   | RMCAT  |
|     |        |     |       |           | Prague | Prague | Prague |
+-----+--------+-----+-------+-----------+--------+--------+--------+
| 0   | tsvwg? | Y   | Y     | Y         | Y      | Y      | Y      |
| 1   | tsvwg? |     |       | Y         | Y      | Y      | Y      |
| 2   | aqm?   | n/a | n/a   | n/a       | n/a    | n/a    | n/a    |
| 3   | tcpm   | Y   | Y     | Y         | Y      | n/a    | n/a    |
| 4-1 | tcpm   |     | Y     | Y         | Y      | Y      | Y      |
| 4-2 | tcpm/  |     |       |           | Y      | Y      | ?      |
|     | iccrg? |     |       |           |        |        |        |
| 4-3 | tcpm/  |     |       | Y         | Y      | Y      | ?      |
|     | iccrg? |     |       |           |        |        |        |
| 4-4 | tcpm   | Y   | Y     | Y         | Y      | Y      | ?      |
| 5-1 | tsvwg  | Y   | Y     | Y         | Y      | n/a    | n/a    |
| 5-2 | tcpm/  |     |       | Y         | Y      | Y      | ?      |
|     | iccrg? |     |       |           |        |        |        |
| 5-3 | tcpm/  |     |       | Y         | Y      | Y      | ?      |
|     | iccrg? |     |       |           |        |        |        |
+-----+--------+-----+-------+-----------+--------+--------+--------+

Authors' Addresses

   Bob Briscoe (editor)
   Simula Research Lab

   Email: ietf@bobbriscoe.net
   URI:   http://bobbriscoe.net/

   Koen De Schepper
   Nokia Bell Labs
   Antwerp
   Belgium

   Email: koen.de_schepper@nokia.com
   URI:   https://www.bell-labs.com/usr/koen.de_schepper

   Marcelo Bagnulo
   Universidad Carlos III de Madrid
   Av. Universidad 30
   Leganes, Madrid 28911
   Spain

   Phone: 34 91 6249500
   Email: marcelo@it.uc3m.es
   URI:   http://www.it.uc3m.es