2 Transport Area Working Group B. Briscoe, Ed. 3 Internet-Draft CableLabs 4 Intended status: Informational K. De Schepper 5 Expires: January 9, 2020 Nokia Bell Labs 6 M. Bagnulo Braun 7 Universidad Carlos III de Madrid 8 G. White 9 CableLabs 10 July 8, 2019 12 Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: 13 Architecture 14 draft-ietf-tsvwg-l4s-arch-04 16 Abstract 18 This document describes the L4S architecture for the provision of a 19 new Internet service that could eventually replace best efforts for 20 all traffic: Low Latency, Low Loss, Scalable throughput (L4S). It is 21 becoming common for _all_ (or most) applications being run by a user 22 at any one time to require low latency. However, the only solution 23 the IETF can offer for ultra-low queuing delay is Diffserv, which 24 only favours a minority of packets at the expense of others. In 25 extensive testing the new L4S service keeps average queuing delay 26 under a millisecond for _all_ applications even under very heavy 27 load, without sacrificing utilization; and it keeps congestion loss 28 to zero. It is becoming widely recognized that adding more access 29 capacity gives diminishing returns, because latency is becoming the 30 critical problem.
Even with a high capacity broadband access, the 31 reduced latency of L4S remarkably and consistently improves 32 performance under load for applications such as interactive video, 33 conversational video, voice, Web, gaming, instant messaging, remote 34 desktop and cloud-based apps (even when all being used at once over 35 the same access link). The insight is that the root cause of queuing 36 delay is in TCP, not in the queue. By fixing the sending TCP (and 37 other transports), queuing latency becomes so much better than today 38 that operators will want to deploy the network part of L4S to enable 39 new products and services. Further, the network part is simple to 40 deploy - incrementally with zero-config. Both parts, sender and 41 network, ensure coexistence with other legacy traffic. At the same 42 time L4S solves the long-recognized problem with the future 43 scalability of TCP throughput. 45 This document describes the L4S architecture, briefly describing the 46 different components and how they work together to provide the 47 aforementioned enhanced Internet service. 49 Status of This Memo 51 This Internet-Draft is submitted in full conformance with the 52 provisions of BCP 78 and BCP 79. 54 Internet-Drafts are working documents of the Internet Engineering 55 Task Force (IETF). Note that other groups may also distribute 56 working documents as Internet-Drafts. The list of current Internet- 57 Drafts is at https://datatracker.ietf.org/drafts/current/. 59 Internet-Drafts are draft documents valid for a maximum of six months 60 and may be updated, replaced, or obsoleted by other documents at any 61 time. It is inappropriate to use Internet-Drafts as reference 62 material or to cite them other than as "work in progress." 64 This Internet-Draft will expire on January 9, 2020. 66 Copyright Notice 68 Copyright (c) 2019 IETF Trust and the persons identified as the 69 document authors. All rights reserved.
71 This document is subject to BCP 78 and the IETF Trust's Legal 72 Provisions Relating to IETF Documents 73 (https://trustee.ietf.org/license-info) in effect on the date of 74 publication of this document. Please review these documents 75 carefully, as they describe your rights and restrictions with respect 76 to this document. Code Components extracted from this document must 77 include Simplified BSD License text as described in Section 4.e of 78 the Trust Legal Provisions and are provided without warranty as 79 described in the Simplified BSD License. 81 Table of Contents 83 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 84 2. L4S Architecture Overview . . . . . . . . . . . . . . . . . . 4 85 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 86 4. L4S Architecture Components . . . . . . . . . . . . . . . . . 7 87 5. Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . 10 88 5.1. Why These Primary Components? . . . . . . . . . . . . . . 10 89 5.2. Why Not Alternative Approaches? . . . . . . . . . . . . . 12 90 6. Applicability . . . . . . . . . . . . . . . . . . . . . . . . 15 91 6.1. Applications . . . . . . . . . . . . . . . . . . . . . . 15 92 6.2. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . 16 93 6.3. Deployment Considerations . . . . . . . . . . . . . . . . 17 94 6.3.1. Deployment Topology . . . . . . . . . . . . . . . . . 18 95 6.3.2. Deployment Sequences . . . . . . . . . . . . . . . . 19 96 6.3.3. L4S Flow but Non-L4S Bottleneck . . . . . . . . . . . 21 97 6.3.4. Other Potential Deployment Issues . . . . . . . . . . 23 98 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 23 99 8. Security Considerations . . . . . . . . . . . . . . . . . . . 23 100 8.1. Traffic (Non-)Policing . . . . . . . . . . . . . . . . . 23 101 8.2. 'Latency Friendliness' . . . . . . . . . . . . . . . . . 24 102 8.3. Interaction between Rate Policing and L4S . . . . . . . . 24 103 8.4. ECN Integrity . . . . . . 
. . . . . . . . . . . . . . . . 25 104 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 26 105 10. References . . . . . . . . . . . . . . . . . . . . . . . . . 26 106 10.1. Normative References . . . . . . . . . . . . . . . . . . 26 107 10.2. Informative References . . . . . . . . . . . . . . . . . 26 108 Appendix A. Standardization items . . . . . . . . . . . . . . . 32 109 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 34 111 1. Introduction 113 It is increasingly common for _all_ of a user's applications at any 114 one time to require low delay: interactive Web, Web services, voice, 115 conversational video, interactive video, interactive remote presence, 116 instant messaging, online gaming, remote desktop, cloud-based 117 applications and video-assisted remote control of machinery and 118 industrial processes. In the last decade or so, much has been done 119 to reduce propagation delay by placing caches or servers closer to 120 users. However, queuing remains a major, albeit intermittent, 121 component of latency. For instance, spikes of hundreds of 122 milliseconds are common. During a long-running flow, even with 123 state-of-the-art active queue management (AQM), the base speed-of- 124 light path delay roughly doubles. Low loss is also important 125 because, for interactive applications, losses translate into even 126 longer retransmission delays. 128 It has been demonstrated that, once access network bit rates reach 129 levels now common in the developed world, increasing capacity offers 130 diminishing returns if latency (delay) is not addressed. 131 Differentiated services (Diffserv) offers Expedited Forwarding (EF 132 [RFC3246]) for some packets at the expense of others, but this is not 133 sufficient when all (or most) of a user's applications require low 134 latency. 136 Therefore, the goal is an Internet service with ultra-Low queueing 137 Latency, ultra-Low Loss and Scalable throughput (L4S) - for _all_ 138 traffic.
A service for all traffic will need none of the 139 configuration or management baggage (traffic policing, traffic 140 contracts) associated with favouring some packets over others. This 141 document describes the L4S architecture for achieving that goal. 143 It must be said that queuing delay only degrades performance 144 infrequently [Hohlfeld14]. It only occurs when a large enough 145 capacity-seeking (e.g. TCP) flow is running alongside the user's 146 traffic in the bottleneck link, which is typically in the access 147 network, or when the low latency application is itself a large 148 capacity-seeking flow (e.g. interactive video). At these times, the 149 performance improvement from L4S must be so remarkable that network 150 operators will be motivated to deploy it. 152 Active Queue Management (AQM) is part of the solution to queuing 153 under load. AQM improves performance for all traffic, but there is a 154 limit to how much queuing delay can be reduced solely by changing the 155 network, without addressing the root of the problem. 157 The root of the problem is the presence of standard TCP congestion 158 control (Reno [RFC5681]) or compatible variants (e.g. TCP Cubic 159 [RFC8312]). We shall call this family of congestion controls 160 'Classic' TCP. It has been demonstrated that if the sending host 161 replaces Classic TCP with a 'Scalable' alternative, when a suitable 162 AQM is deployed in the network the performance under load of all the 163 above interactive applications can be stunningly improved. For 164 instance, queuing delay under heavy load with the example DCTCP/DualQ 165 solution cited below is roughly 1 millisecond (1 to 2 ms) at the 99th 166 percentile without losing link utilization. This compares with 5 to 167 20 ms on _average_ with a Classic TCP and current state-of-the-art 168 AQMs such as fq_CoDel [RFC8290] or PIE [RFC8033] and about 20-30 ms 169 at the 99th percentile.
Also, with a Classic TCP, 5 ms of queuing is 170 usually only possible by losing some utilization. 172 It has been convincingly demonstrated [DCttH15] that it is possible 173 to deploy such an L4S service alongside the existing best efforts 174 service so that all of a user's applications can shift to it when 175 their stack is updated. Access networks are typically designed with 176 one link as the bottleneck for each site (which might be a home, 177 small enterprise or mobile device), so deployment at a single network 178 node should give nearly all the benefit. The L4S approach also 179 requires component mechanisms at the endpoints to fulfill its goal. 180 This document presents the L4S architecture by describing the 181 different components and how they interact to provide the scalable 182 low-latency, low-loss, Internet service. 184 2. L4S Architecture Overview 186 There are three main components to the L4S architecture (illustrated 187 in Figure 1): 189 1) Network: L4S traffic needs to be isolated from the queuing 190 latency of Classic traffic. However, the two should be able to 191 freely share a common pool of capacity. This is because there is 192 no way to predict how many flows at any one time might use each 193 service and capacity in access networks is too scarce to partition 194 into two. The Dual Queue Coupled AQM 195 [I-D.ietf-tsvwg-aqm-dualq-coupled] was developed as a minimal 196 complexity solution to this problem. The two queues appear to be 197 separated by a 'semi-permeable' membrane that partitions latency 198 but not bandwidth (explained later). 200 Per-flow queuing such as in [RFC8290] could be used (see 201 Section 4), but it partitions both latency and bandwidth between 202 every end-to-end flow. So it is rather overkill, which brings 203 disadvantages (see Section 5.2), not least that a large number of 204 queues is needed when two are sufficient.
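To make the 'semi-permeable membrane' idea concrete, the toy Python sketch below illustrates the coupling described in [I-D.ietf-tsvwg-aqm-dualq-coupled]: a base probability p' driven by the Classic queue is squared before being applied to Classic traffic, while L4S traffic is ECN-marked with at least k times p'. The function names and the value of the coupling factor k are illustrative assumptions, not part of the specification.

```python
# Toy sketch (illustrative, not normative) of the DualQ coupling.
# p_base is the base probability p' driven by the Classic queue's AQM.

K = 2.0  # coupling factor between the queues (assumed value)

def classic_signal_prob(p_base: float) -> float:
    """Classic queue drops/marks with p'^2 (the PI2 'squaring')."""
    return min(1.0, p_base ** 2)

def l4s_mark_prob(p_base: float, p_l4s_native: float) -> float:
    """L4S queue marks with at least K * p', so L4S flows leave room
    for Classic flows as if all shared one FIFO queue."""
    return min(1.0, max(p_l4s_native, K * p_base))
```

With p' = 0.1, Classic traffic sees a 1% signal probability while L4S traffic sees at least 20% ECN marking; a much higher but harmless marking rate is what lets L4S senders keep their rate sawteeth shallow.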
206 2) Protocol: A host needs to distinguish L4S and Classic packets 207 with an identifier so that the network can classify them into 208 their separate treatments. [I-D.ietf-tsvwg-ecn-l4s-id] considers 209 various alternative identifiers, and concludes that all 210 alternatives involve compromises, but the ECT(1) and CE codepoints 211 of the ECN field represent a workable solution. 213 3) Host: Scalable congestion controls already exist. They solve the 214 scaling problem with TCP that was first pointed out in [RFC3649]. 215 The one used most widely (in controlled environments) is Data 216 Center TCP (DCTCP [RFC8257]), which has been implemented and 217 deployed in Windows Server Editions (since 2012), in Linux and in 218 FreeBSD. Although DCTCP as-is 'works' well over the public 219 Internet, most implementations lack certain safety features that 220 will be necessary once it is used outside controlled environments 221 like data centres (see later). A similar scalable congestion 222 control will also need to be transplanted into protocols other 223 than TCP (QUIC, SCTP, RTP/RTCP, RMCAT, etc.) Indeed, between the 224 present document being drafted and published, the following 225 scalable congestion controls were implemented: TCP Prague 226 [PragueLinux], QUIC Prague and an L4S variant of the RMCAT SCReAM 227 controller [RFC8298]. 229 (2) (1) 230 .-------^------. .--------------^-------------------. 231 ,-(3)-----. ______ 232 ; ________ : L4S --------. | | 233 :|Scalable| : _\ ||___\_| mark | 234 :| sender | : __________ / / || / |______|\ _________ 235 :|________|\; | |/ --------' ^ \1|condit'nl| 236 `---------'\_| IP-ECN | Coupling : \|priority |_\ 237 ________ / |Classifier| : /|scheduler| / 238 |Classic |/ |__________|\ --------. 
___:__ / |_________| 239 | sender | \_\ || | |||___\_| mark/|/ 240 |________| / || | ||| / | drop | 241 Classic --------' |______| 243 Figure 1: Components of an L4S Solution: 1) Isolation in separate 244 network queues; 2) Packet Identification Protocol; and 3) Scalable 245 Sending Host 247 3. Terminology 249 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 250 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 251 document are to be interpreted as described in [RFC2119]. In this 252 document, these words will appear with that interpretation only when 253 in ALL CAPS. Lower case uses of these words are not to be 254 interpreted as carrying RFC-2119 significance. COMMENT: Since this 255 will be an Informational document, this boilerplate should be removed. 257 Classic service: The 'Classic' service is intended for all the 258 congestion control behaviours that currently co-exist with TCP 259 Reno (e.g. TCP Cubic, Compound, SCTP, etc). 261 Low-Latency, Low-Loss and Scalable (L4S) service: The 'L4S' service 262 is intended for traffic from scalable TCP algorithms such as Data 263 Center TCP. But it is also more general--it will allow a set of 264 congestion controls with similar scaling properties to DCTCP (e.g. 265 Relentless [Mathis09]) to evolve. 267 Both Classic and L4S services can cope with a proportion of 268 unresponsive or less-responsive traffic as well (e.g. DNS, VoIP, 269 etc). 271 Scalable Congestion Control: A congestion control where the packet 272 flow rate per round trip (the window) is inversely proportional to 273 the level (probability) of congestion signals. Then, as flow rate 274 scales, the number of congestion signals per round trip remains 275 invariant, maintaining the same degree of control. For instance, 276 DCTCP averages 2 congestion signals per round-trip whatever the 277 flow rate.
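The contrast with a Classic control can be seen with some back-of-envelope arithmetic. The Python sketch below uses the rough approximation that a Reno-like flow receives one congestion signal per sawtooth of about W/2 round trips (where W is the window in packets), whereas a DCTCP-like control averages 2 signals per round trip at any rate; the constants and the W/2 approximation are assumptions for illustration.

```python
# Back-of-envelope sketch: how signalling frequency scales with flow rate.

MSS_BITS = 1500 * 8  # 1500 B packets

def window_pkts(rate_bps: float, rtt_s: float) -> float:
    """Window (packets in flight) for a given rate and RTT."""
    return rate_bps * rtt_s / MSS_BITS

def reno_rtts_per_signal(rate_bps: float, rtt_s: float) -> float:
    """Reno-like: one signal per sawtooth, lasting roughly W/2 round
    trips, so the gap between signals grows with the flow rate."""
    return window_pkts(rate_bps, rtt_s) / 2

SCALABLE_SIGNALS_PER_RTT = 2.0  # DCTCP average, invariant with rate
```

At an 18 ms RTT, raising the rate from 2 Mb/s to 100 Mb/s multiplies the gap between the Reno-like flow's congestion signals by 50, while the scalable control still averages 2 signals every round trip.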
279 Classic Congestion Control: A congestion control with a flow rate 280 that can co-exist with standard TCP Reno [RFC5681] without 281 starvation. With Classic congestion controls, as capacity 282 increases enabling higher flow rates, the number of round trips 283 between congestion signals (losses or ECN marks) rises in 284 proportion to the flow rate. So control of queuing and/or 285 utilization becomes very slack. For instance, with 1500 B packets 286 and an RTT of 18 ms, as TCP Reno flow rate increases from 2 to 100 287 Mb/s the number of round trips between congestion signals rises 288 proportionately, from 2 to 100. 290 The default congestion control in Linux (TCP Cubic) is Reno- 291 compatible for most Internet access scenarios expected for some 292 years. For instance, with a typical domestic round-trip time 293 (RTT) of 18 ms, TCP Cubic only switches out of Reno-compatibility 294 mode once the flow rate approaches 1 Gb/s. For a typical data 295 centre RTT of 1 ms, the switch-over point is theoretically 1.3 Tb/ 296 s. However, with a less common transcontinental RTT of 100 ms, it 297 only remains Reno-compatible up to 13 Mb/s. All examples assume 298 1,500 B packets. 300 Classic ECN: The original proposed standard Explicit Congestion 301 Notification (ECN) protocol [RFC3168], which requires ECN signals 302 to be treated the same as drops, both when generated in the 303 network and when responded to by the sender. 305 Site: A home, mobile device, small enterprise or campus, where the 306 network bottleneck is typically the access link to the site. Not 307 all network arrangements fit this model but it is a useful, widely 308 applicable generalisation. 310 4. L4S Architecture Components 312 The L4S architecture is composed of the following elements. 314 Protocols: The L4S architecture encompasses the two identifier changes 315 (an unassignment and an assignment) and optional further identifiers: 317 a.
An essential aspect of a scalable congestion control is the use 318 of explicit congestion signals rather than losses, because the 319 signals need to be sent immediately and frequently. 'Classic' 320 ECN [RFC3168] requires an ECN signal to be treated the same as a 321 drop, both when it is generated in the network and when it is 322 responded to by hosts. L4S needs networks and hosts to support a 323 different meaning for ECN: 325 * much more frequent signals--too often to use drops; 327 * immediately tracking every fluctuation of the queue--too soon 328 to commit to dropping packets. 330 So the standards track [RFC3168] has had to be updated to allow 331 L4S packets to depart from the 'same as drop' constraint. 332 [RFC8311] is a standards track update to relax specific 333 requirements in RFC 3168 (and certain other standards track 334 RFCs), which clears the way for the experimental changes proposed 335 for L4S. [RFC8311] also reclassifies the original experimental 336 assignment of the ECT(1) codepoint as an ECN nonce [RFC3540] as 337 historic. 339 b. [I-D.ietf-tsvwg-ecn-l4s-id] recommends ECT(1) is used as the 340 identifier to classify L4S packets into a separate treatment from 341 Classic packets. This satisfies the requirements for identifying 342 an alternative ECN treatment in [RFC4774]. 344 The CE codepoint is used to indicate Congestion Experienced by 345 both L4S and Classic treatments. This raises the concern that a 346 Classic AQM earlier on the path might have marked some ECT(0) 347 packets as CE. Then these packets will be erroneously classified 348 into the L4S queue. [I-D.ietf-tsvwg-ecn-l4s-id] explains why 5 349 unlikely eventualities all have to coincide for this to have any 350 detrimental effect, which even then would only involve a 351 vanishingly small likelihood of a spurious retransmission. 353 c. 
A network operator might wish to include certain unresponsive, 354 non-L4S traffic in the L4S queue if it is deemed to be paced 355 smoothly enough and of low enough rate not to build a queue. For 356 instance, VoIP, low rate datagrams to sync online games, 357 relatively low rate application-limited traffic, DNS, LDAP, etc. 358 This traffic would need to be tagged with specific identifiers, 359 e.g. a low latency Diffserv Codepoint such as Expedited 360 Forwarding (EF [RFC3246]), Non-Queue-Building (NQB 361 [I-D.white-tsvwg-nqb]), or operator-specific identifiers. 363 Network components: The L4S architecture encompasses either dual- 364 queue or per-flow queue solutions: 366 a. The Dual Queue Coupled AQM has been specified as generically as 367 possible [I-D.ietf-tsvwg-aqm-dualq-coupled] as a 'semi-permeable' 368 membrane without specifying the particular AQMs to use in the two 369 queues. Informational appendices of the draft provide 370 pseudocode examples of different possible AQM approaches. The 371 aim is for designers to be free to implement diverse ideas. So 372 the brief normative body of the draft only specifies the minimum 373 constraints an AQM needs to comply with to ensure that the L4S 374 and Classic services will coexist. The core idea is the tension 375 between the scheduler's prioritization of L4S over Classic and 376 the coupling from the Classic to the L4S AQM. The L4S AQM 377 derives its level of ECN marking from the maximum of the 378 congestion levels in both queues. So L4S flows leave enough 379 space between their packets for Classic flows, as if they were 380 all the same type of TCP, all sharing one FIFO queue. 382 Initially a zero-config variant of RED called Curvy RED was 383 implemented, tested and documented.
Then, a variant of PIE 384 called DualPI2 (pronounced Dual PI Squared) [PI2] was implemented 385 and found to perform better than Curvy RED over a wide range of 386 conditions, so it was documented in another appendix of 387 [I-D.ietf-tsvwg-aqm-dualq-coupled]. 389 b. A scheduler with per-flow queues can be used for L4S. It would 390 be simple to modify an existing design such as FQ-CoDel or FQ- 391 PIE, although this has not been implemented and evaluated because 392 the goal of the original proponents of L4S was to avoid per-flow 393 scheduling. 395 The idea would be to implement two AQMs (Classic and Scalable) 396 and switch each per-flow queue to use an instance of the 397 appropriate AQM for the flow, based on the ECN codepoints of the 398 packets. Flows of non-ECN or ECT(0) packets would use a Classic 399 AQM such as CoDel or PIE, while flows of ECT(1) packets without 400 any ECT(0) packets would use a simple shallow threshold AQM with 401 immediate (unsmoothed) marking. The FQ scheduler might work as 402 is, because it is likely that L4S flows would be continually 403 categorized as 'new' flows. However, this presumption has not 404 been tested under a wide range of conditions. A variant of FQ- 405 CoDel already exists that adapts to a shallower threshold AQM for 406 ECN-capable packets. 408 Host mechanisms: The L4S architecture includes a number of mechanisms 409 in the end host that we enumerate next: 411 a. Data Center TCP is the most widely used example of a scalable 412 congestion control. It has been documented as an informational 413 record of the protocol currently in use [RFC8257]. It will be 414 necessary to define a number of safety features for a variant 415 usable on the public Internet. A draft list of these, known as 416 the TCP Prague requirements, has been drawn up (see Appendix A of 417 [I-D.ietf-tsvwg-ecn-l4s-id]). The list also includes some 418 optional performance improvements. 420 b. 
Transport protocols other than TCP use various congestion 421 controls designed to be friendly with Classic TCP. Before they 422 can use the L4S service, it will be necessary to implement 423 scalable variants of each of these congestion control behaviours. 424 The following standards track RFCs currently define these 425 protocols: ECN in TCP [RFC3168], in SCTP [RFC4960], in RTP 426 [RFC6679], and in DCCP [RFC4340]. Not all are in widespread use, 427 but those that are will eventually need to be updated to allow a 428 different congestion response, which they will have to indicate 429 by using the ECT(1) codepoint. Scalable variants are under 430 consideration for some new transport protocols that are 431 themselves under development, e.g. QUIC 432 [I-D.ietf-quic-transport] and certain real-time media congestion 433 avoidance techniques (RMCAT) protocols. 435 c. ECN feedback is sufficient for L4S in some transport protocols 436 (RTCP, DCCP) but not others: 438 * For the case of TCP, the feedback protocol for ECN embeds the 439 assumption from Classic ECN that an ECN mark is the same as a 440 drop, making it unusable for a scalable TCP. Therefore, the 441 implementation of TCP receivers will have to be upgraded 442 [RFC7560]. Work to standardize and implement more accurate 443 ECN feedback for TCP (AccECN) is in progress 444 [I-D.ietf-tcpm-accurate-ecn], [PragueLinux]. 446 * ECN feedback is only roughly sketched in an appendix of the 447 SCTP specification. A fuller specification has been proposed 448 [I-D.stewart-tsvwg-sctpecn], which would need to be 449 implemented and deployed before SCTP could support L4S. 451 5. Rationale 453 5.1. Why These Primary Components? 455 Explicit congestion signalling (protocol): Explicit congestion 456 signalling is a key part of the L4S approach.
In contrast, use of 457 drop as a congestion signal creates a tension because drop is both 458 a useful signal (more would reduce delay) and an impairment (less 459 would reduce delay): 461 * Explicit congestion signals can be used many times per round 462 trip, to keep tight control, without any impairment. Under 463 heavy load, even more explicit signals can be applied so the 464 queue can be kept short whatever the load, whereas state-of- 465 the-art AQMs have to introduce very high packet drop at high 466 load to keep the queue short. Further, when using ECN, TCP's 467 sawtooth reduction can be smaller and therefore return to the 468 operating point more often, without worrying that this causes 469 more signals (one at the top of each smaller sawtooth). The 470 consequent smaller amplitude sawteeth fit between a very 471 shallow marking threshold and an empty queue, so delay 472 variation can be very low, without risk of under-utilization. 474 * Explicit congestion signals can be sent immediately to track 475 fluctuations of the queue. L4S shifts smoothing from the 476 network (which doesn't know the round trip times of all the 477 flows) to the host (which knows its own round trip time). 478 Previously, the network had to smooth to keep a worst-case 479 round trip stable, delaying congestion signals by 100-200 ms. 481 All the above makes it clear that explicit congestion signalling 482 is only advantageous for latency if it does not have to be 483 considered 'the same as' drop (as was required with Classic ECN 484 [RFC3168]). Therefore, in a DualQ AQM, the L4S queue uses a new 485 L4S variant of ECN that is not equivalent to drop 486 [I-D.ietf-tsvwg-ecn-l4s-id], while the Classic queue uses either 487 classic ECN [RFC3168] or drop, which are equivalent. 489 Before Classic ECN was standardized, there were various proposals 490 to give an ECN mark a different meaning from drop.
However, there 491 was no particular reason to agree on any one of the alternative 492 meanings, so 'the same as drop' was the only compromise that could 493 be reached. RFC 3168 contains a statement that: 495 "An environment where all end nodes were ECN-Capable could 496 allow new criteria to be developed for setting the CE 497 codepoint, and new congestion control mechanisms for end-node 498 reaction to CE packets. However, this is a research issue, and 499 as such is not addressed in this document." 501 Latency isolation with coupled congestion notification (network): 502 Using just two queues is not essential to L4S (more would be 503 possible), but it is the simplest way to isolate all the L4S 504 traffic that keeps latency low from all the legacy Classic traffic 505 that does not. 507 Similarly, coupling the congestion notification between the queues 508 is not necessarily essential, but it is a clever and simple way to 509 allow senders to determine their rate, packet-by-packet, rather 510 than be overridden by a network scheduler, because otherwise a 511 network scheduler would have to inspect at least transport layer 512 headers, and it would have to continually assign a rate to each 513 flow without any easy way to understand application intent. 515 L4S packet identifier (protocol): Once there are at least two 516 separate treatments in the network, hosts need an identifier at 517 the IP layer to distinguish which treatment they intend to use. 519 Scalable congestion notification (host): A scalable congestion 520 control keeps the signalling frequency high so that rate 521 variations can be small when signalling is stable, and rate can 522 track variations in available capacity as rapidly as possible 523 otherwise. 525 Low loss: Latency is not the only concern of L4S. The 'Low Loss' 526 part of the name denotes that L4S generally achieves zero 527 congestion loss due to its use of ECN.
Otherwise, loss would 528 itself cause delay, particularly for short flows, due to 529 retransmission delay [RFC2884]. 531 Scalable throughput: The "Scalable throughput" part of the name 532 denotes that the per-flow throughput of scalable congestion 533 controls should scale indefinitely, avoiding the imminent scaling 534 problems with TCP-Friendly congestion control algorithms 535 [RFC3649]. It was known when TCP was first developed that it 536 would not scale to high bandwidth-delay products (see footnote 6 537 in [TCP-CA]). Today, regular broadband bit-rates over WAN 538 distances are already beyond the scaling range of `classic' TCP 539 Reno. So `less unscalable' Cubic [RFC8312] and 540 Compound [I-D.sridharan-tcpm-ctcp] variants of TCP have been 541 successfully deployed. However, these are now approaching their 542 scaling limits. For instance, at 800 Mb/s with a 20 ms round trip, 543 Cubic induces a congestion signal only every 500 round trips or 10 544 seconds, which makes its dynamic control very sloppy. In contrast, 545 on average a scalable congestion control like DCTCP or TCP Prague 546 induces 2 congestion signals per round trip, which remains 547 invariant for any flow rate, keeping dynamic control very tight. 549 5.2. Why Not Alternative Approaches? 551 All the following approaches address some part of the same problem 552 space as L4S. In each case, it is shown that L4S complements them or 553 improves on them, rather than being a mutually exclusive alternative: 555 Diffserv: Diffserv addresses the problem of bandwidth apportionment 556 for important traffic as well as queuing latency for delay- 557 sensitive traffic. L4S solely addresses the problem of queuing 558 latency (as well as loss and throughput scaling). Diffserv will 559 still be necessary where important traffic requires priority (e.g. 560 for commercial reasons, or for protection of critical 561 infrastructure traffic) - see [I-D.briscoe-tsvwg-l4s-diffserv].
562 Nonetheless, if there are Diffserv classes for important traffic, 563 the L4S approach can provide low latency for _all_ traffic within 564 each Diffserv class (including the case where there is only one 565 Diffserv class). 567 Also, as already explained, Diffserv only works for a small subset 568 of the traffic on a link. It is not applicable when all the 569 applications in use at one time at a single site (home, small 570 business or mobile device) require low latency. Also, because L4S 571 is for all traffic, it needs none of the management baggage 572 (traffic policing, traffic contracts) associated with favouring 573 some packets over others. This baggage has held Diffserv back 574 from widespread end-to-end deployment. 576 State-of-the-art AQMs: AQMs such as PIE and fq_CoDel give a 577 significant reduction in queuing delay relative to no AQM at all. 578 The L4S work is intended to complement these AQMs, and we 579 definitely do not want to distract from the need to deploy them as 580 widely as possible. Nonetheless, without addressing the large 581 saw-toothing rate variations of Classic congestion controls, AQMs 582 alone cannot reduce queuing delay too far without significantly 583 reducing link utilization. The L4S approach resolves this tension 584 by ensuring hosts can minimize the size of their sawteeth without 585 appearing so aggressive to legacy flows that they starve them. 587 Per-flow queuing: Similarly per-flow queuing is not incompatible 588 with the L4S approach. However, one queue for every flow can be 589 thought of as overkill compared to the minimum of two queues for 590 all traffic needed for the L4S approach. The overkill of per-flow 591 queuing has side-effects: 593 A. fq makes high performance networking equipment costly 594 (processing and memory) - in contrast dual queue code can be 595 very simple; 597 B. 
fq requires packet inspection into the end-to-end transport 598 layer, which doesn't sit well alongside encryption for privacy 599 - in contrast the use of ECN as the classifier for L4S 600 requires no deeper inspection than the IP layer; 602 C. fq isolates the queuing of each flow from the others but not 603 from itself, so existing FQ implementations still need to have 604 support for scalable congestion control added. 606 It might seem that self-inflicted queuing delay should not 607 count, because if the delay wasn't in the network it would 608 just shift to the sender. However, modern adaptive 609 applications, e.g. HTTP/2 [RFC7540] or the interactive media 610 applications described in Section 6, can keep low latency 611 objects at the front of their local send queue by shuffling 612 priorities of other objects dependent on the progress of other 613 transfers. They cannot shuffle packets once they have 614 released them into the network. 616 D. fq prevents any one flow from consuming more than 1/N of the 617 capacity at any instant, where N is the number of flows. This 618 is fine if all flows are elastic, but it does not sit well 619 with a variable bit rate real-time multimedia flow, which 620 requires wriggle room to sometimes take more and other times 621 less than a 1/N share. 623 It might seem that an fq scheduler offers the benefit that it 624 prevents individual flows from hogging all the bandwidth. 625 However, L4S has been deliberately designed so that policing 626 of individual flows can be added as a policy choice, rather 627 than requiring one specific policy choice as the mechanism 628 itself. A scheduler (like fq) has to decide packet-by-packet 629 which flow to schedule without knowing application intent. 630 In contrast, a separate policing function can be configured less 631 strictly, so that senders can still control the instantaneous 632 rate of each flow dependent on the needs of each application 633 (e.g.
variable rate video), giving more wriggle-room before a 634 flow is deemed non-compliant. Also policing of queuing and of 635 flow-rates can be applied independently. 637 Alternative Back-off ECN (ABE): Yet again, L4S is not an alternative 638 to ABE but a complement that introduces much lower queuing delay. 639 ABE [RFC8511] alters the host behaviour in response to ECN marking 640 to utilize a link better and give ECN flows faster throughput. It 641 uses ECT(0) and assumes the network still treats ECN and drop the 642 same. Therefore ABE exploits any lower queuing delay that AQMs 643 can provide. But as explained above, AQMs still cannot reduce 644 queuing delay too far without losing link utilization (to allow 645 for other, non-ABE, flows). 647 BBRv1: v1 of Bottleneck Bandwidth and Round-trip propagation time 648 (BBR [I-D.cardwell-iccrg-bbr-congestion-control]) controls queuing 649 delay end-to-end without needing any special logic in the network, 650 such as an AQM - so it works pretty much on any path. Setting 651 aside some problems with capacity sharing, queuing delay is good 652 with BBRv1, but perhaps not quite as low as with state-of-the-art 653 AQMs such as PIE or fq_CoDel, and certainly nowhere near as low as 654 with L4S. Queuing delay is also not consistently low, due to its 655 regular bandwidth probes and the aggressive flow start-up phase. 657 L4S is a complement to BBRv1. Indeed, BBRv2 (not released at the 658 time of writing) is likely to use L4S ECN and a TCP-Prague-like 659 behaviour if it discovers a compatible path. Otherwise, it will 660 use an evolution of BBRv1. 662 6. Applicability 664 6.1. Applications 666 A transport layer that solves the current latency issues will provide 667 new service, product and application opportunities.
669 With the L4S approach, the following existing applications will 670 immediately experience significantly better quality of experience 671 under load: 673 o Gaming; 675 o VoIP; 677 o Video conferencing; 679 o Web browsing; 681 o (Adaptive) video streaming; 683 o Instant messaging. 685 The significantly lower queuing latency also enables some interactive 686 application functions to be offloaded to the cloud that would hardly 687 even be usable today: 689 o Cloud based interactive video; 691 o Cloud based virtual and augmented reality. 693 The above two applications have been successfully demonstrated with 694 L4S, both running together over a 40 Mb/s broadband access link 695 loaded up with the numerous other latency sensitive applications in 696 the previous list as well as numerous downloads - all sharing the 697 same bottleneck queue simultaneously [L4Sdemo16]. For the former, a 698 panoramic video of a football stadium could be swiped and pinched so 699 that, on the fly, a proxy in the cloud could generate a sub-window of 700 the match video under the finger-gesture control of each user. For 701 the latter, a virtual reality headset displayed a viewport taken from 702 a 360 degree camera in a racing car. The user's head movements 703 controlled the viewport extracted by a cloud-based proxy. In both 704 cases, with 7 ms end-to-end base delay, the additional queuing delay 705 of roughly 1 ms was so low that it seemed the video was generated 706 locally. 708 Using a swiping finger gesture or head movement to pan a video is an 709 extremely latency-demanding action--far more demanding than VoIP, 710 because human vision can detect extremely low delays of the order of 711 single milliseconds when delay is translated into a visual lag 712 between a video and a reference point (the finger or the orientation 713 of the head sensed by the balance system in the inner ear --- the 714 vestibular system).
716 Without the low queuing delay of L4S, cloud-based applications like 717 these would not be credible without significantly more access 718 bandwidth (to deliver all possible video that might be viewed) and 719 more local processing, which would increase the weight and power 720 consumption of head-mounted displays. When all interactive 721 processing can be done in the cloud, only the data to be rendered for 722 the end user needs to be sent. 724 Other low latency high bandwidth applications such as: 726 o Interactive remote presence; 728 o Video-assisted remote control of machinery or industrial 729 processes. 731 are not credible at all without very low queuing delay. No amount of 732 extra access bandwidth or local processing can make up for lost time. 734 6.2. Use Cases 736 The following use-cases for L4S are being considered by various 737 interested parties: 739 o Where the bottleneck is one of various types of access network: 740 DSL, cable, mobile, satellite 742 * Radio links (cellular, WiFi, satellite) that are distant from 743 the source are particularly challenging. The radio link 744 capacity can vary rapidly by orders of magnitude, so it is 745 often desirable to hold a buffer to utilise sudden increases of 746 capacity; 748 * cellular networks are further complicated by a perceived need 749 to buffer in order to make hand-overs imperceptible; 751 * Satellite networks generally have a very large base RTT, so 752 even with minimal queuing, overall delay can never be extremely 753 low; 755 * Nonetheless, it is certainly desirable not to hold a buffer 756 purely because of the sawteeth of Classic TCP, when it is more 757 than is needed for all the above reasons. 
759 o Private networks of heterogeneous data centres, where there is no 760 single administrator that can arrange for all the simultaneous 761 changes to senders, receivers and network needed to deploy DCTCP: 763 * a set of private data centres interconnected over a wide area 764 with separate administrations, but within the same company 766 * a set of data centres operated by separate companies 767 interconnected by a community of interest network (e.g. for the 768 finance sector) 770 * multi-tenant (cloud) data centres where tenants choose their 771 operating system stack (Infrastructure as a Service - IaaS) 773 o Different types of transport (or application) congestion control: 775 * elastic (TCP/SCTP); 777 * real-time (RTP, RMCAT); 779 * query (DNS/LDAP). 781 o Where low delay quality of service is required, but without 782 inspecting or intervening above the IP layer 783 [I-D.smith-encrypted-traffic-management]: 785 * mobile and other networks have tended to inspect higher layers 786 in order to guess application QoS requirements. However, with 787 growing demand for support of privacy and encryption, L4S 788 offers an alternative. There is no need to select which 789 traffic to favour for queuing, when L4S gives favourable 790 queuing to all traffic. 792 o If queuing delay is minimized, applications with a fixed delay 793 budget can communicate over longer distances, or via a longer 794 chain of service functions [RFC7665] or onion routers. 796 6.3. Deployment Considerations 798 The DualQ is, in itself, an incremental deployment framework for L4S 799 AQMs so that L4S traffic can coexist with existing Classic "TCP- 800 friendly" traffic. Section 6.3.1 explains why only deploying a DualQ 801 AQM [I-D.ietf-tsvwg-aqm-dualq-coupled] in one node at each end of the 802 access link will realize nearly all the benefit of L4S. 
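A minimal sketch can illustrate the coupling at the heart of such a DualQ AQM, assuming the square-law relation described in [I-D.ietf-tsvwg-aqm-dualq-coupled] (the base probability p' is taken as given here, e.g. from a PI controller measuring the Classic queue, and the coupling factor k is a configuration parameter):

```python
# Illustrative sketch of DualQ coupled congestion signalling (not a
# complete AQM). p_base is assumed to come from a base controller
# such as PI2; k is the coupling factor between the two queues.

def coupled_probabilities(p_base, k=2.0):
    """Return (classic_drop_prob, l4s_mark_prob) for a base
    probability p_base in [0, 1]."""
    p_classic = p_base ** 2        # Classic queue: squared, applied as drop
    p_l4s = min(k * p_base, 1.0)   # L4S queue: linear, applied as ECN marks
    return p_classic, p_l4s
```

For example, with p_base = 0.1 and k = 2, Classic traffic sees 1% drop while L4S traffic sees 20% ECN marking; the far higher signalling frequency in the L4S queue is what allows scalable senders to keep their sawteeth small without losing utilization.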
804 L4S involves both end systems and the network, so Section 6.3.2 805 suggests some typical sequences to deploy each part, and why there 806 will be an immediate and significant benefit after deploying just one 807 part. 809 If an ECN-enabled DualQ AQM has not been deployed at a bottleneck, an 810 L4S flow is required to include a fall-back strategy to Classic 811 behaviour. Section 6.3.3 describes how an L4S flow detects this, and 812 how to minimize the effect of false negative detection. 814 6.3.1. Deployment Topology 816 DualQ AQMs will not have to be deployed throughout the Internet 817 before L4S will work for anyone. Operators of public Internet access 818 networks typically design their networks so that the bottleneck will 819 nearly always occur at one known (logical) link. This confines the 820 cost of queue management technology to one place. 822 The case of mesh networks is different and will be discussed later. 823 But the known bottleneck case is generally true for Internet access 824 to all sorts of different 'sites', where the word 'site' includes 825 home networks, small-to-medium sized campus or enterprise networks 826 and even cellular devices (Figure 2). Also, this known-bottleneck 827 case tends to be applicable whatever the access link technology; 828 whether xDSL, cable, cellular, line-of-sight wireless or satellite. 830 Therefore, the full benefit of the L4S service should be available in 831 the downstream direction when the DualQ AQM is deployed at the 832 ingress to this bottleneck link (or links for multihomed sites). And 833 similarly, the full upstream service will be available once the DualQ 834 is deployed at the upstream ingress. 836 ______ 837 ( ) 838 __ __ ( ) 839 |DQ\________/DQ|( enterprise ) 840 ___ |__/ \__| ( /campus ) 841 ( ) (______) 842 ( ) ___||_ 843 +----+ ( ) __ __ / \ 844 | DC |-----( Core )|DQ\_______________/DQ|| home | 845 +----+ ( ) |__/ \__||______| 846 (_____) __ 847 |DQ\__/\ __ ,===. 
848 |__/ \ ____/DQ||| ||mobile 849 \/ \__|||_||device 850 | o | 851 `---' 853 Figure 2: Likely location of DualQ (DQ) Deployments in common access 854 topologies 856 Deployment in mesh topologies depends on how over-booked the core is. 857 If the core is non-blocking, or at least generously provisioned so 858 that the edges are nearly always the bottlenecks, it would only be 859 necessary to deploy the DualQ AQM at the edge bottlenecks. For 860 example, some data-centre networks are designed with the bottleneck 861 in the hypervisor or host NICs, while others bottleneck at the top- 862 of-rack switch (both the output ports facing hosts and those facing 863 the core). 865 The DualQ would eventually also need to be deployed at any other 866 persistent bottlenecks such as network interconnections, e.g. some 867 public Internet exchange points and the ingress and egress to WAN 868 links interconnecting data-centres. 870 6.3.2. Deployment Sequences 872 For any one L4S flow to work, it requires 3 parts to have been 873 deployed. This was the same deployment problem that ECN faced 874 [RFC8170] so we have learned from this. 876 Firstly, L4S deployment exploits the fact that DCTCP already exists 877 on many Internet hosts (Windows, FreeBSD and Linux); both servers and 878 clients. Therefore, just deploying DualQ AQM at a network bottleneck 879 immediately gives a working deployment of all the L4S parts. DCTCP 880 needs some safety concerns to be fixed for general use over the 881 public Internet (see Section 2.3 of [I-D.ietf-tsvwg-ecn-l4s-id]), but 882 DCTCP is not on by default, so these issues can be managed within 883 controlled deployments or controlled trials. 885 Secondly, the performance improvement with L4S is so significant that 886 it enables new interactive services and products that were not 887 previously possible. It is much easier for companies to initiate new 888 work on deployment if there is budget for a new product trial. 
If, 889 in contrast, there were only an incremental performance improvement 890 (as with Classic ECN), spending on deployment tends to be much harder 891 to justify. 893 Thirdly, the L4S identifier is defined so that initially network 894 operators can enable L4S exclusively for certain customers or certain 895 applications. But this is carefully defined so that it does not 896 compromise future evolution towards L4S as an Internet-wide service. 897 This is because the L4S identifier is defined not only as the end-to- 898 end ECN field, but it can also optionally be combined with any other 899 packet header or some status of a customer or their access link 900 [I-D.ietf-tsvwg-ecn-l4s-id]. Operators could do this anyway, even if 901 it were not blessed by the IETF. However, it is best for the IETF to 902 specify that they must use their own local identifier in combination 903 with the IETF's identifier. Then, if an operator enables the 904 optional local-use approach, they only have to remove this extra rule 905 to make the service work Internet-wide - it will already traverse 906 middleboxes, peerings, etc. 
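To make the preceding paragraph concrete, the following sketch shows a classifier that combines the IETF's ECN-based L4S identifier with an optional operator-local rule. It is purely illustrative: the ECN codepoint values are those of RFC 3168, while the premium-prefix check stands in for whatever local identifier an operator might choose.

```python
# Illustrative L4S/Classic classifier. ECN codepoints per RFC 3168;
# the premium-prefix rule is a hypothetical operator-local match.

ECT1, CE = 0b01, 0b11    # codepoints that identify L4S packets

def select_queue(ecn, src_addr, premium_prefixes=None):
    """Return 'l4s' or 'classic' for one packet."""
    if ecn not in (ECT1, CE):
        return 'classic'
    if premium_prefixes is not None:
        # Optional local-use restriction: only selected customers get
        # the L4S queue; everyone else falls through to Classic.
        if not any(src_addr.startswith(p) for p in premium_prefixes):
            return 'classic'
    return 'l4s'
```

Removing the extra local rule (leaving premium_prefixes unset) is then all that is needed to widen the service from selected customers to Internet-wide use, as described above.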
908 +-+--------------------+----------------------+---------------------+ 909 | | Servers or proxies | Access link | Clients | 910 +-+--------------------+----------------------+---------------------+ 911 |1| DCTCP (existing) | | DCTCP (existing) | 912 | | | DualQ AQM downstream | | 913 | | WORKS DOWNSTREAM FOR CONTROLLED DEPLOYMENTS/TRIALS | 914 +-+--------------------+----------------------+---------------------+ 915 |2| TCP Prague | | AccECN (already in | 916 | | | | progress:DCTCP/BBR) | 917 | | FULLY WORKS DOWNSTREAM | 918 +-+--------------------+----------------------+---------------------+ 919 |3| | DualQ AQM upstream | TCP Prague | 920 | | | | | 921 | | FULLY WORKS UPSTREAM AND DOWNSTREAM | 922 +-+--------------------+----------------------+---------------------+ 924 Figure 3: Example L4S Deployment Sequences 926 Figure 3 illustrates some example sequences in which the parts of L4S 927 might be deployed. It consists of the following stages: 929 1. Here, the immediate benefit of a single AQM deployment can be 930 seen, but limited to a controlled trial or controlled deployment. 931 In this example downstream deployment is first, but in other 932 scenarios the upstream might be deployed first. If no AQM at all 933 was previously deployed for the downstream access, the DualQ AQM 934 greatly improves the Classic service (as well as adding the L4S 935 service). If an AQM was already deployed, the Classic service 936 will be unchanged (and L4S will still be added). 938 2. In this stage, the name 'TCP Prague' is used to represent a 939 variant of DCTCP that is safe to use in a production environment. 940 If the application is primarily unidirectional, 'TCP Prague' at 941 one end will provide all the benefit needed. Accurate ECN 942 feedback (AccECN) [I-D.ietf-tcpm-accurate-ecn] is needed at the 943 other end, but it is a generic ECN feedback facility that is 944 already planned to be deployed for other purposes, e.g. 
DCTCP, 945 BBR [I-D.cardwell-iccrg-bbr-congestion-control]. The two ends 946 can be deployed in either order, because TCP Prague only enables 947 itself if it has negotiated the use of AccECN feedback with the 948 other end during the connection handshake. Thus, deployment of 949 TCP Prague on a server enables L4S trials to move to a production 950 service in one direction, wherever AccECN is deployed at the 951 other end. This stage might be further motivated by the 952 performance improvements of TCP Prague relative to DCTCP (see 953 Appendix A.2 of [I-D.ietf-tsvwg-ecn-l4s-id]). 955 3. This is a two-move stage to enable L4S upstream. The DualQ or 956 TCP Prague can be deployed in either order as already explained. 957 To motivate the first of two independent moves, the deferred 958 benefit of enabling new services after the second move has to be 959 worth it to cover the first mover's investment risk. As 960 explained already, the potential for new interactive services 961 provides this motivation. The DualQ AQM also greatly improves 962 the upstream Classic service, assuming no other AQM has already 963 been deployed. 965 Note that other deployment sequences might occur. For instance: the 966 upstream might be deployed first; a non-TCP protocol might be used 967 end-to-end, e.g. QUIC, RMCAT; a body such as the 3GPP might require 968 L4S to be implemented in 5G user equipment, or other random acts of 969 kindness. 971 6.3.3. L4S Flow but Non-L4S Bottleneck 973 If L4S is enabled between two hosts but there is no L4S AQM at the 974 bottleneck, any drop from the bottleneck will trigger the L4S sender 975 to fall back to a classic ('TCP-Friendly') behaviour (see 976 Appendix A.1.3 of [I-D.ietf-tsvwg-ecn-l4s-id]). 978 Unfortunately, as well as protecting legacy traffic, this rule 979 degrades the L4S service whenever there is a loss, even if the loss 980 was not from a non-DualQ bottleneck (false negative). 
And 981 unfortunately, prevalent drop can be due to other causes, e.g.: 983 o congestion loss at other transient bottlenecks, e.g. due to bursts 984 in shallower queues; 986 o transmission errors, e.g. due to electrical interference; 988 o rate policing. 990 Three complementary approaches are in progress to address this issue, 991 but they are all currently research: 993 o In TCP Prague, ignore certain losses deemed unlikely to be due to 994 congestion (using some ideas from BBR 995 [I-D.cardwell-iccrg-bbr-congestion-control] but with no need to 996 ignore nearly all losses). This could mask any of the above types 997 of loss (requires consensus on how to safely interoperate with 998 drop-based congestion controls). 1000 o A combination of RACK, reconfigured link retransmission and L4S 1001 could address transmission errors [UnorderedLTE], 1002 [I-D.ietf-tsvwg-ecn-l4s-id]; 1004 o Hybrid ECN/drop policers (see Section 8.3). 1006 L4S deployment scenarios that minimize these issues (e.g. over 1007 wireline networks) can proceed in parallel to this research, in the 1008 expectation that research success could continually widen L4S 1009 applicability. 1011 Classic ECN support is starting to materialize on the Internet as an 1012 increased level of CE marking. Given some of this Classic ECN might 1013 be due to single-queue ECN deployment, an L4S sender will have to 1014 fall back to a classic ('TCP-Friendly') behaviour if it detects that 1015 ECN marking is accompanied by greater queuing delay or greater delay 1016 variation than would be expected with L4S (see Appendix A.1.4 of 1017 [I-D.ietf-tsvwg-ecn-l4s-id]). It is hard to detect whether all this 1018 CE marking is due to the addition of support for ECN in the Linux 1019 implementation of FQ-CoDel; if it were, fall-back to Classic 1020 behaviour would not be required, because FQ inherently forces the 1021 throughput of each flow to be equal irrespective of its 1022 aggressiveness. 1023 6.3.4.
Other Potential Deployment Issues 1025 An L4S AQM uses the ECN field to signal congestion. So, in common 1026 with Classic ECN, if the AQM is within a tunnel or at a lower layer, 1027 correct functioning of ECN signalling requires correct propagation of 1028 the ECN field up the layers [RFC6040], 1029 [I-D.ietf-tsvwg-ecn-encap-guidelines]. 1031 7. IANA Considerations 1033 This specification contains no IANA considerations. 1035 8. Security Considerations 1037 8.1. Traffic (Non-)Policing 1039 Because the L4S service can serve all traffic that is using the 1040 capacity of a link, it should not be necessary to police access to 1041 the L4S service. In contrast, Diffserv only works if some packets 1042 get less favourable treatment than others. So Diffserv has to use 1043 traffic policers to limit how much traffic can be favoured. In turn, 1044 traffic policers require traffic contracts between users and networks 1045 as well as pairwise between networks. Because L4S will lack all this 1046 management complexity, it is more likely to work end-to-end. 1048 During early deployment (and perhaps always), some networks will not 1049 offer the L4S service. These networks do not need to police or re- 1050 mark L4S traffic - they just forward it unchanged as best efforts 1051 traffic, as they already forward traffic with ECT(1) today. At a 1052 bottleneck, such networks will introduce some queuing and dropping. 1053 When a scalable congestion control detects a drop it will have to 1054 respond as if it is a Classic congestion control (as required in 1055 Section 2.3 of [I-D.ietf-tsvwg-ecn-l4s-id]). This will ensure safe 1056 interworking with other traffic at the 'legacy' bottleneck, but it 1057 will degrade the L4S service to no better (but never worse) than 1058 classic best efforts, whenever a legacy (non-L4S) bottleneck is 1059 encountered on a path. 
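The fall-back described above can be sketched as follows. This is an illustrative, non-normative combination of a DCTCP-style response to ECN marks with a Reno-style response to loss; the gain constant and the window arithmetic are assumptions for the sketch, not part of this architecture.

```python
# Illustrative per-RTT congestion response of a scalable sender that
# falls back to Classic behaviour on loss. Constants are assumptions
# (DCTCP happens to use an EWMA gain of 1/16).

G = 1 / 16  # gain of the moving average of the CE-marked fraction

def update_alpha(alpha, marked, acked):
    """Update the estimate of the marked fraction once per RTT."""
    frac = marked / acked if acked else 0.0
    return (1 - G) * alpha + G * frac

def react(cwnd, alpha, loss_detected):
    """Return the new congestion window for this RTT."""
    if loss_detected:
        # Classic fall-back: multiplicative decrease, safe at any
        # legacy (non-L4S) bottleneck that signals only by drop.
        return cwnd / 2
    if alpha > 0:
        # Scalable response: shrink in proportion to the marked
        # fraction, keeping rate variations small.
        return cwnd * (1 - alpha / 2)
    return cwnd + 1  # additive increase: one segment per RTT
```

On a path whose bottleneck is an L4S AQM, the loss branch lies dormant and only the fine-grained marking response is exercised; at a legacy bottleneck the same sender degrades to Classic behaviour, which is the interworking property required above.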
1061 Certain network operators might choose to restrict access to the L4S 1062 class, perhaps only to selected premium customers as a value-added 1063 service. Their packet classifier (item 2 in Figure 1) could identify 1064 such customers against some other field (e.g. source address range) 1065 as well as ECN. If only the ECN L4S identifier matched, but not the 1066 source address (say), the classifier could direct these packets (from 1067 non-premium customers) into the Classic queue. Clearly explaining 1068 how operators can use additional local classifiers (see 1069 [I-D.ietf-tsvwg-ecn-l4s-id]) is intended to remove any tendency to 1070 bleach the L4S identifier. Then at least the L4S ECN identifier will 1071 be more likely to survive end-to-end even though the service may not 1072 be supported at every hop. Such arrangements would only require 1073 simple registered/not-registered packet classification, rather than 1074 the managed, application-specific traffic policing against customer- 1075 specific traffic contracts that Diffserv uses. 1077 8.2. 'Latency Friendliness' 1079 The L4S service does rely on self-constraint - not in terms of 1080 limiting rate, but in terms of limiting latency (burstiness). It is 1081 hoped that self-interest and standardisation of dynamic behaviour 1082 (cf. TCP slow-start) will be sufficient to prevent transports from 1083 sending excessive bursts of L4S traffic, given the application's own 1084 latency will suffer most from such behaviour. 1086 Whether burst policing becomes necessary remains to be seen. Without 1087 it, there will be potential for attacks on the low latency of the L4S 1088 service. However, it may only be necessary to apply such policing 1089 reactively, e.g. punitively targeted at any deployments of new bursty 1090 malware.
1092 A per-flow (5-tuple) queue protection function 1093 [I-D.briscoe-docsis-q-protection] has been developed for the low 1094 latency queue in DOCSIS, which has adopted the DualQ L4S 1095 architecture. It protects the low latency service from any queue- 1096 building flows that accidentally or maliciously classify themselves 1097 into the low latency queue. It is designed to score flows based 1098 solely on their contribution to queuing (not flow rate in itself). 1099 Then, if the shared low latency queue is at risk of exceeding a 1100 threshold, the function redirects enough packets of the highest 1101 scoring flow(s) into the Classic queue to preserve low latency. 1103 Such a queue protection function is not considered a necessary part 1104 of the L4S architecture, which works without it (in a similar way to 1105 how the Internet works without per-flow rate policing). Indeed, 1106 under normal circumstances, DOCSIS queue protection does not 1107 intervene, and if operators find it is not necessary, they can 1108 disable it. Part of the L4S experiment will be to see whether such a 1109 function is necessary. 1111 8.3. Interaction between Rate Policing and L4S 1113 As mentioned in Section 5.2, L4S should remove the need for low 1114 latency Diffserv classes. However, those Diffserv classes that give 1115 certain applications or users priority over capacity would still be 1116 applicable in certain scenarios (e.g. corporate networks). Then, 1117 within such Diffserv classes, L4S would often be applicable to give 1118 traffic low latency and low loss as well. Within such a Diffserv 1119 class, the bandwidth available to a user or application is often 1120 limited by a rate policer. Similarly, in the default Diffserv class, 1121 rate policers are used to partition shared capacity.
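The kind of L4S-friendly redesign discussed in this subsection, introducing an ECN-marking threshold below the point where a policer starts to drop, can be sketched as follows (the token-bucket structure and all rates and thresholds are illustrative assumptions, not a specified mechanism):

```python
# Illustrative token-bucket rate policer with an added ECN-marking
# threshold for L4S (ECT(1)) traffic: a sketch of a research idea,
# not a specified mechanism.

class L4SFriendlyPolicer:
    def __init__(self, rate_bps, burst_bytes, mark_fraction=0.9):
        self.rate = rate_bps / 8.0     # token fill rate, bytes/s
        self.burst = burst_bytes       # bucket depth, bytes
        self.bucket = burst_bytes      # current token level, bytes
        # Start CE-marking L4S packets once the remaining burst
        # allowance falls below this level, before drop begins.
        self.mark_below = burst_bytes * (1 - mark_fraction)
        self.last = 0.0                # arrival time of previous packet

    def police(self, now, size, ect1):
        """Return 'forward', 'mark' or 'drop' for one packet."""
        self.bucket = min(self.burst,
                          self.bucket + (now - self.last) * self.rate)
        self.last = now
        if self.bucket < size:
            return 'drop'              # out of tokens: classic drop
        self.bucket -= size
        if ect1 and self.bucket < self.mark_below:
            return 'mark'              # early ECN signal for L4S
        return 'forward'
```

An L4S sender then backs off in response to the marks well before its flow reaches the drop point, so it rarely suffers the loss that would otherwise force it back to Classic behaviour.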
1123 A classic rate policer drops any packets exceeding a set rate, 1124 usually also giving a burst allowance (variants exist where the 1125 policer re-marks non-compliant traffic to a discard-eligible Diffserv 1126 codepoint, so that it may be dropped elsewhere during contention). 1127 Whenever L4S traffic encounters one of these rate policers, it will 1128 experience drops and the source has to fall back to a Classic 1129 congestion control, thus losing the benefits of L4S. So, in networks 1130 that already use rate policers and plan to deploy L4S, it will be 1131 preferable to redesign these rate policers to be more friendly to the 1132 L4S service. 1134 This is currently a research area. It might be achieved by setting a 1135 threshold where ECN marking is introduced, such that it is just under 1136 the policed rate or just under the burst allowance where drop is 1137 introduced. This could be applied to various types of policer, e.g. 1138 [RFC2697], [RFC2698] or the 'local' (non-ConEx) variant of the ConEx 1139 congestion policer [I-D.briscoe-conex-policing]. It might also be 1140 possible to design scalable congestion controls to respond less 1141 catastrophically to loss that has not been preceded by a period of 1142 increasing delay. 1144 The design of L4S-friendly rate policers will require a separate 1145 dedicated document. For further discussion of the interaction 1146 between L4S and Diffserv, see [I-D.briscoe-tsvwg-l4s-diffserv]. 1148 8.4. ECN Integrity 1150 Receiving hosts can fool a sender into downloading faster by 1151 suppressing feedback of ECN marks (or of losses if retransmissions 1152 are not necessary or available otherwise). Various ways to protect 1153 TCP feedback integrity have been developed. For instance: 1155 o The sender can test the integrity of the receiver's feedback by 1156 occasionally setting the IP-ECN field to the congestion 1157 experienced (CE) codepoint, which is normally only set by a 1158 congested link.
Then the sender can test whether the receiver's 1159 feedback faithfully reports what it expects (see 2nd para of 1160 Section 20.2 of [RFC3168]). 1162 o A network can enforce a congestion response to its ECN markings 1163 (or packet losses) by auditing congestion exposure (ConEx) 1164 [RFC7713]. 1166 o The TCP authentication option (TCP-AO [RFC5925]) can be used to 1167 detect tampering with TCP congestion feedback. 1169 o The ECN Nonce [RFC3540] was proposed to detect tampering with 1170 congestion feedback, but it has been reclassified as historic 1171 [RFC8311]. 1173 Appendix C.1 of [I-D.ietf-tsvwg-ecn-l4s-id] gives more details of 1174 these techniques including their applicability and pros and cons. 1176 9. Acknowledgements 1178 Thanks to Richard Scheffenegger, Wes Eddy, Karen Nielsen and David 1179 Black for their useful review comments. 1181 Bob Briscoe and Koen De Schepper were part-funded by the European 1182 Community under its Seventh Framework Programme through the Reducing 1183 Internet Transport Latency (RITE) project (ICT-317700). Bob Briscoe 1184 was also part-funded by the Research Council of Norway through the 1185 TimeIn project. The views expressed here are solely those of the 1186 authors. 1188 10. References 1190 10.1. Normative References 1192 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1193 Requirement Levels", BCP 14, RFC 2119, 1194 DOI 10.17487/RFC2119, March 1997, 1195 . 1197 10.2. Informative References 1199 [DCttH15] De Schepper, K., Bondarenko, O., Briscoe, B., and I. 1200 Tsang, "`Data Centre to the Home': Ultra-Low Latency for 1201 All", RITE project Technical Report, 2015, 1202 . 1204 [Hohlfeld14] 1205 Hohlfeld, O., Pujol, E., Ciucu, F., Feldmann, A., and P. 1206 Barford, "A QoE Perspective on Sizing Network Buffers", 1207 Proc. ACM Internet Measurement Conf (IMC'14), November 1208 2014.
1210 [I-D.briscoe-conex-policing] 1211 Briscoe, B., "Network Performance Isolation using 1212 Congestion Policing", draft-briscoe-conex-policing-01 1213 (work in progress), February 2014. 1215 [I-D.briscoe-docsis-q-protection] 1216 Briscoe, B. and G. White, "Queue Protection to Preserve 1217 Low Latency", draft-briscoe-docsis-q-protection-00 (work 1218 in progress), July 2019. 1220 [I-D.briscoe-tsvwg-l4s-diffserv] 1221 Briscoe, B., "Interactions between Low Latency, Low Loss, 1222 Scalable Throughput (L4S) and Differentiated Services", 1223 draft-briscoe-tsvwg-l4s-diffserv-02 (work in progress), 1224 November 2018. 1226 [I-D.cardwell-iccrg-bbr-congestion-control] 1227 Cardwell, N., Cheng, Y., Yeganeh, S., and V. Jacobson, 1228 "BBR Congestion Control", draft-cardwell-iccrg-bbr- 1229 congestion-control-00 (work in progress), July 2017. 1231 [I-D.ietf-quic-transport] 1232 Iyengar, J. and M. Thomson, "QUIC: A UDP-Based Multiplexed 1233 and Secure Transport", draft-ietf-quic-transport-20 (work 1234 in progress), April 2019. 1236 [I-D.ietf-tcpm-accurate-ecn] 1237 Briscoe, B., Kuehlewind, M., and R. Scheffenegger, "More 1238 Accurate ECN Feedback in TCP", draft-ietf-tcpm-accurate- 1239 ecn-08 (work in progress), March 2019. 1241 [I-D.ietf-tcpm-generalized-ecn] 1242 Bagnulo, M. and B. Briscoe, "ECN++: Adding Explicit 1243 Congestion Notification (ECN) to TCP Control Packets", 1244 draft-ietf-tcpm-generalized-ecn-03 (work in progress), 1245 October 2018. 1247 [I-D.ietf-tsvwg-aqm-dualq-coupled] 1248 Schepper, K., Briscoe, B., and G. White, "DualQ Coupled 1249 AQMs for Low Latency, Low Loss and Scalable Throughput 1250 (L4S)", draft-ietf-tsvwg-aqm-dualq-coupled-09 (work in 1251 progress), July 2019. 1253 [I-D.ietf-tsvwg-ecn-encap-guidelines] 1254 Briscoe, B., Kaippallimalil, J., and P. Thaler, 1255 "Guidelines for Adding Congestion Notification to 1256 Protocols that Encapsulate IP", draft-ietf-tsvwg-ecn- 1257 encap-guidelines-13 (work in progress), May 2019. 
   [I-D.ietf-tsvwg-ecn-l4s-id]
              De Schepper, K. and B. Briscoe, "Identifying Modified
              Explicit Congestion Notification (ECN) Semantics for
              Ultra-Low Queuing Delay (L4S)", draft-ietf-tsvwg-ecn-l4s-
              id-06 (work in progress), March 2019.

   [I-D.smith-encrypted-traffic-management]
              Smith, K., "Network management of encrypted traffic",
              draft-smith-encrypted-traffic-management-05 (work in
              progress), May 2016.

   [I-D.sridharan-tcpm-ctcp]
              Sridharan, M., Tan, K., Bansal, D., and D. Thaler,
              "Compound TCP: A New TCP Congestion Control for High-Speed
              and Long Distance Networks", draft-sridharan-tcpm-ctcp-02
              (work in progress), November 2008.

   [I-D.stewart-tsvwg-sctpecn]
              Stewart, R., Tuexen, M., and X. Dong, "ECN for Stream
              Control Transmission Protocol (SCTP)", draft-stewart-
              tsvwg-sctpecn-05 (work in progress), January 2014.

   [I-D.white-tsvwg-nqb]
              White, G. and T. Fossati, "Identifying and Handling Non
              Queue Building Flows in a Bottleneck Link", draft-white-
              tsvwg-nqb-02 (work in progress), June 2019.

   [L4Sdemo16]
              Bondarenko, O., De Schepper, K., Tsang, I., and B.
              Briscoe, "Ultra-Low Delay for All: Live Experience, Live
              Analysis", Proc. MMSYS'16 pp33:1--33:4, May 2016.

   [Mathis09]
              Mathis, M., "Relentless Congestion Control", PFLDNeT'09,
              May 2009.

   [NewCC_Proc]
              Eggert, L., "Experimental Specification of New Congestion
              Control Algorithms", IETF Operational Note ion-tsv-alt-cc,
              July 2007.

   [PI2]      De Schepper, K., Bondarenko, O., Tsang, I., and B.
              Briscoe, "PI^2: A Linearized AQM for both Classic and
              Scalable TCP", Proc. ACM CoNEXT 2016 pp.105-119, December
              2016.

   [PragueLinux]
              Briscoe, B., De Schepper, K., Albisser, O., Misund, J.,
              Tilmans, O., Kuehlewind, M., and A.
              Ahmed, "Implementing the `TCP Prague' Requirements for
              Low Latency Low Loss Scalable Throughput (L4S)", Proc.
              Linux Netdev 0x13, March 2019.

   [RFC2697]  Heinanen, J. and R. Guerin, "A Single Rate Three Color
              Marker", RFC 2697, DOI 10.17487/RFC2697, September 1999.

   [RFC2698]  Heinanen, J. and R. Guerin, "A Two Rate Three Color
              Marker", RFC 2698, DOI 10.17487/RFC2698, September 1999.

   [RFC2884]  Hadi Salim, J. and U. Ahmed, "Performance Evaluation of
              Explicit Congestion Notification (ECN) in IP Networks",
              RFC 2884, DOI 10.17487/RFC2884, July 2000.

   [RFC3168]  Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
              of Explicit Congestion Notification (ECN) to IP",
              RFC 3168, DOI 10.17487/RFC3168, September 2001.

   [RFC3246]  Davie, B., Charny, A., Bennet, J., Benson, K., Le Boudec,
              J., Courtney, W., Davari, S., Firoiu, V., and D.
              Stiliadis, "An Expedited Forwarding PHB (Per-Hop
              Behavior)", RFC 3246, DOI 10.17487/RFC3246, March 2002.

   [RFC3540]  Spring, N., Wetherall, D., and D. Ely, "Robust Explicit
              Congestion Notification (ECN) Signaling with Nonces",
              RFC 3540, DOI 10.17487/RFC3540, June 2003.

   [RFC3649]  Floyd, S., "HighSpeed TCP for Large Congestion Windows",
              RFC 3649, DOI 10.17487/RFC3649, December 2003.

   [RFC4340]  Kohler, E., Handley, M., and S. Floyd, "Datagram
              Congestion Control Protocol (DCCP)", RFC 4340,
              DOI 10.17487/RFC4340, March 2006.

   [RFC4774]  Floyd, S., "Specifying Alternate Semantics for the
              Explicit Congestion Notification (ECN) Field", BCP 124,
              RFC 4774, DOI 10.17487/RFC4774, November 2006.

   [RFC4960]  Stewart, R., Ed., "Stream Control Transmission Protocol",
              RFC 4960, DOI 10.17487/RFC4960, September 2007.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, DOI 10.17487/RFC5681, September 2009.
   [RFC5925]  Touch, J., Mankin, A., and R. Bonica, "The TCP
              Authentication Option", RFC 5925, DOI 10.17487/RFC5925,
              June 2010.

   [RFC6040]  Briscoe, B., "Tunnelling of Explicit Congestion
              Notification", RFC 6040, DOI 10.17487/RFC6040, November
              2010.

   [RFC6679]  Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P.,
              and K. Carlberg, "Explicit Congestion Notification (ECN)
              for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August
              2012.

   [RFC7540]  Belshe, M., Peon, R., and M. Thomson, Ed., "Hypertext
              Transfer Protocol Version 2 (HTTP/2)", RFC 7540,
              DOI 10.17487/RFC7540, May 2015.

   [RFC7560]  Kuehlewind, M., Ed., Scheffenegger, R., and B. Briscoe,
              "Problem Statement and Requirements for Increased Accuracy
              in Explicit Congestion Notification (ECN) Feedback",
              RFC 7560, DOI 10.17487/RFC7560, August 2015.

   [RFC7665]  Halpern, J., Ed. and C. Pignataro, Ed., "Service Function
              Chaining (SFC) Architecture", RFC 7665,
              DOI 10.17487/RFC7665, October 2015.

   [RFC7713]  Mathis, M. and B. Briscoe, "Congestion Exposure (ConEx)
              Concepts, Abstract Mechanism, and Requirements", RFC 7713,
              DOI 10.17487/RFC7713, December 2015.

   [RFC8033]  Pan, R., Natarajan, P., Baker, F., and G. White,
              "Proportional Integral Controller Enhanced (PIE): A
              Lightweight Control Scheme to Address the Bufferbloat
              Problem", RFC 8033, DOI 10.17487/RFC8033, February 2017.

   [RFC8170]  Thaler, D., Ed., "Planning for Protocol Adoption and
              Subsequent Transitions", RFC 8170, DOI 10.17487/RFC8170,
              May 2017.

   [RFC8257]  Bensley, S., Thaler, D., Balasubramanian, P., Eggert, L.,
              and G. Judd, "Data Center TCP (DCTCP): TCP Congestion
              Control for Data Centers", RFC 8257, DOI 10.17487/RFC8257,
              October 2017.

   [RFC8290]  Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys,
              J., and E.
              Dumazet, "The Flow Queue CoDel Packet Scheduler
              and Active Queue Management Algorithm", RFC 8290,
              DOI 10.17487/RFC8290, January 2018.

   [RFC8298]  Johansson, I. and Z. Sarker, "Self-Clocked Rate Adaptation
              for Multimedia", RFC 8298, DOI 10.17487/RFC8298, December
              2017.

   [RFC8311]  Black, D., "Relaxing Restrictions on Explicit Congestion
              Notification (ECN) Experimentation", RFC 8311,
              DOI 10.17487/RFC8311, January 2018.

   [RFC8312]  Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and
              R. Scheffenegger, "CUBIC for Fast Long-Distance Networks",
              RFC 8312, DOI 10.17487/RFC8312, February 2018.

   [RFC8511]  Khademi, N., Welzl, M., Armitage, G., and G. Fairhurst,
              "TCP Alternative Backoff with ECN (ABE)", RFC 8511,
              DOI 10.17487/RFC8511, December 2018.

   [TCP-CA]   Jacobson, V. and M. Karels, "Congestion Avoidance and
              Control", Lawrence Berkeley Labs Technical Report,
              November 1988.

   [TCP-sub-mss-w]
              Briscoe, B. and K. De Schepper, "Scaling TCP's Congestion
              Window for Small Round Trip Times", BT Technical Report
              TR-TUB8-2015-002, May 2015.

   [UnorderedLTE]
              Austrheim, M., "Implementing immediate forwarding for 4G
              in a network simulator", Masters Thesis, Uni Oslo, June
              2019.

Appendix A.  Standardization items

   The following table includes all the items that will need to be
   standardized to provide a full L4S architecture.

   The table is too wide for the ASCII draft format, so it has been
   split into two, with a common column of row index numbers on the
   left.

   The columns in the second part of the table have the following
   meanings:

   WG:  The IETF WG most relevant to this requirement.
        The "tcpm/iccrg"
        combination refers to the procedure typically used for
        congestion control changes, where tcpm owns the approval
        decision, but uses the iccrg for expert review [NewCC_Proc];

   TCP:  Applicable to all forms of TCP congestion control;

   DCTCP:  Applicable to Data Center TCP as currently used (in
      controlled environments);

   DCTCP-bis:  Applicable to a future Data Center TCP congestion
      control intended for controlled environments;

   XXX Prague:  Applicable to a Scalable variant of XXX (TCP/SCTP/
      RMCAT) congestion control.

   +-----+------------------------+------------------------------------+
   | Req | Requirement            | Reference                          |
   | #   |                        |                                    |
   +-----+------------------------+------------------------------------+
   | 0   | ARCHITECTURE           |                                    |
   | 1   | L4S IDENTIFIER         | [I-D.ietf-tsvwg-ecn-l4s-id]        |
   | 2   | DUAL QUEUE AQM         | [I-D.ietf-tsvwg-aqm-dualq-coupled] |
   | 3   | Suitable ECN Feedback  | [I-D.ietf-tcpm-accurate-ecn],      |
   |     |                        | [I-D.stewart-tsvwg-sctpecn].       |
   |     |                        |                                    |
   |     | SCALABLE TRANSPORT -   |                                    |
   |     | SAFETY ADDITIONS       |                                    |
   | 4-1 | Fall back to           | [I-D.ietf-tsvwg-ecn-l4s-id] S.2.3, |
   |     | Reno/Cubic on loss     | [RFC8257]                          |
   | 4-2 | Fall back to           | [I-D.ietf-tsvwg-ecn-l4s-id] S.2.3  |
   |     | Reno/Cubic if classic  |                                    |
   |     | ECN bottleneck         |                                    |
   |     | detected               |                                    |
   |     |                        |                                    |
   | 4-3 | Reduce RTT-dependence  | [I-D.ietf-tsvwg-ecn-l4s-id] S.2.3  |
   |     |                        |                                    |
   | 4-4 | Scaling TCP's          | [I-D.ietf-tsvwg-ecn-l4s-id] S.2.3, |
   |     | Congestion Window for  | [TCP-sub-mss-w]                    |
   |     | Small Round Trip Times |                                    |
   |     |                        |                                    |
   |     | SCALABLE TRANSPORT -   |                                    |
   |     | PERFORMANCE            |                                    |
   |     | ENHANCEMENTS           |                                    |
   | 5-1 | Setting ECT in TCP     | [I-D.ietf-tcpm-generalized-ecn]    |
   |     | Control Packets and    |                                    |
   |     | Retransmissions        |                                    |
   | 5-2 | Faster-than-additive   | [I-D.ietf-tsvwg-ecn-l4s-id] (Appx  |
   |     | increase               | A.2.2)                             |
   | 5-3 | Faster Convergence at  | [I-D.ietf-tsvwg-ecn-l4s-id] (Appx  |
   |     | Flow Start             | A.2.2)                             |
   +-----+------------------------+------------------------------------+

   +-----+--------+-----+-------+-----------+--------+--------+--------+
   |  #  |   WG   | TCP | DCTCP | DCTCP-bis |  TCP   |  SCTP  | RMCAT  |
   |     |        |     |       |           | Prague | Prague | Prague |
   +-----+--------+-----+-------+-----------+--------+--------+--------+
   | 0   | tsvwg  |  Y  |   Y   |     Y     |   Y    |   Y    |   Y    |
   | 1   | tsvwg  |     |       |     Y     |   Y    |   Y    |   Y    |
   | 2   | tsvwg  | n/a |  n/a  |    n/a    |  n/a   |  n/a   |  n/a   |
   | 3   | tcpm   |  Y  |   Y   |     Y     |   Y    |  n/a   |  n/a   |
   | 4-1 | tcpm   |     |   Y   |     Y     |   Y    |   Y    |   Y    |
   | 4-2 | tcpm/  |     |       |           |   Y    |   Y    |   ?    |
   |     | iccrg? |     |       |           |        |        |        |
   | 4-3 | tcpm/  |     |       |     Y     |   Y    |   Y    |   ?    |
   |     | iccrg? |     |       |           |        |        |        |
   | 4-4 | tcpm   |  Y  |   Y   |     Y     |   Y    |   Y    |   ?    |
   | 5-1 | tcpm   |  Y  |   Y   |     Y     |   Y    |  n/a   |  n/a   |
   | 5-2 | tcpm/  |     |       |     Y     |   Y    |   Y    |   ?    |
   |     | iccrg? |     |       |           |        |        |        |
   | 5-3 | tcpm/  |     |       |     Y     |   Y    |   Y    |   ?    |
   |     | iccrg? |     |       |           |        |        |        |
   +-----+--------+-----+-------+-----------+--------+--------+--------+

Authors' Addresses

   Bob Briscoe (editor)
   CableLabs
   UK

   Email: ietf@bobbriscoe.net
   URI:   http://bobbriscoe.net/

   Koen De Schepper
   Nokia Bell Labs
   Antwerp
   Belgium

   Email: koen.de_schepper@nokia.com
   URI:   https://www.bell-labs.com/usr/koen.de_schepper

   Marcelo Bagnulo
   Universidad Carlos III de Madrid
   Av. Universidad 30
   Leganes, Madrid  28911
   Spain

   Phone: 34 91 6249500
   Email: marcelo@it.uc3m.es
   URI:   http://www.it.uc3m.es

   Greg White
   CableLabs
   US

   Email: G.White@CableLabs.com