Internet Engineering Task Force                             G. Fairhurst
Internet-Draft                                    University of Aberdeen
Intended status: Standards Track                       September 6, 2019
Expires: March 9, 2020

        Guidelines for Internet Congestion Control at Endpoints
                      draft-fairhurst-tsvwg-cc-03

Abstract

   This document provides guidance on the design of methods to avoid
   congestion collapse and to provide congestion control.
   Recommendations and requirements on this topic are distributed across
   many documents in the RFC series.  This document therefore seeks to
   gather and consolidate these recommendations.  It is intended to
   provide input to the design of new congestion control methods in
   protocols, such as IETF QUIC.

   The present document is for discussion and comment by the IETF.  If
   published, it is intended to update the Best Current Practice in
   BCP 41, which currently includes "Congestion Control Principles",
   provided in RFC 2914.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on March 9, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
     1.1.  Best Current Practice in the RFC-Series . . . . . . . . .   3
   2.  Terminology . . . . . . . . . . . . . . . . . . . . . . . . .   4
   3.  Principles of Congestion Control  . . . . . . . . . . . . . .   4
     3.1.  A Diversity of Path Characteristics . . . . . . . . . . .   5
     3.2.  Flow Multiplexing and Congestion  . . . . . . . . . . . .   6
     3.3.  Avoiding Congestion Collapse and Flow Starvation  . . . .   8
   4.  Guidelines for Performing Congestion Control  . . . . . . . .   9
     4.1.  Connection Initialization . . . . . . . . . . . . . . . .  10
     4.2.  Using Path Capacity . . . . . . . . . . . . . . . . . . .  11
     4.3.  Timers and Retransmission . . . . . . . . . . . . . . . .  13
     4.4.  Responding to Potential Congestion  . . . . . . . . . . .  14
     4.5.  Using More Capacity . . . . . . . . . . . . . . . . . . .  15
     4.6.  Network Signals . . . . . . . . . . . . . . . . . . . . .  16
     4.7.  Protection of Protocol Mechanisms . . . . . . . . . . . .  17
   5.  IETF Guidelines on Evaluation of Congestion Control . . . . .  17
   6.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  17
   7.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  18
   8.  Security Considerations . . . . . . . . . . . . . . . . . . .  18
   9.  References  . . . . . . . . . . . . . . . . . . . . . . . . .  18
     9.1.  Normative References  . . . . . . . . . . . . . . . . . .  18
     9.2.  Informative References  . . . . . . . . . . . . . . . . .  20
   Appendix A.  Revision Notes . . . . . . . . . . . . . . . . . . .  24
   Author's Address  . . . . . . . . . . . . . . . . . . . . . . . .  25
1.  Introduction

   The IETF has specified Internet transports (e.g., TCP
   [I-D.ietf-tcpm-rfc793bis], UDP [RFC0768], UDP-Lite [RFC3828], SCTP
   [RFC4960], and DCCP [RFC4340]) as well as protocols layered on top of
   these transports (e.g., RTP, QUIC [I-D.ietf-quic-transport], SCTP/UDP
   [RFC6951], DCCP/UDP [RFC6773]) and transports that work directly over
   the IP network layer.  These transports are implemented in endpoints
   (Internet hosts or routers acting as endpoints) and are designed to
   detect and react to network congestion.  TCP was the first transport
   to provide this, although the TCP specification found in RFC 793
   predates this work and does not contain any discussion of using or
   managing a congestion window.

   Recommendations and requirements on this topic are distributed across
   many documents in the RFC series.  This document therefore seeks to
   gather and consolidate these recommendations.  It is intended to
   provide input to the design of new congestion control methods in
   protocols.  The focus of the present document is unicast point-to-
   point transports; this includes migration from using one path to
   another path.

   Some recommendations [RFC5783] and requirements in this document
   apply to point-to-multipoint transports; however, this topic extends
   beyond the current document's scope.  [RFC2914] provides additional
   guidance on the use of multicast.

1.1.  Best Current Practice in the RFC-Series

   Like [RFC2914], this document borrows heavily from earlier
   publications addressing the need for end-to-end congestion control,
   and this subsection provides an overview of key topics.

   [RFC2914] provides a general discussion of the principles of
   congestion control.  Section 3.1 describes preventing congestion
   collapse.  Section 3 discusses fairness, stating "The equitable
   sharing of bandwidth among flows depends on the fact that all flows
   are running compatible congestion control algorithms."

   Section 3.3 of [RFC2914] notes: "In addition to the prevention of
   congestion collapse and concerns about fairness, a third reason for a
   flow to use end-to-end congestion control can be to optimize its own
   performance regarding throughput, delay, and loss.  In some
   circumstances, for example in environments of high statistical
   multiplexing, the delay and loss rate experienced by a flow are
   largely independent of its own sending rate.  However, in
   environments with lower levels of statistical multiplexing or with
   per-flow scheduling, the delay and loss rate experienced by a flow is
   in part a function of the flow's own sending rate.  Thus, a flow can
   use end-to-end congestion control to limit the delay or loss
   experienced by its own packets.  We would note, however, that in an
   environment like the current best-effort Internet, concerns regarding
   congestion collapse and fairness with competing flows limit the range
   of congestion control behaviors available to a flow."

   In addition to the prevention of congestion collapse and concerns
   about fairness, a flow using end-to-end congestion control can
   optimize its own performance regarding throughput, delay, and loss
   [RFC2914].

   The standardization of congestion control in new transports can avoid
   a congestion control "arms race" among competing protocols [RFC2914].
   That is, it can avoid designs of transports that compete for Internet
   resources in a way that significantly reduces the ability of other
   flows to use the Internet.

   The popularity of the Internet has led to a proliferation in the
   number of TCP implementations [RFC2914].  A variety of non-TCP
   transports have also been deployed.  Some transport implementations
   fail to use standardised congestion avoidance mechanisms correctly
   because of poor implementation [RFC2525].  However, this is not the
   only reason, and some transports have chosen mechanisms that are not
   presently standardised, or have adopted approaches to their design
   that differ from present standards.  Guidance is therefore needed not
   only for future standardisation, but to ensure safe and appropriate
   evolution of transports that have not presently been submitted for
   standardisation.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

   The path between endpoints (sometimes called "Internet Hosts" or
   source and destination nodes in IPv6) consists of the endpoint
   protocol stack at the sender and the receiver (which together
   implement the transport service), and a succession of links and
   network devices (routers or middleboxes) that provide connectivity
   across the network.  The set of network devices forming the path is
   not usually fixed, and it should generally be assumed that this set
   can change over arbitrary lengths of time.

   [RFC5783] defines congestion control as "the feedback-based
   adjustment of the rate at which data is sent into the network.
   Congestion control is an indispensable set of principles and
   mechanisms for maintaining the stability of the Internet."  [RFC5783]
   also provides an informational snapshot taken by the IRTF's Internet
   Congestion Control Research Group (ICCRG) in October 2008.

   Other terminology is directly copied from the cited RFCs.

3.  Principles of Congestion Control

   This section summarises the principles for providing congestion
   control, and provides the background for Section 4.

3.1.  A Diversity of Path Characteristics

   Internet transports do not usually rely upon prior resource
   reservation of capacity along the path they use.  In the absence of
   such a reservation, endpoints are unable to determine a safe rate at
   which to start or continue their transmission.  The use of an
   Internet path therefore requires a combination of end-to-end
   transport mechanisms to detect and then respond to changes in the
   capacity that is discovered to be available across the network path.
   Buffering (an increase in latency) or loss (discard of a packet)
   arises when the traffic arriving at a link or network exceeds the
   resources available.

   A transport that uses a path to send packets impacts any Internet
   flows (possibly from or to other endpoints) that share the capacity
   of a common network device or link (i.e., are multiplexed).  As with
   loss, latency can also be incurred for other reasons [RFC3819]
   (Quality of Service link scheduling, link radio resource
   management/bandwidth on demand, transient outages, link
   retransmission, connection/resource setup below the IP layer, etc.).
   When choosing an appropriate rate, packet loss needs to be
   considered.  A network device that does not support Active Queue
   Management (AQM) [RFC7567] typically uses a drop-tail policy to drop
   excess IP packets when its queue becomes full.  Although losses are
   not always due to congestion (loss may be due to link corruption,
   receiver overrun, etc. [RFC3819]), endpoint congestion control has
   to conservatively assume that any loss is potentially due to
   congestion, and then reduce the sending rate of its flows to reflect
   the available capacity.

   Many designs place the responsibility for rate-adaptivity at the
   sender (source) endpoint, based on feedback provided by the remote
   endpoint (receiver).  Congestion control can also be implemented by
   determining an appropriate rate limit at the receiver and using this
   limit to control the maximum transport rate (e.g., using methods such
   as [RFC5348] and [RFC4828]).

   Principles include:

   o  A transport design is REQUIRED to be robust to a change in the set
      of devices forming the network path.  A reconfiguration, reset or
      other event could interrupt this path or trigger a change in the
      set of network devices forming the path.

   o  Transports are REQUIRED to operate safely over the wide range of
      path characteristics presented by Internet paths.

   o  The path characteristics can change over relatively short
      intervals of time (i.e., characteristics discovered do not
      necessarily remain valid for multiple Round Trip Times, RTTs).  In
      particular, the transport SHOULD measure and adapt to the
      characteristics of the path(s) being used.

3.2.  Flow Multiplexing and Congestion

   It is normal to observe some perturbation in latency and/or loss when
   a flow shares a common network bottleneck with other traffic.  This
   impact needs to be considered, and Internet flows ought to implement
   appropriate safeguards to avoid inappropriate impact on other flows
   that share the resources along a path.  Congestion control methods
   satisfy this requirement and therefore can help avoid congestion
   collapse.

   "This raises the issue of the appropriate granularity of a "flow",
   where we define a `flow' as the level of granularity appropriate for
   the application of both fairness and congestion control.  From RFC
   2309: "There are a few `natural' answers: 1) a TCP or UDP connection
   (source address/port, destination address/port); 2) a source/
   destination host pair; 3) a given source host or a given destination
   host.  We would guess that the source/destination host pair gives the
   most appropriate granularity in many circumstances."  The granularity
   of flows for congestion management is, at least in part, a policy
   question that needs to be addressed in the wider IETF community."
   [RFC2914]

   Internet transports need to react to avoid congestion that impacts
   other flows sharing a path.  The Requirements for Internet Hosts
   [RFC1122] formally mandates that endpoints perform congestion
   control.  "Because congestion control is critical to the stable
   operation of the Internet, applications and other protocols that
   choose to use UDP as an Internet transport must employ mechanisms to
   prevent congestion collapse and to establish some degree of fairness
   with concurrent traffic [RFC2914]."
   Additional mechanisms are, in some cases, needed in the upper layer
   protocol for an application that sends datagrams (e.g., using UDP)
   [RFC8085].

   Endpoints could use more than one flow.  "The specific issue of a
   browser opening multiple connections to the same destination has been
   addressed by [RFC2616].  Section 8.1.4 states that "Clients that use
   persistent connections SHOULD limit the number of simultaneous
   connections that they maintain to a given server.  A single-user
   client SHOULD NOT maintain more than 2 connections with any server or
   proxy." [RFC2140].  This suggests that there are opportunities for
   transport connections between the same endpoints (from the same or
   differing applications) to share some information, including their
   congestion control state, if they are known to share the same path.

   An endpoint can become aware of congestion by various means.  A
   signal that indicates congestion on the end-to-end network path needs
   to result in a congestion control reaction by the transport to reduce
   the maximum rate permitted by the sending endpoint [RFC8087].

   The general recommendation in the UDP Guidelines [RFC8085] is that
   applications SHOULD leverage existing congestion control techniques,
   such as those defined for TCP [RFC5681], TCP-Friendly Rate Control
   (TFRC) [RFC5348], SCTP [RFC4960], and other IETF-defined transports.
   This is because there are many trade-offs and details that can have a
   serious impact on the performance of congestion control for the
   application they support and other traffic that seeks to share the
   resources along the path over which they communicate.

   Network devices can be configured to isolate the queuing of packets
   for different flows, or aggregates of flows, and thereby assist in
   reducing the impact of flow multiplexing on other flows.  This could
   include methods seeking to equally distribute resources between
   sharing flows, but this is explicitly not a requirement for a network
   device [Flow-Rate-Fairness].  Endpoints cannot rely on the presence
   and correct configuration of these methods, and therefore need to
   employ methods that work end-to-end, even when a path is expected to
   support such methods.

   Experience has shown that successful protocols developed in a
   specific context or for a particular application tend to also become
   used in a wider range of contexts.  Therefore, IETF specifications by
   default target deployment on the general Internet, or need to be
   defined for use only within a controlled environment.

   Principles include:

   o  Endpoints MUST perform congestion control [RFC1122].

   o  If an application or protocol chooses not to use a congestion-
      controlled transport protocol, it SHOULD control the rate at which
      it sends datagrams to a destination host, in order to fulfil the
      requirements of [RFC2914], as stated in [RFC8085] (a non-normative
      sketch of one way to provide such rate control follows this list).

   o  Transports SHOULD control the aggregate traffic they send on a
      path.  They ought not to use multiple congestion-controlled flows
      between the same endpoints to gain a performance advantage.

   o  Transports that do not target Internet deployment need to be
      constrained to only operate in a controlled environment (e.g., see
      Section 3.6 of [RFC8085]) and provide appropriate mechanisms to
      prevent traffic accidentally leaving the controlled environment
      [RFC8084].

   o  Although network devices can be configured to reduce the impact of
      flow multiplexing on other flows, endpoints MUST NOT rely solely
      on the presence and correct configuration of these methods, except
      when constrained to operate in a controlled environment.
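   As a non-normative illustration of the rate-control principle above
   (and not a substitute for full congestion control), the following
   Python sketch shows one simple way an application that sends UDP
   datagrams might bound its sending rate with a token bucket.  The rate
   and burst values are illustrative assumptions, not recommendations
   taken from any RFC.

      import socket
      import time

      class TokenBucket:
          """Simple token bucket: permits at most `rate` bytes/second,
          with bursts bounded by `burst` bytes."""

          def __init__(self, rate_bytes_per_s, burst_bytes):
              self.rate = rate_bytes_per_s
              self.burst = burst_bytes
              self.tokens = burst_bytes
              self.last = time.monotonic()

          def wait_for(self, nbytes):
              # Refill tokens based on elapsed time, then block until
              # enough tokens are available to send nbytes.
              while True:
                  now = time.monotonic()
                  self.tokens = min(self.burst,
                                    self.tokens + (now - self.last) * self.rate)
                  self.last = now
                  if self.tokens >= nbytes:
                      self.tokens -= nbytes
                      return
                  time.sleep((nbytes - self.tokens) / self.rate)

      def send_paced(payloads, addr, rate_bytes_per_s=125000, burst_bytes=3000):
          # Illustrative defaults: roughly 1 Mb/s, bursts of about two datagrams.
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          bucket = TokenBucket(rate_bytes_per_s, burst_bytes)
          for payload in payloads:
              bucket.wait_for(len(payload))
              sock.sendto(payload, addr)
          sock.close()

   Note that an open-loop limiter such as this does not detect or react
   to congestion; it only bounds the load a sender can place on a path,
   and the remaining requirements of [RFC8085] still apply.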
3.3.  Avoiding Congestion Collapse and Flow Starvation

   A significant pathology can arise when a poorly designed transport
   creates congestion.  This can result in severe service degradation or
   "Internet meltdown".  This phenomenon was first observed during the
   early growth phase of the Internet in the mid 1980s [RFC0896]
   [RFC0970]; it is technically called "congestion collapse".  [RFC2914]
   notes that informally, "congestion collapse occurs when an increase
   in the network load results in a decrease in the useful work done by
   the network."

   Congestion collapse was first reported in the mid 1980s [RFC0896],
   and was largely due to TCP connections unnecessarily retransmitting
   packets that were either in transit or had already been received at
   the receiver.  The congestion collapse that results from the
   unnecessary retransmission of packets is called classical congestion
   collapse.  Classical congestion collapse is a stable condition that
   can result in throughput that is a small fraction of normal
   [RFC0896].  Problems with classical congestion collapse have
   generally been corrected by the timer improvements and congestion
   control mechanisms in modern implementations of TCP [Jacobson88].
   This was a key focus of [RFC2309].

   A second form of potential congestion collapse occurs due to
   undelivered packets [RFC2914]: "Congestion collapse from undelivered
   packets arises when bandwidth is wasted by delivering packets through
   the network that are dropped before reaching their ultimate
   destination.  This is probably the largest unresolved danger with
   respect to congestion collapse in the Internet today.  Different
   scenarios can result in different degrees of congestion collapse, in
   terms of the fraction of the congested links' bandwidth used for
   productive work.  The danger of congestion collapse from undelivered
   packets is due primarily to the increasing deployment of open-loop
   applications not using end-to-end congestion control.  Even more
   destructive would be best-effort applications that *increase* their
   sending rate in response to an increased packet drop rate (e.g.,
   automatically using an increased level of FEC (Forward Error
   Correction))."

   Transports need to be specifically designed with measures to avoid
   starving other flows of capacity (e.g., [RFC7567]).  [RFC2309] also
   discussed the dangers of congestion-unresponsive flows, and states
   that "all UDP-based streaming applications should incorporate
   effective congestion avoidance mechanisms."  [RFC7567] and [RFC8085]
   both reaffirm this, encouraging development of methods to prevent
   starvation.

   Principles include:

   o  Transports MUST avoid inducing flow starvation in other flows that
      share resources along the path they use.

   o  Endpoints MUST treat a loss of all feedback (e.g., expiry of a
      retransmission timeout, RTO) as an indication of persistent
      congestion (i.e., an indication of potential congestion collapse).

   o  When an endpoint detects persistent congestion, it MUST reduce the
      maximum rate (e.g., reduce its congestion window).
      This normally involves the use of protocol timers to detect a lack
      of acknowledgment for transmitted data.

   o  Protocol timers (e.g., used for retransmission or to detect
      persistent congestion) need to be appropriately initialised.  A
      transport SHOULD adapt its protocol timers to follow the measured
      path Round Trip Time (RTT).

   o  A transport MUST employ exponential backoff each time persistent
      congestion is detected [RFC1122], until the path characteristics
      can again be confirmed.

   o  Network devices can provide mechanisms to mitigate the impact of
      congestion collapse by transport flows (e.g., priority forwarding
      of control information, and starvation detection) and SHOULD
      mitigate the impact of non-conformant and malicious flows
      [RFC7567].  These mechanisms complement, but do not replace, the
      endpoint congestion avoidance mechanisms.

4.  Guidelines for Performing Congestion Control

   This section provides guidance for designers of a new transport
   protocol who decide to implement congestion control and its
   associated mechanisms.

   This section draws on language used in the specifications of TCP and
   other IETF transports.  For example, a protocol timer is generally
   needed to detect persistent congestion, and this document uses the
   term Retransmission Timeout (RTO) to refer to the operation of this
   timer.  Similarly, the document refers to a congestion window that
   controls the rate of transmission by the congestion controller.  The
   use of these terms does not imply that endpoints need to implement
   functions in the way that TCP currently does.  Each new transport
   needs to make its own design decisions about how to meet the
   recommendations and requirements for congestion control.

4.1.  Connection Initialization

   When a connection or flow to a new destination is established, the
   endpoints have little information about the characteristics of the
   network path they will use.  This section describes how a flow starts
   transmission over such a path.

   Flow Start:  A new flow between two endpoints cannot assume that
      capacity is available at the start of the flow, unless it uses a
      mechanism to explicitly reserve capacity.  In the absence of a
      capacity signal, a flow MUST therefore start slowly.

      The TCP slow-start algorithm is the accepted standard for flow
      startup [RFC5681].  TCP uses the notion of an Initial Window (IW)
      ([RFC3390], updated by [RFC6928]) to define the initial volume of
      data that can be sent on a path.  This is not the smallest burst,
      or the smallest window, but it is considered a safe starting point
      for a path that is not suffering persistent congestion, and is
      applicable until feedback about the path is received.  The initial
      sending rate (e.g., determined by the IW) needs to be viewed as
      tentative until the capacity is confirmed to be available.
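      As a non-normative illustration, the Initial Window described
      above can be computed as follows.  The function mirrors the
      formulas of [RFC3390] and the larger experimental value of
      [RFC6928]; treating the result as a byte limit and the example
      SMSS value are illustrative assumptions.

         def initial_window(smss, use_experimental_iw10=False):
             """Return the Initial Window (IW) in bytes for a sender
             maximum segment size (SMSS), also in bytes."""
             if use_experimental_iw10:
                 # RFC 6928 (Experimental):
                 # IW = min(10*SMSS, max(2*SMSS, 14600)).
                 return min(10 * smss, max(2 * smss, 14600))
             # RFC 3390: IW = min(4*SMSS, max(2*SMSS, 4380)).
             return min(4 * smss, max(2 * smss, 4380))

      For example, with an SMSS of 1460 bytes this yields 4380 bytes
      (three segments) under [RFC3390] and 14600 bytes (ten segments)
      under [RFC6928]; either value remains tentative until feedback
      confirms that the path can sustain it.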
   Initial RTO Interval:  When a flow sends the first packet, it
      typically has no way to know the actual RTT of the path it uses.
      An initial value has to be chosen for the principal retransmission
      timer, which is used to detect a lack of responsiveness from the
      remote endpoint.  In TCP this is the starting value of the RTO, or
      the corresponding timer in another protocol.  The initial value is
      therefore a trade-off that has important consequences for overall
      Internet stability [RFC6298] [RFC8085].  In the absence of any
      knowledge about the latency of a path, the RTO MUST be
      conservatively set to no less than 1 second.  Values shorter than
      1 second can be problematic (see the appendix of [RFC6298]).
      (Note: Linux TCP has deployed a smaller initial RTO value.)

   Initial RTO Expiry:  If the RTO timer expires while awaiting
      completion of the connection setup (in TCP, the ACK of a SYN
      segment), and the implementation is using an RTO less than 3
      seconds, the local endpoint can resend the connection setup.  The
      RTO MUST then be re-initialized to 3 seconds when data
      transmission begins (i.e., after the three-way handshake
      completes) [RFC6298] [RFC8085].  This conservative increase is
      necessary to avoid congestion collapse when many flows retransmit
      across a shared bottleneck with restricted capacity.

   Initial Measured RTO:  Once an RTT measurement is available (e.g.,
      through reception of an acknowledgement), this value must be
      adjusted, and MUST take into account the RTT variance.  For the
      first sample, this variance cannot be determined, and a local
      endpoint must therefore initialise the variance to RTT/2 (see
      equation 2.2 of [RFC6298] and the related text for UDP in Section
      3.1.1 of [RFC8085]).  A consolidated, non-normative sketch of
      these timer rules is provided at the end of this section.

   Current State:  A congestion controller MAY assume that recently used
      capacity between a pair of endpoints is an indication of future
      capacity available in the next RTT between the same endpoints.  It
      must react (reduce its rate) if this is not confirmed to be true.

   Cached State:  A congestion controller that recently used a specific
      path could use additional state that lets a flow take over the
      capacity that was previously consumed by another flow (e.g., in
      the last RTT) which it understands is using the same path, or
      which was recently using that path.  In TCP, this mechanism is
      referred to as TCP Control Block (TCB) sharing [RFC2140]
      [I-D.ietf-tcpm-2140bis].  The capacity and other information can
      be used to suggest a faster initial sending rate, but this
      information MUST be viewed as tentative until it is confirmed by
      actual traffic being successfully sent across the path.  A sender
      MUST reduce its rate if this capacity is not confirmed within the
      current RTO interval.
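   The timer-related items above follow the computation specified for
   TCP in [RFC6298].  The following Python sketch consolidates those
   rules in a non-normative form, including the exponential backoff
   discussed in Section 3.3 and Section 4.3; the clock granularity and
   the decision to apply the optional 60-second maximum are illustrative
   assumptions.

      class RtoEstimator:
          """RTO maintenance following the rules of RFC 6298."""

          ALPHA = 1.0 / 8
          BETA = 1.0 / 4
          CLOCK_G = 0.1      # illustrative timer granularity (seconds)
          MIN_RTO = 1.0      # conservative lower bound (RFC 6298)
          MAX_RTO = 60.0     # a cap, if applied, must be >= 60 seconds

          def __init__(self):
              self.srtt = None
              self.rttvar = None
              self.rto = 1.0   # initial RTO before any RTT sample

          def on_rtt_sample(self, r):
              if self.srtt is None:
                  # First measurement: SRTT = R, RTTVAR = R/2 (rule 2.2).
                  self.srtt = r
                  self.rttvar = r / 2
              else:
                  # Subsequent measurements (rule 2.3), RTTVAR first.
                  self.rttvar = ((1 - self.BETA) * self.rttvar
                                 + self.BETA * abs(self.srtt - r))
                  self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
              self.rto = self.srtt + max(self.CLOCK_G, 4 * self.rttvar)
              self.rto = min(max(self.rto, self.MIN_RTO), self.MAX_RTO)

          def on_timeout(self):
              # Exponential backoff each time the RTO expires.
              self.rto = min(self.rto * 2, self.MAX_RTO)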
4.2.  Using Path Capacity

   This section describes how a sender needs to regulate the maximum
   volume of data in flight over the interval of the current RTT, and
   how it manages transmission of the capacity that it perceives is
   available.

   Congestion Management:  The capacity available to a flow could be
      expressed as the number of bytes in flight, the sending rate, or a
      limit on the number of unacknowledged segments.  When determining
      the capacity used, all data sent by a sender needs to be accounted
      for; this includes any additional overhead or data generated by
      the transport.  A congestion controller for a flow that uses
      packet FEC encoding (e.g., [RFC6363]) needs to consider the
      additional overhead introduced by the packet FEC.  A transport
      performing congestion management will usually optimise performance
      for its application by avoiding excessive loss or delay, and will
      maintain a congestion window.  In steady state, this congestion
      window reflects a safe limit to the sending rate that has not
      resulted in persistent congestion.

      One common model views the path between two endpoints as a pipe.
      New packets enter the pipe at the sending endpoint, older ones
      leave at the receiving endpoint.  Received data (leaving the
      network path) is usually acknowledged to the sender.  The rate at
      which data leaves the pipe indicates the share of the capacity
      that has been utilised by the flow.  If, on average (over an RTT),
      the sending rate equals the receiving rate, this indicates that
      this capacity can be safely used again in the next RTT.  If the
      average receiving rate is less than the sending rate, then the
      path is either queuing packets, the RTT/path has changed, or there
      is packet loss.

   Transient Path:  Path capacity information is transient.  A sender
      that does not use capacity cannot know whether previously used
      capacity remains available to use, or whether that capacity has
      disappeared (e.g., due to a change in the path that results in a
      smaller bottleneck, or when more traffic emerges that consumes the
      previously available capacity).  For this reason, a transport that
      is limited by the volume of data available to send MUST NOT
      continue to grow the congestion window when the current congestion
      window is more than twice the volume of data acknowledged in the
      last RTT.

      Standard TCP states that a TCP sender "SHOULD set the congestion
      window to no more than the Restart Window (RW)" before beginning
      transmission if the sender has not sent data in an interval that
      exceeds the current retransmission timeout, i.e., when an
      application becomes idle [RFC5681].  An experimental specification
      [RFC7661] permits TCP senders that are application-limited to
      tentatively maintain a congestion window larger than the volume of
      data recently sent over the path, provided that they appropriately
      and rapidly collapse the congestion window when potential
      congestion is detected.  This mechanism is called Congestion
      Window Validation (CWV).

   Burst Mitigation:  Even in the absence of congestion, statistical
      multiplexing of flows can result in transient effects for flows
      sharing common resources.  A sender therefore SHOULD avoid
      inducing excessive congestion on other flows (collateral damage).

      While a congestion controller ought to limit sending at the
      granularity of the current RTT, this can be insufficient to
      satisfy the goals of preventing starvation and mitigating
      collateral damage.  This requires moderating the burst rate of the
      sender to avoid significant periods where one or more flows
      consume all the buffer capacity at the path bottleneck, which
      would otherwise prevent other flows from gaining a reasonable
      share.

      Endpoints SHOULD provide mechanisms to regulate the bursts of
      transmission that the application/protocol sends to the network
      (see Section 3.1.6 of [RFC8085]).  ACK-Clocking [RFC5681] can help
      mitigate bursts for protocols that receive continuous feedback of
      reception (such as TCP).  Sender pacing can also mitigate bursts
      [RFC8085] (see Section 4.6 of [RFC3449]), and has been recommended
      for TCP in conditions where ACK-Clocking is not effective (e.g.,
      [RFC3742], [RFC7661]).  SCTP [RFC4960] defines a maximum burst
      length (Max.Burst) with a recommended value of 4 segments to limit
      the SCTP burst size.
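   Burst mitigation by pacing can be as simple as spreading the packets
   permitted by the congestion window across the measured RTT, rather
   than sending them back-to-back.  The following non-normative Python
   sketch illustrates the idea; the pacing rate of twice the congestion
   window per SRTT and the helper names are illustrative assumptions
   rather than values taken from a standard.

      import time

      def paced_send(packets, cwnd_bytes, srtt_seconds, send_one):
          """Send `packets` (a list of byte strings) no faster than
          approximately 2 * cwnd per SRTT, calling send_one(pkt) for
          each packet."""
          if not packets:
              return
          bytes_per_second = 2.0 * cwnd_bytes / max(srtt_seconds, 0.001)
          next_time = time.monotonic()
          for pkt in packets:
              delay = next_time - time.monotonic()
              if delay > 0:
                  time.sleep(delay)        # wait for this packet's slot
              send_one(pkt)
              # Schedule the next packet based on the size just sent.
              next_time += len(pkt) / bytes_per_second

   In practice, send_one could wrap a datagram socket send call; a real
   implementation would typically use timers or operating-system pacing
   support rather than sleeping in-line.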
4.3.  Timers and Retransmission

   This section describes mechanisms to detect loss and provide
   retransmission, and to protect the network in the absence of timely
   feedback.

   Loss Detection:  Loss detection occurs after a sender determines that
      there is no delivery confirmation within an expected period of
      time.  This can be done by observing the time-ordering of the
      reception of ACKs (as in TCP DupACK), by utilising a timer to
      detect loss (e.g., a transmission timer with a period less than
      the RTO [RFC8085] [I-D.ietf-tcpm-rack]), or by a combination of a
      timer and ordering information to trigger retransmission of data.

   Retransmission:  Retransmission of lost packets or messages is a
      common reliability mechanism.  When loss is detected, the sender
      can choose to retransmit the lost data, ignore the loss, or send
      other data (e.g., [I-D.ietf-quic-loss-recovery]).  Any
      transmission consumes network capacity; retransmissions therefore
      MUST NOT increase the network load in response to congestion loss
      (which worsens that congestion) [RFC8085].  Any method that sends
      additional data following loss is therefore responsible for
      congestion control of the retransmissions (and any other packets
      sent, including FEC information) as well as the original traffic.

   Measuring the RTT:  Once an endpoint has started communicating with
      its peer, the RTT estimate MUST be adjusted by measuring the
      actual path RTT and its variance (see equation 2.3 of [RFC6298]).

   Maintaining the RTO:  The RTO SHOULD be set based on recent RTT
      observations [RFC8085].

   RTO Expiry:  A persistent lack of feedback (e.g., detected by an RTO
      timer, or other means) MUST be treated as an indication of
      potential congestion collapse.  A failure to receive any specific
      response within an RTO interval could potentially be a result of
      an RTT change, a change of path, excessive loss, or even
      congestion collapse.  If there is no response within the RTO
      interval, TCP collapses the congestion window to one segment
      [RFC5681].  Other transports need to respond similarly when they
      detect a loss of feedback.

      An endpoint needs to exponentially back off the RTO interval
      [RFC8085] each time the RTO expires.  That is, the RTO interval
      MUST be set to RTO * 2 [RFC6298] [RFC8085].

   Maximum RTO:  A maximum value MAY be placed on the RTO interval.  The
      maximum limit to the RTO interval MUST NOT be less than 60 seconds
      [RFC6298].

4.4.  Responding to Potential Congestion

   Internet flows SHOULD implement appropriate safeguards to avoid
   inappropriate impact on other flows that share the resources along a
   path.  The safety and responsiveness of new proposals need to be
   evaluated [RFC5166].  In determining an appropriate congestion
   response, designs could take into consideration the size of the
   packets that experience congestion [RFC4828].

   Congestion Response:  An endpoint MUST promptly reduce its rate of
      transmission when it receives or detects an indication of
      congestion (e.g., loss) [RFC2914].

      TCP Reno established a method that relies on a multiplicative
      decrease to halve the sending rate when congestion is detected.
      This response to congestion indications is considered sufficient
      for safe Internet operation, but other decrease factors have also
      been published in the RFC Series [RFC8312].

   ECN Response:  A congestion control design should provide the
      necessary mechanisms to support Explicit Congestion Notification
      (ECN) [RFC3168] [RFC6679], as described in Section 3.1.7 of
      [RFC8085].  This can help determine an appropriate congestion
      window when supported by routers on the path [RFC7567], enabling
      rapid early indication of incipient congestion.

      The early detection of incipient congestion justifies a different
      reaction to an explicit congestion signal compared to the reaction
      to packet loss [RFC8311] [RFC8087].  Simple feedback of received
      Congestion Experienced (CE) marks [RFC3168] relies only on an
      indication that congestion has been experienced within the last
      RTT.  This style of response is appropriate when a flow uses
      ECT(0).  The reaction to reception of this indication was modified
      in TCP ABE [RFC8511].  Further detail about the received CE-
      marking can be obtained by using more accurate receiver feedback
      (e.g., [I-D.ietf-tcpm-accurate-ecn] and extended RTP feedback).
      The more detailed feedback provides an opportunity for a finer
      granularity of congestion response.

      Current work in progress [I-D.ietf-tsvwg-l4s-arch] defines a
      reaction for packets marked with ECT(1), building on the style of
      detailed feedback provided by [I-D.ietf-tcpm-accurate-ecn] and a
      modified marking system [I-D.ietf-tsvwg-aqm-dualq-coupled].
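      As a non-normative illustration of how a design might react
      differently to CE marks and to loss, the following Python sketch
      applies at most one multiplicative decrease per RTT, using the
      conventional halving for loss and the gentler factor explored for
      TCP ABE [RFC8511] for CE marks.  The specific factors, the byte-
      based accounting and the per-RTT gating are illustrative
      assumptions rather than requirements.

         class CongestionResponder:
             """Applies at most one multiplicative decrease per RTT,
             reacting more gently to an ECN CE mark than to loss."""

             BETA_LOSS = 0.5   # conventional Reno-style halving on loss
             BETA_ECN = 0.8    # gentler backoff for CE, as explored in ABE

             def __init__(self, cwnd_bytes, smss=1460):
                 self.cwnd = cwnd_bytes
                 self.smss = smss
                 self.congested_this_rtt = False

             def start_new_rtt(self):
                 # Allow a new reduction once a full RTT has elapsed.
                 self.congested_this_rtt = False

             def on_congestion(self, ecn_ce=False):
                 if self.congested_this_rtt:
                     return self.cwnd     # already reduced in this RTT
                 beta = self.BETA_ECN if ecn_ce else self.BETA_LOSS
                 self.cwnd = max(int(self.cwnd * beta), 2 * self.smss)
                 self.congested_this_rtt = True
                 return self.cwnd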
   Robustness to Path Change:  The detection of congestion and the
      resulting reduction MUST NOT solely depend upon reception of a
      signal from the remote endpoint, because congestion indications
      could themselves be lost under persistent congestion.

      The only way to reliably confirm that a sending endpoint has
      successfully communicated with a remote endpoint is to utilise a
      timer (see Section 4.3) to detect a lack of response that could
      result from a change in the path or the path characteristics
      (usually called the RTO).  Congestion controllers that are unable
      to react within one (or at most a few) RTTs of receiving a
      congestion indication should observe the guidance in Section 3.3
      of the UDP Guidelines [RFC8085].

   Persistent Congestion:  Persistent congestion can result in
      congestion collapse, which MUST be aggressively avoided [RFC2914].
      Endpoints that experience persistent congestion and have already
      exponentially reduced their congestion window to the restart
      window (e.g., one packet) MUST further reduce the rate if the RTO
      timer continues to expire.  For example, TFRC [RFC5348] continues
      to reduce its sending rate under persistent congestion to one
      packet per RTT, and then exponentially backs off the time between
      single packet transmissions if the congestion continues to persist
      [RFC2914].

      [RFC8085] provides guidelines for a sender that does not, or is
      unable to, adapt the congestion window.

4.5.  Using More Capacity

   In the absence of persistent congestion, an endpoint is permitted to
   increase its congestion window and hence its sending rate.  An
   increase should only occur when there is additional data available to
   send across the path (i.e., the sender will utilise the additional
   capacity in the next RTT).

   TCP Reno [RFC5681] defines the slow-start algorithm, which allows a
   sender to exponentially increase the congestion window each RTT from
   the initial window until the first detected congestion event, and an
   Additive-Increase/Multiplicative-Decrease (AIMD) algorithm that
   governs the window thereafter.  This is designed to allow new flows
   to rapidly acquire a suitable congestion window.  Where the
   bandwidth-delay product (BDP) is large, it can take many RTTs to
   determine a suitable share of the path capacity.  Such high-BDP paths
   benefit from methods that more rapidly increase the congestion
   window, but in compensation these need to be designed to also react
   rapidly to any detected congestion (e.g., TCP Cubic [RFC8312]).
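   The start-up and steady-state behaviour described above can be
   summarised, very approximately, by the following non-normative Python
   sketch of a Reno-style controller [RFC5681].  The byte-based
   accounting and the simplified recovery steps are illustrative
   assumptions that gloss over details such as fast recovery; the
   initial window could be seeded from the initial_window() sketch in
   Section 4.1.

      class RenoLikeWindow:
          def __init__(self, iw_bytes, smss=1460):
              self.smss = smss
              self.cwnd = iw_bytes
              self.ssthresh = float("inf")   # no estimate yet

          def on_ack(self, newly_acked_bytes):
              if self.cwnd < self.ssthresh:
                  # Slow start: roughly doubles cwnd each RTT.
                  self.cwnd += min(newly_acked_bytes, self.smss)
              else:
                  # Congestion avoidance: roughly +1 SMSS per RTT.
                  self.cwnd += self.smss * self.smss // self.cwnd

          def on_loss_detected(self, flight_size):
              # Multiplicative decrease of the safe operating point.
              self.ssthresh = max(flight_size // 2, 2 * self.smss)
              self.cwnd = self.ssthresh

          def on_rto(self, flight_size):
              # Loss of all feedback: restart from one segment.
              self.ssthresh = max(flight_size // 2, 2 * self.smss)
              self.cwnd = self.smss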
   Increasing Congestion Window:  A sender MUST NOT continue to increase
      its rate for more than an RTT after a congestion indication is
      received.  The transport SHOULD stop increasing its congestion
      window as soon as it receives an indication of congestion, to
      avoid excessive "overshoot".

      While the sender is increasing the congestion window, it will
      transmit faster than the last known safe rate.  Any increase above
      the last confirmed rate needs to be regarded as tentative, and the
      sender needs to reduce its rate below the last confirmed safe rate
      when congestion is experienced (a congestion event).

   Congestion:  An endpoint MUST utilise a method that assures the
      sender will keep its rate below the previously confirmed safe rate
      for multiple RTTs after an observed congestion event.  In TCP,
      this is performed by using a linear increase from a slow-start
      threshold that is re-initialised when congestion is experienced.

   Avoiding Overshoot:  Overshoot of the congestion window beyond the
      point of congestion can significantly impact other flows sharing
      resources along a path.  It is important to note that, as
      endpoints experience more paths with a large BDP and a wider range
      of potential path RTTs, variability or changes in the path can
      place very significant constraints on the appropriate dynamics for
      increasing the congestion window (see also burst mitigation,
      Section 4.2).

4.6.  Network Signals

   An endpoint can utilise signals from the network to help determine
   how to regulate the traffic it sends.

   Network Signals:  Mechanisms MUST NOT solely rely on transport
      messages or specific signalling messages to perform safely (see
      Section 5.2 of [RFC8085], describing the use of ICMP messages).
      The path characteristics can change at any time.  Transport
      mechanisms need to be robust to potential black-holing of any
      signals (i.e., need to be robust to loss or modification of
      packets).

      A mechanism that utilises signals originating in the network
      (e.g., RSVP, NSIS, Quick-Start, ECN) MUST assume that the set of
      network devices on the path can change.  This motivates the use of
      soft-state when designing protocols that interact with signals
      originating from network devices [I-D.irtf-panrg-what-not-to-do]
      (e.g., ECN).  This can include context-sensitive treatment of
      "soft" signals provided to the endpoint [RFC5164].

4.7.  Protection of Protocol Mechanisms

   An endpoint needs to provide protection from attacks on the traffic
   it generates, or attacks that seek to increase the capacity it
   consumes (impacting other traffic that shares a bottleneck).

   Off-Path Attack:  A design MUST protect the protocol from off-path
      attack [RFC8085].  An attack on the congestion control can lead to
      a Denial of Service (DoS) vulnerability for the flow being
      controlled and/or other flows that share network resources along
      the path.

   Validation of Signals:  Network signalling and control messages
      (e.g., ICMP [RFC0792]) MUST be validated before they are used, to
      protect from malicious abuse.
      This MUST at least include protection from off-path attack
      [RFC8085].

   On-Path Attack:  A protocol can be designed to protect from on-path
      attacks, but this requires more complexity and the use of
      encryption/authentication mechanisms (e.g., IPsec [RFC4301], QUIC
      [I-D.ietf-quic-transport]).

5.  IETF Guidelines on Evaluation of Congestion Control

   The IETF has provided guidance [RFC5033] for considering alternate
   congestion control algorithms.

   The IRTF has also described a set of metrics and related trade-offs
   between metrics that can be used to compare, contrast, and evaluate
   congestion control techniques [RFC5166].  [RFC5783] provides a
   snapshot of congestion-control research in 2008.

6.  Acknowledgements

   This document owes much to the insight offered by Sally Floyd, both
   in the writing of RFC 2914 and in her help and review in the many
   years that followed.

   Nicholas Kuhn helped develop the first draft of these guidelines.
   Tom Jones and Ana Custura reviewed the first version of this draft.
   The University of Aberdeen received funding to support this work from
   the European Space Agency.

7.  IANA Considerations

   This memo includes no request to IANA.

   RFC Editor Note: If there are no requirements for IANA, the section
   will be removed during conversion into an RFC by the RFC Editor.

8.  Security Considerations

   This document introduces no new security considerations.  Each RFC
   listed in this document discusses the security considerations of the
   specification it contains.  The security considerations for the use
   of transports are provided in the cited RFCs.  Security guidance for
   applications using UDP is provided in the UDP Usage Guidelines
   [RFC8085].

   Section 4.7 describes general requirements relating to the design of
   safe protocols and their protection from on-path and off-path attack.

   Section 4.6 follows current best practice to validate ICMP messages
   prior to use.

9.  References

9.1.  Normative References

   [RFC1122]  Braden, R., Ed., "Requirements for Internet Hosts -
              Communication Layers", STD 3, RFC 1122,
              DOI 10.17487/RFC1122, October 1989,
              <https://www.rfc-editor.org/info/rfc1122>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2914]  Floyd, S., "Congestion Control Principles", BCP 41,
              RFC 2914, DOI 10.17487/RFC2914, September 2000,
              <https://www.rfc-editor.org/info/rfc2914>.

   [RFC3168]  Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
              of Explicit Congestion Notification (ECN) to IP",
              RFC 3168, DOI 10.17487/RFC3168, September 2001,
              <https://www.rfc-editor.org/info/rfc3168>.

   [RFC3390]  Allman, M., Floyd, S., and C. Partridge, "Increasing TCP's
              Initial Window", RFC 3390, DOI 10.17487/RFC3390, October
              2002, <https://www.rfc-editor.org/info/rfc3390>.

   [RFC3742]  Floyd, S., "Limited Slow-Start for TCP with Large
              Congestion Windows", RFC 3742, DOI 10.17487/RFC3742, March
              2004, <https://www.rfc-editor.org/info/rfc3742>.

   [RFC5348]  Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP
              Friendly Rate Control (TFRC): Protocol Specification",
              RFC 5348, DOI 10.17487/RFC5348, September 2008,
              <https://www.rfc-editor.org/info/rfc5348>.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, DOI 10.17487/RFC5681, September 2009,
              <https://www.rfc-editor.org/info/rfc5681>.

   [RFC6298]  Paxson, V., Allman, M., Chu, J., and M. Sargent,
              "Computing TCP's Retransmission Timer", RFC 6298,
              DOI 10.17487/RFC6298, June 2011,
              <https://www.rfc-editor.org/info/rfc6298>.
   [RFC6928]  Chu, J., Dukkipati, N., Cheng, Y., and M. Mathis,
              "Increasing TCP's Initial Window", RFC 6928,
              DOI 10.17487/RFC6928, April 2013,
              <https://www.rfc-editor.org/info/rfc6928>.

   [RFC7567]  Baker, F., Ed. and G. Fairhurst, Ed., "IETF
              Recommendations Regarding Active Queue Management",
              BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015,
              <https://www.rfc-editor.org/info/rfc7567>.

   [RFC7661]  Fairhurst, G., Sathiaseelan, A., and R. Secchi, "Updating
              TCP to Support Rate-Limited Traffic", RFC 7661,
              DOI 10.17487/RFC7661, October 2015,
              <https://www.rfc-editor.org/info/rfc7661>.

   [RFC8085]  Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage
              Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085,
              March 2017, <https://www.rfc-editor.org/info/rfc8085>.

9.2.  Informative References

   [Flow-Rate-Fairness]
              Briscoe, B., "Flow Rate Fairness: Dismantling a Religion",
              ACM Computer Communication Review 37(2):63-74, April 2007.

   [I-D.ietf-quic-transport]
              Iyengar, J. and M. Thomson, "QUIC: A UDP-Based Multiplexed
              and Secure Transport", draft-ietf-quic-transport-22 (work
              in progress), July 2019.

   [I-D.ietf-tcpm-2140bis]
              Touch, J., Welzl, M., and S. Islam, "TCP Control Block
              Interdependence", draft-ietf-tcpm-2140bis-00 (work in
              progress), April 2019.

   [I-D.ietf-tcpm-accurate-ecn]
              Briscoe, B., Kuehlewind, M., and R. Scheffenegger, "More
              Accurate ECN Feedback in TCP",
              draft-ietf-tcpm-accurate-ecn-09 (work in progress), July
              2019.

   [I-D.ietf-tcpm-rack]
              Cheng, Y., Cardwell, N., Dukkipati, N., and P. Jha, "RACK:
              a time-based fast loss detection algorithm for TCP",
              draft-ietf-tcpm-rack-05 (work in progress), April 2019.

   [I-D.ietf-tcpm-rfc793bis]
              Eddy, W., "Transmission Control Protocol Specification",
              draft-ietf-tcpm-rfc793bis-14 (work in progress), July
              2019.

   [I-D.ietf-tsvwg-aqm-dualq-coupled]
              Schepper, K., Briscoe, B., and G. White, "DualQ Coupled
              AQMs for Low Latency, Low Loss and Scalable Throughput
              (L4S)", draft-ietf-tsvwg-aqm-dualq-coupled-10 (work in
              progress), July 2019.

   [I-D.ietf-tsvwg-l4s-arch]
              Briscoe, B., Schepper, K., Bagnulo, M., and G. White, "Low
              Latency, Low Loss, Scalable Throughput (L4S) Internet
              Service: Architecture", draft-ietf-tsvwg-l4s-arch-04 (work
              in progress), July 2019.

   [I-D.irtf-panrg-what-not-to-do]
              Dawkins, S., "Path Aware Networking: Obstacles to
              Deployment (A Bestiary of Roads Not Taken)",
              draft-irtf-panrg-what-not-to-do-03 (work in progress), May
              2019.

   [RFC0768]  Postel, J., "User Datagram Protocol", STD 6, RFC 768,
              DOI 10.17487/RFC0768, August 1980,
              <https://www.rfc-editor.org/info/rfc768>.

   [RFC0792]  Postel, J., "Internet Control Message Protocol", STD 5,
              RFC 792, DOI 10.17487/RFC0792, September 1981,
              <https://www.rfc-editor.org/info/rfc792>.

   [RFC0896]  Nagle, J., "Congestion Control in IP/TCP Internetworks",
              RFC 896, DOI 10.17487/RFC0896, January 1984,
              <https://www.rfc-editor.org/info/rfc896>.

   [RFC0970]  Nagle, J., "On Packet Switches With Infinite Storage",
              RFC 970, DOI 10.17487/RFC0970, December 1985,
              <https://www.rfc-editor.org/info/rfc970>.

   [RFC2140]  Touch, J., "TCP Control Block Interdependence", RFC 2140,
              DOI 10.17487/RFC2140, April 1997,
              <https://www.rfc-editor.org/info/rfc2140>.

   [RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
              S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
              Partridge, C., Peterson, L., Ramakrishnan, K., Shenker,
              S., Wroclawski, J., and L. Zhang, "Recommendations on
              Queue Management and Congestion Avoidance in the
              Internet", RFC 2309, DOI 10.17487/RFC2309, April 1998,
              <https://www.rfc-editor.org/info/rfc2309>.
Volz, "Known 986 TCP Implementation Problems", RFC 2525, 987 DOI 10.17487/RFC2525, March 1999, 988 . 990 [RFC2616] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., 991 Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext 992 Transfer Protocol -- HTTP/1.1", RFC 2616, 993 DOI 10.17487/RFC2616, June 1999, 994 . 996 [RFC3449] Balakrishnan, H., Padmanabhan, V., Fairhurst, G., and M. 997 Sooriyabandara, "TCP Performance Implications of Network 998 Path Asymmetry", BCP 69, RFC 3449, DOI 10.17487/RFC3449, 999 December 2002, . 1001 [RFC3819] Karn, P., Ed., Bormann, C., Fairhurst, G., Grossman, D., 1002 Ludwig, R., Mahdavi, J., Montenegro, G., Touch, J., and L. 1003 Wood, "Advice for Internet Subnetwork Designers", BCP 89, 1004 RFC 3819, DOI 10.17487/RFC3819, July 2004, 1005 . 1007 [RFC3828] Larzon, L-A., Degermark, M., Pink, S., Jonsson, L-E., Ed., 1008 and G. Fairhurst, Ed., "The Lightweight User Datagram 1009 Protocol (UDP-Lite)", RFC 3828, DOI 10.17487/RFC3828, July 1010 2004, . 1012 [RFC4301] Kent, S. and K. Seo, "Security Architecture for the 1013 Internet Protocol", RFC 4301, DOI 10.17487/RFC4301, 1014 December 2005, . 1016 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1017 Congestion Control Protocol (DCCP)", RFC 4340, 1018 DOI 10.17487/RFC4340, March 2006, 1019 . 1021 [RFC4828] Floyd, S. and E. Kohler, "TCP Friendly Rate Control 1022 (TFRC): The Small-Packet (SP) Variant", RFC 4828, 1023 DOI 10.17487/RFC4828, April 2007, 1024 . 1026 [RFC4960] Stewart, R., Ed., "Stream Control Transmission Protocol", 1027 RFC 4960, DOI 10.17487/RFC4960, September 2007, 1028 . 1030 [RFC5033] Floyd, S. and M. Allman, "Specifying New Congestion 1031 Control Algorithms", BCP 133, RFC 5033, 1032 DOI 10.17487/RFC5033, August 2007, 1033 . 1035 [RFC5164] Melia, T., Ed., "Mobility Services Transport: Problem 1036 Statement", RFC 5164, DOI 10.17487/RFC5164, March 2008, 1037 . 1039 [RFC5166] Floyd, S., Ed., "Metrics for the Evaluation of Congestion 1040 Control Mechanisms", RFC 5166, DOI 10.17487/RFC5166, March 1041 2008, . 1043 [RFC5783] Welzl, M. and W. Eddy, "Congestion Control in the RFC 1044 Series", RFC 5783, DOI 10.17487/RFC5783, February 2010, 1045 . 1047 [RFC6363] Watson, M., Begen, A., and V. Roca, "Forward Error 1048 Correction (FEC) Framework", RFC 6363, 1049 DOI 10.17487/RFC6363, October 2011, 1050 . 1052 [RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P., 1053 and K. Carlberg, "Explicit Congestion Notification (ECN) 1054 for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August 1055 2012, . 1057 [RFC6773] Phelan, T., Fairhurst, G., and C. Perkins, "DCCP-UDP: A 1058 Datagram Congestion Control Protocol UDP Encapsulation for 1059 NAT Traversal", RFC 6773, DOI 10.17487/RFC6773, November 1060 2012, . 1062 [RFC6951] Tuexen, M. and R. Stewart, "UDP Encapsulation of Stream 1063 Control Transmission Protocol (SCTP) Packets for End-Host 1064 to End-Host Communication", RFC 6951, 1065 DOI 10.17487/RFC6951, May 2013, 1066 . 1068 [RFC8084] Fairhurst, G., "Network Transport Circuit Breakers", 1069 BCP 208, RFC 8084, DOI 10.17487/RFC8084, March 2017, 1070 . 1072 [RFC8087] Fairhurst, G. and M. Welzl, "The Benefits of Using 1073 Explicit Congestion Notification (ECN)", RFC 8087, 1074 DOI 10.17487/RFC8087, March 2017, 1075 . 1077 [RFC8311] Black, D., "Relaxing Restrictions on Explicit Congestion 1078 Notification (ECN) Experimentation", RFC 8311, 1079 DOI 10.17487/RFC8311, January 2018, 1080 . 1082 [RFC8312] Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and 1083 R. 
Scheffenegger, "CUBIC for Fast Long-Distance Networks", 1084 RFC 8312, DOI 10.17487/RFC8312, February 2018, 1085 . 1087 [RFC8511] Khademi, N., Welzl, M., Armitage, G., and G. Fairhurst, 1088 "TCP Alternative Backoff with ECN (ABE)", RFC 8511, 1089 DOI 10.17487/RFC8511, December 2018, 1090 . 1092 Appendix A. Revision Notes 1094 Note to RFC-Editor: please remove this entire section prior to 1095 publication. 1097 Individual draft -00: 1099 o Comments and corrections are welcome directly to the authors or 1100 via the IETF TSVWG, working group mailing list. 1102 IndivRFC896 idual draft -01: 1104 o This update is proposed for initial WG comments. 1106 o If there is interest in progressing this document, the next 1107 version will include more complee referencing to citred material. 1109 Individual draft -02: 1111 o Correction of typos. 1113 Individual draft -03: 1115 o Added section 1.1 with text on current BCP status with additional 1116 alignment and updates to RFC2914 on Congestion Control Principles 1117 (after question from M. Scharf). 1119 o Edits to consolidate starvation text. 1121 o Added text that multicast currently noting that this is out of 1122 scope. 1124 o Revised sender-based CC text after comment from C. Perkins 1125 (Section 3.1,3.3 and other places). 1127 o Added more about FEC after comment from C. Perkins. 1129 o Added an explicit reference to RFC 5783 and updated this text 1130 (after question from M. Scharf). 1132 o To avoid doubt, added a para about "Each new transport needs to 1133 make its own design decisions about how to meet the 1134 recommendations and requirements for congestion control." 1136 o Upated references. 1138 o This draft does not attempt to address further alignment with 1139 draft-ietf-tcpm-rto-consider. This will form part of a future 1140 revision. 1142 Author's Address 1144 Godred Fairhurst 1145 University of Aberdeen 1146 School of Engineering 1147 Fraser Noble Building 1148 Aberdeen AB24 3U 1149 UK 1151 Email: gorry@erg.abdn.ac.uk