TCP Maintenance Working Group                                 M. Mathis
Internet-Draft                                             N. Dukkipati
Obsoletes: 6937 (if approved)                                  Y. Cheng
Intended status: Standards Track                           Google, Inc.
Expires: 26 August 2021                                 22 February 2021

                  Proportional Rate Reduction for TCP
                    draft-ietf-tcpm-prr-rfc6937bis-01

Abstract

   This document updates the experimental Proportional Rate Reduction
   (PRR) algorithm, described in RFC 6937, to standards track.  PRR
   potentially replaces the Fast Recovery and Rate-Halving algorithms.
   All of these algorithms regulate the amount of data sent by TCP or
   other transport protocols during loss recovery.  PRR accurately
   regulates the actual flight size through recovery such that at the
   end of recovery it will be as close as possible to ssthresh, as
   determined by the congestion control algorithm.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 26 August 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents (https://trustee.ietf.org/
   license-info) in effect on the date of publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Document and WG Information
   2.  Background
   3.  Changes From RFC 6937
   4.  Relationships to other standards
   5.  Definitions
   6.  Algorithms
   7.  Examples
   8.  Properties
   9.  Adapting PRR to other transport protocols
   10. Acknowledgements
   11. Security Considerations
   12. Normative References
   13. Informative References
   Appendix A.  Strong Packet Conservation Bound
   Authors' Addresses

1.  Introduction

   This document updates the Proportional Rate Reduction (PRR)
   algorithm described in [RFC6937] from experimental to standards
   track.  PRR accurately regulates the amount of data sent during loss
   recovery, such that at the end of recovery the flight size will be
   as close as possible to ssthresh, as determined by the congestion
   control algorithm.  PRR has been deployed in at least three major
   operating systems, covering the vast majority of today's web
   traffic.

   The only change from RFC 6937 is the introduction of a new heuristic
   that replaces a manual configuration parameter.  There have been no
   changes to the behaviors of the algorithms or to the previously
   published results.  The new heuristic only changes behaviors in
   corner cases that were not relevant before the Lost Retransmission
   Detection (LRD) algorithm, which was not implemented until after RFC
   6937 was published.  This document also includes additional
   discussion about integration into other congestion control and
   recovery algorithms.

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

1.1.  Document and WG Information

   Formatted: 2021-02-22 14:22:57-08:00

   Please send all comments, questions, and feedback to tcpm@ietf.org.

   About revision 00:

   The introduction above was drawn from draft-mathis-tcpm-rfc6937bis-
   00.  All of the text below was copied verbatim from RFC 6937, to
   facilitate comparison between RFC 6937 and this document as it
   evolves.

   About revision 01:

   *  Recast the RFC 6937 introduction as background

   *  Made "Changes From RFC 6937" an explicit section

   *  Made "Relationships to other standards" more explicit

   *  Added a generalized safeACK heuristic

   *  Provided hints for non-TCP implementations

   *  Added language about detecting ACK splitting, but have no advice
      on actions (yet)

2.  Background

   This section is copied almost verbatim from the introduction to RFC
   6937.

   Standard congestion control [RFC5681] requires that TCP (and other
   protocols) reduce their congestion window (cwnd) in response to
   losses.  Fast Recovery, described in the same document, is the
   reference algorithm for making this adjustment.  Its stated goal is
   to recover TCP's self clock by relying on returning ACKs during
   recovery to clock more data into the network.  Fast Recovery
   typically adjusts the window by waiting for one half round-trip time
   (RTT) of ACKs to pass before sending any data.  It is fragile
   because it cannot compensate for the implicit window reduction
   caused by the losses themselves.

   RFC 6675 [RFC6675] makes Fast Recovery with Selective
   Acknowledgement (SACK) [RFC2018] more accurate by computing "pipe",
   a sender-side estimate of the number of bytes still outstanding in
   the network.  With RFC 6675, Fast Recovery is implemented by sending
   data as necessary on each ACK to prevent pipe from falling below the
   slow-start threshold (ssthresh), the window size as determined by
   the congestion control algorithm.  This protects Fast Recovery from
   timeouts in many cases where there are heavy losses, although not if
   the entire second half of the window of data or ACKs is lost.
   However, a single ACK carrying a SACK option that implies a large
   quantity of missing data can cause a step discontinuity in the pipe
   estimator, which can cause Fast Retransmit to send a burst of data.

   The Rate-Halving algorithm sends data on alternate ACKs during
   recovery, such that after 1 RTT the window has been halved.  Rate-
   Halving was implemented in Linux after only being informally
   published [RHweb], including an uncompleted document [RHID].  Rate-
   Halving also does not adequately compensate for the implicit window
   reduction caused by the losses, and it assumes a net 50% window
   reduction, which was completely standard at the time it was written
   but is not appropriate for modern congestion control algorithms,
   such as CUBIC [CUBIC], which reduce the window by less than 50%.  As
   a consequence, Rate-Halving often allows the window to fall further
   than necessary, reducing performance and increasing the risk of
   timeouts if there are additional losses.

   PRR avoids these excess window adjustments such that at the end of
   recovery the actual window size will be as close as possible to
   ssthresh, the window size as determined by the congestion control
   algorithm.  It is patterned after Rate-Halving, but uses the
   fraction that is appropriate for the target window chosen by the
   congestion control algorithm.  During PRR, one of two additional
   Reduction Bound algorithms limits the total window reduction due to
   all mechanisms, including transient application stalls and the
   losses themselves.

   We describe two slightly different Reduction Bound algorithms:
   Conservative Reduction Bound (CRB), which is strictly packet
   conserving; and Slow Start Reduction Bound (SSRB), which is more
   aggressive than CRB by, at most, 1 segment per ACK.  PRR-CRB meets
   the Strong Packet Conservation Bound described in Appendix A;
   however, in real networks it does not perform as well as the
   algorithms described in RFC 6675, which prove to be more aggressive
   in a significant number of cases.  SSRB offers a compromise by
   allowing TCP to send 1 additional segment per ACK relative to CRB in
   some situations.  Although SSRB is less aggressive than RFC 6675
   (transmitting fewer segments or taking more time to transmit them),
   it outperforms it, due to the lower probability of additional losses
   during recovery.

   The Strong Packet Conservation Bound on which PRR and both Reduction
   Bounds are based is patterned after Van Jacobson's packet
   conservation principle: segments delivered to the receiver are used
   as the clock to trigger sending the same number of segments back
   into the network.  As much as possible, PRR and the Reduction Bound
   algorithms rely on this self clock process, and are only slightly
   affected by the accuracy of other estimators, such as pipe [RFC6675]
   and cwnd.  This is what gives the algorithms their precision in the
   presence of events that cause uncertainty in other estimators.

   The original definition of the packet conservation principle
   [Jacobson88] treated packets that are presumed to be lost (e.g.,
   marked as candidates for retransmission) as having left the network.
   This idea is reflected in the pipe estimator defined in RFC 6675 and
   used here, but it is distinct from the Strong Packet Conservation
   Bound as described in Appendix A, which is defined solely on the
   basis of data arriving at the receiver.

3.  Changes From RFC 6937

   The largest change since RFC 6937 [RFC6937] is the introduction of a
   new heuristic that uses good recovery progress (for TCP, snd.una
   advances and no additional segments are marked as lost) to select
   which Reduction Bound to use.  RFC 6937 left the choice of Reduction
   Bound to the discretion of the implementer but recommended PRR-SSRB
   by default.  For all of the environments explored in earlier PRR
   research, the new heuristic is consistent with the old
   recommendation.

   The paper "An Internet-Wide Analysis of Traffic Policing"
   [Flach2016policing] uncovered a crucial situation, not previously
   explored, where both Reduction Bounds perform very poorly, but for
   different reasons.  Under many configurations, token bucket traffic
   policers [token_bucket] can suddenly start discarding a large
   fraction of the traffic, without any warning to the end systems.
   The transport congestion control has no opportunity to measure the
   token rate, and sets ssthresh based on the previously observed path
   performance.  This value for ssthresh may result in a data rate that
   is substantially larger than the token rate, causing persistent high
   loss.  Under these conditions, both Reduction Bounds perform very
   poorly.  PRR-CRB is too timid, sometimes causing very long recovery
   times at smaller than necessary windows, and PRR-SSRB is too
   aggressive, often causing many retransmissions to be lost multiple
   times.

   Investigating these environments led to the development of a
   "safeACK" heuristic to dynamically switch between Reduction Bounds:
   use PRR-SSRB for ACKs reporting that the recovery is making good
   progress (snd.una is advancing without any new losses), and PRR-CRB
   otherwise.

   This heuristic is only invoked when application-limited behavior,
   losses, or other events cause the flight size to fall below
   ssthresh.  The extreme loss rates that make the heuristic important
   are only common in the presence of token bucket policers, which are
   pathologically wasteful and inefficient [Flach2016policing].  In
   these environments the heuristic serves to salvage a bad situation,
   and any reasonable implementation of the heuristic performs far
   better than either bound by itself.  The heuristic has no effect
   whatsoever in congestion events where there are no lost
   retransmissions, including all of the examples described below and
   in RFC 6937.

   Since RFC 6937 was written, PRR has also been adapted to perform
   multiplicative window reduction in response to non-loss congestion
   signals, such as RFC 3168 style ECN.  This is typically done by
   using some parts of the loss recovery state machine (in particular
   the RecoveryPoint from RFC 6675) to invoke the PRR ACK processing
   for exactly one round trip worth of ACKs.

   For RFC 6937 we published a companion paper [IMC11] in which we
   evaluated Fast Retransmit, Rate-Halving, and various experimental
   PRR versions in a large scale measurement study.  Today, the legacy
   algorithms used in that study have already faded from the code base,
   making such comparisons impossible without recreating historical
   algorithms.  Readers interested in the measurement study should
   review section 5 of RFC 6937 and the IMC paper [IMC11].

4.  Relationships to other standards

   PRR is described as modifications to "TCP Congestion Control"
   [RFC5681] and "A Conservative Loss Recovery Algorithm Based on
   Selective Acknowledgment (SACK) for TCP" [RFC6675].  It is most
   accurate and most easily implemented with SACK [RFC2018], but it
   does not require SACK.

   The safeACK heuristic came about as a consequence of robust Lost
   Retransmission Detection under development in an early precursor to
   [RACK].  Without LRD, policers that cause very high loss rates are
   guaranteed to also cause retransmission timeouts, because both RFC
   5681 and RFC 6675 will send retransmissions above the policed rate.
   PRR and the safeACK heuristic were already well in place before the
   RACK algorithm had fully matured.  Note that there is no experience
   implementing or testing RACK without PRR.

   For this reason, it is recommended that PRR be implemented together
   with RACK.

5.  Definitions

   The following terms, parameters, and state variables are used as
   they are defined in earlier documents:

   RFC 793: snd.una (send unacknowledged).

   RFC 5681: duplicate ACK, FlightSize, Sender Maximum Segment Size
   (SMSS).

   RFC 6675: covered (as in "covered sequence numbers").

   Voluntary window reductions: choosing not to send data in response
   to some ACKs, for the purpose of reducing the sending window size
   and data rate.

   We define some additional variables:

   SACKd: The total number of bytes that the scoreboard indicates have
   been delivered to the receiver.  This can be computed by scanning
   the scoreboard and counting the total number of bytes covered by all
   SACK blocks.  If SACK is not in use, SACKd is not defined.

   DeliveredData: The total number of bytes that the current ACK
   indicates have been delivered to the receiver.  With SACK,
   DeliveredData can be computed precisely as the change in snd.una,
   plus the (signed) change in SACKd.  In recovery without SACK,
   DeliveredData is estimated to be 1 SMSS on duplicate
   acknowledgements, and on a subsequent partial or full ACK,
   DeliveredData is estimated to be the change in snd.una, minus 1 SMSS
   for each preceding duplicate ACK.  If this calculation results in a
   negative DeliveredData, the data sender can infer that the receiver
   is using an ACK splitting attack [and do what? @@@@]

   Note that DeliveredData is robust; for TCP using SACK, DeliveredData
   can be precisely computed anywhere along the return path by
   inspecting the returning ACKs.  The consequence of missing ACKs is
   that later ACKs will show a larger DeliveredData.  Furthermore, for
   any TCP (with or without SACK), the sum of DeliveredData must agree
   with the forward progress over the same time interval.

   safeACK: A local variable indicating that the current ACK reported
   good progress: snd.una advanced with no additional segments newly
   marked lost.

   sndcnt: A local variable indicating exactly how many bytes should be
   sent in response to each ACK.  Note that the decision of which data
   to send (e.g., retransmit missing data or send more new data) is out
   of scope for this document.

6.  Algorithms

   At the beginning of recovery, initialize the PRR state.  This
   assumes a modern congestion control algorithm, CongCtrlAlg(), that
   might set ssthresh to something other than FlightSize/2:

      ssthresh = CongCtrlAlg()      // Target cwnd after recovery
      prr_delivered = 0             // Total bytes delivered in recovery
      prr_out = 0                   // Total bytes sent in recovery
      RecoverFS = snd.nxt - snd.una // FlightSize at start of recovery

                                 Figure 1

   On every ACK during recovery compute:

      DeliveredData = change_in(snd.una) + change_in(SACKd)
      prr_delivered += DeliveredData
      pipe = (RFC 6675 pipe algorithm)
      safeACK = (snd.una advances with no new losses)
      if (pipe > ssthresh) {
         // Proportional Rate Reduction
         sndcnt = CEIL(prr_delivered * ssthresh / RecoverFS) - prr_out
      } else {
         // Two versions of the Reduction Bound
         if (safeACK) {          // PRR-SSRB
            limit = MAX(prr_delivered - prr_out, DeliveredData) + SMSS
         } else {                // PRR-CRB
            limit = prr_delivered - prr_out
         }
         // Attempt to catch up, as permitted by limit
         sndcnt = MIN(ssthresh - pipe, limit)
      }

                                 Figure 2

   On any data transmission or retransmission:

      prr_out += (data sent)  // strictly less than or equal to sndcnt

                                 Figure 3
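
   The following C sketch restates Figures 1-3 in one place.  It is an
   illustration only, not part of the specification: the structure
   layout, function names, signed byte counters, and the final clamp of
   sndcnt to zero are assumptions made for this example.  The caller is
   assumed to supply DeliveredData, safeACK, and pipe as computed per
   the definitions above.

      #include <stdint.h>

      struct prr_state {
          int64_t ssthresh;      /* target cwnd after recovery (bytes) */
          int64_t prr_delivered; /* total bytes delivered in recovery */
          int64_t prr_out;       /* total bytes sent in recovery */
          int64_t recover_fs;    /* FlightSize at start of recovery */
      };

      /* At the beginning of recovery (Figure 1). */
      void prr_enter_recovery(struct prr_state *s, int64_t flight_size,
                              int64_t cc_ssthresh)
      {
          s->ssthresh = cc_ssthresh;   /* from CongCtrlAlg() */
          s->prr_delivered = 0;
          s->prr_out = 0;
          s->recover_fs = flight_size; /* snd.nxt - snd.una */
      }

      /* On every ACK during recovery (Figure 2); returns sndcnt. */
      int64_t prr_on_ack(struct prr_state *s, int64_t delivered_data,
                         int safe_ack, int64_t pipe, int64_t smss)
      {
          int64_t sndcnt, limit;

          s->prr_delivered += delivered_data;

          if (pipe > s->ssthresh) {
              /* Proportional Rate Reduction; integer CEIL()
               * (assumes no overflow for realistic windows). */
              sndcnt = (s->prr_delivered * s->ssthresh +
                        s->recover_fs - 1) / s->recover_fs - s->prr_out;
          } else {
              if (safe_ack) {
                  /* PRR-SSRB: at most 1 extra segment per ACK. */
                  limit = s->prr_delivered - s->prr_out;
                  if (limit < delivered_data)
                      limit = delivered_data;
                  limit += smss;
              } else {
                  /* PRR-CRB: strict packet conservation. */
                  limit = s->prr_delivered - s->prr_out;
              }
              /* Attempt to catch up, as permitted by limit. */
              sndcnt = s->ssthresh - pipe;
              if (sndcnt > limit)
                  sndcnt = limit;
          }
          return sndcnt > 0 ? sndcnt : 0; /* never a negative count */
      }

      /* On any data transmission or retransmission (Figure 3). */
      void prr_on_send(struct prr_state *s, int64_t bytes_sent)
      {
          s->prr_out += bytes_sent;
      }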

7.  Examples

   We illustrate these algorithms by showing their different behaviors
   for two scenarios: TCP experiencing either a single loss or a burst
   of 15 consecutive losses.  In all cases we assume bulk data (no
   application pauses), standard Additive Increase Multiplicative
   Decrease (AIMD) congestion control, and cwnd = FlightSize = pipe =
   20 segments, so ssthresh will be set to 10 at the beginning of
   recovery.  We also assume standard Fast Retransmit and Limited
   Transmit [RFC3042], so TCP will send 2 new segments followed by 1
   retransmit in response to the first 3 duplicate ACKs following the
   losses.

   Each of the diagrams below shows the per-ACK response to the first
   round trip for the various recovery algorithms when the zeroth
   segment is lost.  The top line indicates the transmitted segment
   number triggering the ACKs, with an X for the lost segment.  "cwnd"
   and "pipe" indicate the values of these variables after processing
   each returning ACK.  "Sent" indicates how much 'N'ew or
   'R'etransmitted data would be sent.  Note that the algorithms for
   deciding which data to send are out of scope of this document.

   We are including the Linux Rate-Halving implementation to illustrate
   the state of the art at the time, even though this algorithm is no
   longer supported.

   When there is a single loss, PRR with either of the Reduction Bound
   algorithms has the same behavior.  We show "RB", a flag indicating
   which Reduction Bound subexpression ultimately determined the value
   of sndcnt.  When there are minimal losses, "limit" (both algorithms)
   will always be larger than ssthresh - pipe, so sndcnt will be
   ssthresh - pipe, indicated by "s" in the "RB" row.

   RFC 6675
   ack#    X  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
   cwnd:      20 20 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
   pipe:      19 19 18 18 17 16 15 14 13 12 11 10 10 10 10 10 10 10 10
   sent:       N  N  R                          N  N  N  N  N  N  N  N

   Rate-Halving (Historical Linux)
   ack#    X  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
   cwnd:      20 20 19 18 18 17 17 16 16 15 15 14 14 13 13 12 12 11 11
   pipe:      19 19 18 18 17 17 16 16 15 15 14 14 13 13 12 12 11 11 10
   sent:       N  N  R     N     N     N     N     N     N     N     N

   PRR
   ack#    X  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
   pipe:      19 19 18 18 18 17 17 16 16 15 15 14 14 13 13 12 12 11 10
   sent:       N  N  R     N     N     N     N     N     N     N     N
   RB:                                                        s     s
   Cwnd is not shown because PRR does not use it.

   Key for RB
   s: sndcnt = ssthresh - pipe                  // from ssthresh
   b: sndcnt = prr_delivered - prr_out + SMSS   // from banked
   d: sndcnt = DeliveredData + SMSS             // from DeliveredData
   (Sometimes, more than one applies.)

                                 Figure 4

   Note that all 3 algorithms send the same total amount of data.  RFC
   6675 experiences a "half window of silence", while Rate-Halving and
   PRR spread the voluntary window reduction across an entire RTT.

   Next, we consider the same initial conditions when the first 15
   packets (0-14) are lost.  During the remainder of the lossy RTT,
   only 5 ACKs are returned to the sender.  We examine each of these
   algorithms in succession.

   RFC 6675
   ack#    X  X  X  X  X  X  X  X  X  X  X  X  X  X  X 15 16 17 18 19
   cwnd:                                                20 20 11 11 11
   pipe:                                                19 19  4 10 10
   sent:                                                 N  N 7R  R  R

   Rate-Halving (Historical Linux)
   ack#    X  X  X  X  X  X  X  X  X  X  X  X  X  X  X 15 16 17 18 19
   cwnd:                                                20 20  5  5  5
   pipe:                                                19 19  4  4  4
   sent:                                                 N  N  R  R  R

   PRR-CRB
   ack#    X  X  X  X  X  X  X  X  X  X  X  X  X  X  X 15 16 17 18 19
   pipe:                                                19 19  4  4  4
   sent:                                                 N  N  R  R  R
   RB:                                                         b  b  b

   PRR-SSRB
   ack#    X  X  X  X  X  X  X  X  X  X  X  X  X  X  X 15 16 17 18 19
   pipe:                                                19 19  4  5  6
   sent:                                                 N  N 2R 2R 2R
   RB:                                                        bd  d  d

                                 Figure 5

   In this specific situation, RFC 6675 is more aggressive, because
   once Fast Retransmit is triggered (on the ACK for segment 17), TCP
   immediately retransmits sufficient data to bring pipe up to cwnd.
   Our earlier measurements [RFC 6937 section 6] indicate that RFC 6675
   significantly outperforms Rate-Halving, PRR-CRB, and some other
   similarly conservative algorithms that we tested, showing that it is
   significantly common for the actual losses to exceed the window
   reduction determined by the congestion control algorithm.

   The Linux implementation of Rate-Halving included an early version
   of the Conservative Reduction Bound [RHweb].  With this algorithm,
   the 5 ACKs trigger exactly 1 transmission each (2 new data, 3 old
   data), and cwnd is set to 5.  At a window size of 5, it takes 3
   round trips to retransmit all 15 lost segments.  Rate-Halving does
   not raise the window at all during recovery, so when recovery
   finally completes, TCP will slow start cwnd from 5 up to 10.  In
   this example, TCP operates at half of the window chosen by the
   congestion control for more than 3 RTTs, increasing the elapsed time
   and exposing it to timeouts in the event that there are additional
   losses.

   PRR-CRB implements a Conservative Reduction Bound.  Since the total
   losses bring pipe below ssthresh, data is sent such that the total
   data transmitted, prr_out, follows the total data delivered to the
   receiver as reported by returning ACKs.  Transmission is controlled
   by the sending limit, which is set to prr_delivered - prr_out.  This
   is indicated by the RB:b tagging in the figure.  In this case, PRR-
   CRB is exposed to exactly the same problems as Rate-Halving; the
   excess window reduction causes it to take excessively long to
   recover the losses and exposes it to additional timeouts.

   PRR-SSRB increases the window by exactly 1 segment per ACK until
   pipe rises to ssthresh during recovery.  This is accomplished by
   setting limit to one segment greater than the data reported to have
   been delivered to the receiver on this ACK, implementing slow start
   during recovery, as indicated by the RB:d tagging in the figure.
   Although increasing the window during recovery seems to be ill-
   advised, it is important to remember that this is actually less
   aggressive than permitted by RFC 5681, which sends the same quantity
   of additional data as a single burst in response to the ACK that
   triggered Fast Retransmit.

   For less extreme events, where the total losses are smaller than the
   difference between FlightSize and ssthresh, PRR-CRB and PRR-SSRB
   have identical behaviors.

8.  Properties

   The following properties are common to both PRR-CRB and PRR-SSRB,
   except as noted:

   PRR maintains TCP's ACK clocking across most recovery events,
   including burst losses.  RFC 6675 can send large unclocked bursts
   following burst losses.

   Normally, PRR will spread voluntary window reductions out evenly
   across a full RTT.  This has the potential to generally reduce the
   burstiness of Internet traffic, and could be considered a type of
   soft pacing.  Hypothetically, any pacing increases the probability
   that different flows are interleaved, reducing the opportunity for
   ACK compression and other phenomena that increase traffic
   burstiness.  However, these effects have not been quantified.

   If there are minimal losses, PRR will converge to exactly the target
   window chosen by the congestion control algorithm.  Note that as TCP
   approaches the end of recovery, prr_delivered will approach
   RecoverFS and sndcnt will be computed such that prr_out approaches
   ssthresh.

   Implicit window reductions, due to multiple isolated losses during
   recovery, cause later voluntary reductions to be skipped.  For small
   numbers of losses, the window size ends at exactly the window chosen
   by the congestion control algorithm.

   For burst losses, earlier voluntary window reductions can be undone
   by sending extra segments in response to ACKs arriving later during
   recovery.  Note that as long as some voluntary window reductions are
   not undone, the final value for pipe will be the same as ssthresh,
   the target cwnd value chosen by the congestion control algorithm.

   PRR with either Reduction Bound improves the situation when there
   are application stalls, e.g., when the sending application does not
   queue data for transmission quickly enough or the receiver stops
   advancing rwnd (receiver window).  When there is an application
   stall early during recovery, prr_out will fall behind the sum of the
   transmissions permitted by sndcnt.  The missed opportunities to send
   due to stalls are treated like banked voluntary window reductions;
   specifically, they cause prr_delivered - prr_out to be significantly
   positive.  If the application catches up while TCP is still in
   recovery, TCP will send a partial window burst to catch up to
   exactly where it would have been had the application never stalled.
   Although this burst might be viewed as being hard on the network,
   this is exactly what happens every time there is a partial RTT
   application stall while not in recovery.  We have made the partial
   RTT stall behavior uniform in all states.  Changing this behavior is
   out of scope for this document.

   PRR with either Reduction Bound is less sensitive to errors in the
   pipe estimator.  While in recovery, pipe is intrinsically an
   estimator, using incomplete information to estimate whether un-
   SACKed segments are actually lost or merely out of order in the
   network.  Under some conditions, pipe can have significant errors;
   for example, pipe is underestimated when a burst of reordered data
   is prematurely assumed to be lost and marked for retransmission.  If
   the transmissions are regulated directly by pipe, as they are with
   RFC 6675, a step discontinuity in the pipe estimator causes a burst
   of data, which cannot be retracted once the pipe estimator is
   corrected a few ACKs later.  For PRR, pipe merely determines which
   algorithm, PRR or the Reduction Bound, is used to compute sndcnt
   from DeliveredData.  While pipe is underestimated, the algorithms
   differ by at most 1 segment per ACK.  Once pipe is updated, they
   converge to the same final window at the end of recovery.

   Under all conditions and sequences of events during recovery, PRR-
   CRB strictly bounds the data transmitted to be equal to or less than
   the amount of data delivered to the receiver.  We claim that this
   Strong Packet Conservation Bound is the most aggressive algorithm
   that does not lead to additional forced losses in some environments.
   It has the property that if there is a standing queue at a
   bottleneck with no cross traffic, the queue will maintain exactly
   constant length for the duration of the recovery, except for +1/-1
   fluctuation due to differences in packet arrival and exit times.
   See Appendix A for a detailed discussion of this property.

   Although the Strong Packet Conservation Bound is very appealing for
   a number of reasons, our earlier measurements [RFC 6937 section 6]
   demonstrate that it is less aggressive and does not perform as well
   as RFC 6675, which permits bursts of data when there are bursts of
   losses.  PRR-SSRB is a compromise that permits TCP to send 1 extra
   segment per ACK as compared to the Packet Conserving Bound.  From
   the perspective of a strict Packet Conserving Bound, PRR-SSRB does
   indeed open the window during recovery; however, it is significantly
   less aggressive than RFC 6675 in the presence of burst losses.

9.  Adapting PRR to other transport protocols

   The main PRR algorithm and Reduction Bounds can be adapted to any
   transport that can support RFC 6675.

   The safeACK heuristic can be generalized as any ACK of a
   retransmission that does not cause some other segment to be marked
   for retransmission.  That is, PRR-SSRB is safe on any ACK that
   reduces the total number of pending and outstanding retransmissions,
   as illustrated by the sketch below.
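
   The following C fragment illustrates one way to express this test.
   It is a sketch, not part of the specification: the counter names
   (pending_rtx for segments marked for retransmission but not yet
   resent, and outstanding_rtx for retransmissions in flight) are
   assumptions made for this example.

      /* Generalized safeACK: processing this ACK reduced the total
       * number of pending plus outstanding retransmissions, i.e., it
       * retired a retransmission without newly marking other segments
       * as lost. */
      struct rtx_accounting {
          unsigned int pending_rtx;     /* marked lost, not yet resent */
          unsigned int outstanding_rtx; /* retransmissions in flight */
      };

      int generalized_safe_ack(const struct rtx_accounting *before,
                               const struct rtx_accounting *after)
      {
          return (after->pending_rtx + after->outstanding_rtx) <
                 (before->pending_rtx + before->outstanding_rtx);
      }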

10.  Acknowledgements

   This document is based in part on previous incomplete work by Matt
   Mathis, Jeff Semke, and Jamshid Mahdavi [RHID] and influenced by
   several discussions with John Heffner.

   Monia Ghobadi and Sivasankar Radhakrishnan helped analyze the
   experiments.

   Ilpo Jarvinen reviewed the code.

   Mark Allman improved the document through his insightful review.

   Neal Cardwell reviewed and tested the patch.

11.  Security Considerations

   PRR does not change the risk profile for TCP.

   Implementers that change PRR from counting bytes to segments have to
   be cautious about the effects of ACK splitting attacks [Savage99],
   where the receiver acknowledges partial segments for the purpose of
   confusing the sender's congestion accounting.

12.  Normative References

   [RFC0793]  Postel, J., "Transmission Control Protocol", STD 7,
              RFC 793, DOI 10.17487/RFC0793, September 1981,
              <https://www.rfc-editor.org/info/rfc793>.

   [RFC2018]  Mathis, M., Mahdavi, J., Floyd, S., and A. Romanow, "TCP
              Selective Acknowledgment Options", RFC 2018,
              DOI 10.17487/RFC2018, October 1996,
              <https://www.rfc-editor.org/info/rfc2018>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, DOI 10.17487/RFC5681, September 2009,
              <https://www.rfc-editor.org/info/rfc5681>.

   [RFC6675]  Blanton, E., Allman, M., Wang, L., Jarvinen, I., Kojo,
              M., and Y. Nishida, "A Conservative Loss Recovery
              Algorithm Based on Selective Acknowledgment (SACK) for
              TCP", RFC 6675, DOI 10.17487/RFC6675, August 2012,
              <https://www.rfc-editor.org/info/rfc6675>.

13.  Informative References

   [CUBIC]    Rhee, I. and L. Xu, "CUBIC: A new TCP-friendly high-speed
              TCP variant", PFLDnet 2005, February 2005.

   [FACK]     Mathis, M. and J. Mahdavi, "Forward Acknowledgment:
              Refining TCP Congestion Control", Proceedings of ACM
              SIGCOMM '96, August 1996.

   [Flach2016policing]
              Flach, T., Papageorge, P., Terzis, A., Pedrosa, L.,
              Cheng, Y., Al Karim, T., Katz-Bassett, E., and R.
              Govindan, "An Internet-Wide Analysis of Traffic
              Policing", Proceedings of ACM SIGCOMM 2016, August 2016.

   [IMC11]    Dukkipati, N., Mathis, M., Cheng, Y., and M. Ghobadi,
              "Proportional Rate Reduction for TCP", Proceedings of the
              11th ACM SIGCOMM Conference on Internet Measurement,
              Berlin, Germany, November 2011.

   [Jacobson88]
              Jacobson, V., "Congestion Avoidance and Control", SIGCOMM
              Comput. Commun. Rev. 18(4), August 1988.

   [Laminar]  Mathis, M., "Laminar TCP and the case for refactoring TCP
              congestion control", Work in Progress, 16 July 2012.

   [RACK]     Cheng, Y., Cardwell, N., Dukkipati, N., and P. Jha, "The
              RACK-TLP Loss Detection Algorithm for TCP", RFC 8985,
              DOI 10.17487/RFC8985, February 2021,
              <https://www.rfc-editor.org/info/rfc8985>.

   [RFC3042]  Allman, M., Balakrishnan, H., and S. Floyd, "Enhancing
              TCP's Loss Recovery Using Limited Transmit", RFC 3042,
              DOI 10.17487/RFC3042, January 2001,
              <https://www.rfc-editor.org/info/rfc3042>.

   [RFC3517]  Blanton, E., Allman, M., Fall, K., and L. Wang, "A
              Conservative Selective Acknowledgment (SACK)-based Loss
              Recovery Algorithm for TCP", RFC 3517,
              DOI 10.17487/RFC3517, April 2003,
              <https://www.rfc-editor.org/info/rfc3517>.

   [RFC6937]  Mathis, M., Dukkipati, N., and Y. Cheng, "Proportional
              Rate Reduction for TCP", RFC 6937, DOI 10.17487/RFC6937,
              May 2013, <https://www.rfc-editor.org/info/rfc6937>.

   [RHID]     Mathis, M., Semke, J., and J. Mahdavi, "The Rate-Halving
              Algorithm for TCP Congestion Control", Work in Progress,
              August 1999.

   [RHweb]    Mathis, M. and J. Mahdavi, "TCP Rate-Halving with
              Bounding Parameters", Web publication, December 1997.

   [Savage99] Savage, S., Cardwell, N., Wetherall, D., and T. Anderson,
              "TCP congestion control with a misbehaving receiver",
              SIGCOMM Comput. Commun. Rev. 29(5), October 1999.

Appendix A.  Strong Packet Conservation Bound

   PRR-CRB is based on a conservative, philosophically pure, and
   aesthetically appealing Strong Packet Conservation Bound, described
   here.  Although inspired by the packet conservation principle
   [Jacobson88], it differs in how it treats segments that are missing
   and presumed lost.  Under all conditions and sequences of events
   during recovery, PRR-CRB strictly bounds the data transmitted to be
   equal to or less than the amount of data delivered to the receiver.
   Note that the effects of presumed losses are included in the pipe
   calculation, but do not affect the outcome of PRR-CRB, once pipe has
   fallen below ssthresh.

   We claim that this Strong Packet Conservation Bound is the most
   aggressive algorithm that does not lead to additional forced losses
   in some environments.  It has the property that if there is a
   standing queue at a bottleneck that is carrying no other traffic,
   the queue will maintain exactly constant length for the entire
   duration of the recovery, except for +1/-1 fluctuation due to
   differences in packet arrival and exit times.  Any less aggressive
   algorithm will result in a declining queue at the bottleneck.  Any
   more aggressive algorithm will result in an increasing queue or
   additional losses if it is a full drop-tail queue.

   We demonstrate this property with a little thought experiment:

   Imagine a network path that has insignificant delays in both
   directions, except for the processing time and queue at a single
   bottleneck in the forward path.  By insignificant delay, we mean
   that when a packet is "served" at the head of the bottleneck queue,
   the following events happen in much less than one bottleneck packet
   time: the packet arrives at the receiver; the receiver sends an ACK
   that arrives at the sender; the sender processes the ACK and sends
   some data; the data is queued at the bottleneck.
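
   The constant-queue conclusion below can also be checked numerically.
   The following toy simulation is an illustration only, with an
   assumed discrete-time model and constants; it is not part of this
   specification.

      #include <stdio.h>

      #define QUEUE_CAP 20 /* drop-tail bottleneck capacity (packets) */

      int main(void)
      {
          int queue = QUEUE_CAP;  /* standing queue, initially full */
          int losses = 0;

          for (int tick = 0; tick < 100; tick++) {
              int delivered = 1;  /* one packet served per tick */
              queue -= delivered;

              /* PRR-CRB: sndcnt = DeliveredData.  Try delivered + 1
               * to watch the drop-tail queue overflow instead. */
              int sndcnt = delivered;

              for (int i = 0; i < sndcnt; i++) {
                  if (queue < QUEUE_CAP)
                      queue++;    /* packet joins the queue */
                  else
                      losses++;   /* drop-tail overflow */
              }
          }
          /* With sndcnt == DeliveredData, the queue stays exactly
           * full and there are no losses. */
          printf("queue=%d losses=%d\n", queue, losses);
          return 0;
      }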

   If sndcnt is set to DeliveredData and nothing else is inhibiting
   sending data, then clearly the data arriving at the bottleneck queue
   will exactly replace the data that was served at the head of the
   queue, so the queue will have a constant length.  If the queue is
   drop-tail and full, then it will stay exactly full.  Losses or
   reordering on the ACK path only cause wider fluctuations in the
   queue size, but do not raise its peak size, independent of whether
   the data is in order or out of order (including loss recovery from
   an earlier RTT).  Any more aggressive algorithm that sends
   additional data will overflow the drop-tail queue and cause loss.
   Any less aggressive algorithm will under-fill the queue.  Therefore,
   setting sndcnt to DeliveredData is the most aggressive algorithm
   that does not cause forced losses in this simple network.  Relaxing
   the assumptions (e.g., making delays more realistic and adding more
   flows, delayed ACKs, etc.) is likely to increase the fine-grained
   fluctuations in queue size but does not change its basic behavior.

   Note that the congestion control algorithm implements a broader
   notion of optimal that includes appropriately sharing the network.
   Typical congestion control algorithms are likely to reduce the data
   sent relative to the Packet Conserving Bound implemented by PRR,
   bringing TCP's actual window down to ssthresh.

Authors' Addresses

   Matt Mathis
   Google, Inc.
   1600 Amphitheatre Parkway
   Mountain View, California 94043
   United States of America

   Email: mattmathis@google.com

   Nandita Dukkipati
   Google, Inc.
   1600 Amphitheatre Parkway
   Mountain View, California 94043
   United States of America

   Email: nanditad@google.com

   Yuchung Cheng
   Google, Inc.
   1600 Amphitheatre Parkway
   Mountain View, California 94043
   United States of America

   Email: ycheng@google.com