TCP Maintenance Working Group                                  M. Mathis
Internet-Draft                                              N. Dukkipati
Intended status: Experimental                                   Y. Cheng
Expires: August 10, 2013                                     Google, Inc
                                                             Feb 6, 2013

                  Proportional Rate Reduction for TCP
           draft-ietf-tcpm-proportional-rate-reduction-04.txt

Abstract

   This document describes an experimental algorithm, Proportional Rate
   Reduction (PRR), to improve the accuracy of the amount of data sent
   by TCP during loss recovery.  Standard Congestion Control requires
   that TCP and other protocols reduce their congestion window in
   response to losses.  This window reduction naturally occurs in the
   same round trip as the data retransmissions to repair the losses,
   and is implemented by choosing not to transmit any data in response
   to some ACKs arriving from the receiver.  Two widely deployed
   algorithms are used to implement this window reduction: Fast
   Recovery and Rate Halving.  Both algorithms are needlessly fragile
   under a number of conditions, particularly when there is a burst of
   losses such that the number of ACKs returning to the sender is
   small.  Proportional Rate Reduction minimizes these excess window
   adjustments such that at the end of recovery the actual window size
   will be as close as possible to ssthresh, the window size determined
   by the congestion control algorithm.  It is patterned after Rate
   Halving, but uses the fraction that is appropriate for the target
   window chosen by the congestion control algorithm.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.
   It is inappropriate to use Internet-Drafts as reference material or
   to cite them other than as "work in progress."

   This Internet-Draft will expire on August 10, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Definitions
   3.  Algorithms
     3.1.  Examples
   4.  Properties
   5.  Measurements
   6.  Conclusion and Recommendations
   7.  Acknowledgements
   8.  Security Considerations
   9.  IANA Considerations
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Appendix A.  Strong Packet Conservation Bound
   Authors' Addresses

1.  Introduction

   This document describes an experimental algorithm, Proportional Rate
   Reduction (PRR), to improve the accuracy of the amount of data sent
   by TCP during loss recovery.

   Standard Congestion Control [RFC5681] requires that TCP (and other
   protocols) reduce their congestion window in response to losses.
   Fast Recovery, described in the same document, is the reference
   algorithm for making this adjustment.  Its stated goal is to recover
   TCP's self clock by relying on returning ACKs during recovery to
   clock more data into the network.  Fast Recovery typically adjusts
   the window by waiting for one half RTT of ACKs to pass before
   sending any data.  It is fragile because it cannot compensate for
   the implicit window reduction caused by the losses themselves.

   RFC 6675 [RFC6675] makes Fast Recovery with SACK [RFC2018] more
   accurate by computing "pipe", a sender side estimate of the number
   of bytes still outstanding in the network.  With RFC 6675, Fast
   Recovery is implemented by sending data as necessary on each ACK to
   prevent pipe from falling below ssthresh, the window size as
   determined by the congestion control algorithm.  This protects Fast
   Recovery from timeouts in many cases where there are heavy losses,
   although not if the entire second half of the window of data or ACKs
   are lost.  However, a single ACK carrying a SACK option that implies
   a large quantity of missing data can cause a step discontinuity in
   the pipe estimator, which can cause Fast Retransmit to send a burst
   of data.
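   As a non-normative illustration of the rule just described (a
   minimal sketch in C that omits all of the SACK scoreboard
   processing; the function name is ours, not RFC 6675's), the amount a
   pipe-regulated sender may transmit in response to a single ACK is
   whatever is needed to keep pipe from falling below ssthresh:

   /* Illustrative sketch only: how much data (in bytes or segments)
    * a sender regulated directly by pipe, in the style of RFC 6675,
    * may send after processing one ACK. */
   long pipe_regulated_sendable(long pipe, long ssthresh)
   {
       return (pipe < ssthresh) ? (ssthresh - pipe) : 0;
   }

   Note how a single ACK whose SACK information drops pipe far below
   ssthresh makes this quantity large, producing exactly the burst of
   data described above.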
   The rate-halving algorithm sends data on alternate ACKs during
   recovery, such that after one RTT the window has been halved.  Rate-
   Halving is implemented in Linux, although it was only informally
   published [RHweb], including as an uncompleted Internet-Draft
   [RHID].  Rate-Halving also does not adequately compensate for the
   implicit window reduction caused by the losses, and it assumes a net
   50% window reduction, which was completely standard at the time it
   was written but is not appropriate for modern congestion control
   algorithms, such as CUBIC [CUBIC], which reduce the window by less
   than 50%.  As a consequence, Rate-Halving often allows the window to
   fall further than necessary, reducing performance and increasing the
   risk of timeouts if there are additional losses.

   Proportional Rate Reduction (PRR) avoids these excess window
   adjustments such that at the end of recovery the actual window size
   will be as close as possible to ssthresh, the window size determined
   by the congestion control algorithm.  It is patterned after Rate
   Halving, but uses the fraction that is appropriate for the target
   window chosen by the congestion control algorithm.  During PRR, one
   of two additional reduction bound algorithms limits the total window
   reduction due to all mechanisms, including transient application
   stalls and the losses themselves.

   We describe two slightly different reduction bound algorithms: a
   conservative reduction bound (CRB), which is strictly packet
   conserving, and a slow start reduction bound (SSRB), which is more
   aggressive than CRB by at most one segment per ACK.  PRR-CRB meets
   the Strong Packet Conservation Bound described in Appendix A;
   however, in real networks it does not perform as well as the
   algorithms described in RFC 6675, which prove to be more aggressive
   in a significant number of cases.  SSRB offers a compromise by
   allowing TCP to send one additional segment per ACK relative to CRB
   in some situations.  Although SSRB is less aggressive than RFC 6675
   (transmitting fewer segments or taking more time to transmit them),
   it outperforms RFC 6675, due to the lower probability of additional
   losses during recovery.

   The Strong Packet Conservation Bound on which PRR and both reduction
   bounds are based is patterned after Van Jacobson's packet
   conservation principle: segments delivered to the receiver are used
   as the clock to trigger sending the same number of segments back
   into the network.  As much as possible, Proportional Rate Reduction
   and the reduction bound algorithms rely on this self clock process,
   and are only slightly affected by the accuracy of other estimators,
   such as pipe [RFC6675] and cwnd.  This is what gives the algorithms
   their precision in the presence of events that cause uncertainty in
   other estimators.

   The original definition of the packet conservation principle
   [Jacobson88] treated packets that are presumed to be lost (e.g.,
   marked as candidates for retransmission) as having left the network.
   This idea is reflected in the pipe estimator defined in RFC 6675 and
   used here, but it is distinct from the Strong Packet Conservation
   Bound described in Appendix A, which is defined solely on the basis
   of data arriving at the receiver.

   We evaluated these and other algorithms in a large scale measurement
   study presented in a companion paper [IMC11] and summarized in
   Section 5.
   This measurement study was based on RFC 3517 [RFC3517], which has
   since been superseded by RFC 6675.  Since there are slight
   differences between the two specifications, and we were meticulous
   about our implementation of RFC 3517, we are not comfortable
   unconditionally asserting that our measurement results apply to RFC
   6675, although we believe this to be the case.  We have instead
   chosen to be pedantic about describing measurement results relative
   to RFC 3517, on which they were actually based.  General discussions
   of the algorithms and their properties have been updated to refer to
   RFC 6675.

   We found that for authentic network traffic, PRR+SSRB outperforms
   both RFC 3517 and Linux Rate Halving even though it is less
   aggressive than RFC 3517.  We believe that these results apply to
   RFC 6675 as well.

   The algorithms are described as modifications to RFC 5681 [RFC5681],
   TCP Congestion Control, using concepts drawn from the pipe algorithm
   [RFC6675].  They are most accurate and most easily implemented with
   SACK [RFC2018], but do not require SACK.

2.  Definitions

   The following terms, parameters, and state variables are used as
   they are defined in earlier documents:

   RFC 793: snd.una

   RFC 5681: duplicate ACK, FlightSize, Sender Maximum Segment Size
   (SMSS)

   RFC 6675: covered (as in "covered sequence numbers")

   Voluntary window reductions: choosing not to send data in response
   to some ACKs, for the purpose of reducing the sending window size
   and data rate.

   We define some additional variables:

   SACKd: The total number of bytes that the scoreboard indicates have
   been delivered to the receiver.  This can be computed by scanning
   the scoreboard and counting the total number of bytes covered by all
   SACK blocks.  If SACK is not in use, SACKd is not defined.

   DeliveredData: The total number of bytes that the current ACK
   indicates have been delivered to the receiver.  When not in
   recovery, DeliveredData is the change in snd.una.  With SACK,
   DeliveredData can be computed precisely as the change in snd.una
   plus the (signed) change in SACKd.  In recovery without SACK,
   DeliveredData is estimated to be 1 SMSS on duplicate
   acknowledgements, and on a subsequent partial or full ACK,
   DeliveredData is estimated to be the change in snd.una, minus one
   SMSS for each preceding duplicate ACK.

   Note that DeliveredData is robust: for TCP using SACK, DeliveredData
   can be precisely computed anywhere in the network just by inspecting
   the returning ACKs.  The consequence of missing ACKs is that later
   ACKs will show a larger DeliveredData.  Furthermore, for any TCP
   (with or without SACK), the sum of DeliveredData must agree with the
   forward progress over the same time interval.

   We introduce a local variable "sndcnt", which indicates exactly how
   many bytes should be sent in response to each ACK.  Note that the
   decision of which data to send (e.g., retransmit missing data or
   send more new data) is out of scope for this document.
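   As a non-normative illustration of the SACK case of the
   DeliveredData definition (the function and variable names below are
   our own, and sequence number wraparound handling is omitted), the
   per-ACK value is simply the change in snd.una plus the signed change
   in SACKd:

   /* Minimal sketch in C, assumed names: DeliveredData for a SACK
    * connection.  When a cumulative ACK newly covers previously
    * SACKed data, the change in SACKd is negative, so the two terms
    * together count each delivered byte exactly once. */
   long delivered_data(long prev_snd_una, long snd_una,
                       long prev_sackd, long sackd)
   {
       long change_in_una   = snd_una - prev_snd_una;
       long change_in_sackd = sackd - prev_sackd;  /* may be negative */
       return change_in_una + change_in_sackd;
   }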
3.  Algorithms

   At the beginning of recovery, initialize the PRR state.  This
   assumes a modern congestion control algorithm, CongCtrlAlg(), that
   might set ssthresh to something other than FlightSize/2:

      ssthresh = CongCtrlAlg()  // Target cwnd after recovery
      prr_delivered = 0         // Total bytes delivered during recovery
      prr_out = 0               // Total bytes sent during recovery
      RecoverFS = snd.nxt - snd.una // FlightSize at the start of recovery

   On every ACK during recovery compute:

      DeliveredData = change_in(snd.una) + change_in(SACKd)
      prr_delivered += DeliveredData
      pipe = (RFC 6675 pipe algorithm)
      if (pipe > ssthresh) {
          // Proportional Rate Reduction
          sndcnt = CEIL(prr_delivered * ssthresh / RecoverFS) - prr_out
      } else {
          // Two versions of the reduction bound
          if (conservative) {   // PRR+CRB
              limit = prr_delivered - prr_out
          } else {              // PRR+SSRB
              limit = MAX(prr_delivered - prr_out, DeliveredData) + MSS
          }
          // Attempt to catch up, as permitted by limit
          sndcnt = MIN(ssthresh - pipe, limit)
      }

   On any data transmission or retransmission:

      prr_out += (data sent)   // strictly less than or equal to sndcnt
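   The following is a direct, illustrative transcription of the per-ACK
   computation above into C.  It is a sketch, not a normative
   implementation: the struct, the helper macros, and all names are our
   own assumptions, and the scoreboard and transmission machinery is
   not shown.

   #define DIV_ROUND_UP(x, y) (((x) + (y) - 1) / (y))   /* CEIL */
   #define MAX(a, b)          ((a) > (b) ? (a) : (b))
   #define MIN(a, b)          ((a) < (b) ? (a) : (b))

   struct prr {
       long ssthresh;       /* target cwnd after recovery            */
       long prr_delivered;  /* total bytes delivered during recovery */
       long prr_out;        /* total bytes sent during recovery      */
       long recover_fs;     /* FlightSize at the start of recovery   */
   };

   /* Returns sndcnt: how many bytes may be sent for this ACK.  A
    * real implementation would also clamp the result at zero. */
   long prr_on_ack(struct prr *p, long delivered_data, long pipe,
                   long mss, int conservative)
   {
       long sndcnt, limit;

       p->prr_delivered += delivered_data;
       if (pipe > p->ssthresh) {
           /* Proportional Rate Reduction */
           sndcnt = DIV_ROUND_UP(p->prr_delivered * p->ssthresh,
                                 p->recover_fs) - p->prr_out;
       } else {
           if (conservative)            /* PRR+CRB */
               limit = p->prr_delivered - p->prr_out;
           else                         /* PRR+SSRB */
               limit = MAX(p->prr_delivered - p->prr_out,
                           delivered_data) + mss;
           /* Attempt to catch up, as permitted by limit */
           sndcnt = MIN(p->ssthresh - pipe, limit);
       }
       return sndcnt;
   }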
3.1.  Examples

   We illustrate these algorithms by showing their different behaviors
   for two scenarios: TCP experiencing either a single loss or a burst
   of 15 consecutive losses.  In all cases we assume bulk data (no
   application pauses), standard AIMD congestion control, and cwnd =
   FlightSize = pipe = 20 segments, so ssthresh will be set to 10 at
   the beginning of recovery.  We also assume standard Fast Retransmit
   and Limited Transmit [RFC3042], so TCP will send two new segments
   followed by one retransmit in response to the first 3 duplicate ACKs
   following the losses.

   Each of the diagrams below shows the per ACK response to the first
   round trip for the various recovery algorithms when the zeroth
   segment is lost.  The top line indicates the transmitted segment
   number triggering the ACKs, with an X for the lost segment.  "cwnd"
   and "pipe" indicate the values of these state variables after
   processing each returning ACK.  "Sent" indicates how much 'N'ew or
   'R'etransmitted data would be sent.  Note that the algorithms for
   deciding which data to send are out of scope of this document.

   When there is a single loss, PRR with either of the reduction bound
   algorithms has the same behavior.  We show "RB", a flag indicating
   which reduction bound subexpression ultimately determined the value
   of sndcnt.  When losses are minimal, "limit" (under both algorithms)
   will always be larger than ssthresh - pipe, so sndcnt will be
   ssthresh - pipe, indicated by "s" in the "RB" row.

   RFC 6675
   ack#   X  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
   cwnd:     20 20 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
   pipe:     19 19 18 18 17 16 15 14 13 12 11 10 10 10 10 10 10 10 10
   sent:      N  N  R                          N  N  N  N  N  N  N  N

   Rate Halving (Linux)
   ack#   X  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
   cwnd:     20 20 19 18 18 17 17 16 16 15 15 14 14 13 13 12 12 11 11
   pipe:     19 19 18 18 17 17 16 16 15 15 14 14 13 13 12 12 11 11 10
   sent:      N  N  R     N     N     N     N     N     N     N     N

   PRR
   ack#   X  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
   pipe:     19 19 18 18 18 17 17 16 16 15 15 14 14 13 13 12 12 11 10
   sent:      N  N  R     N     N     N     N     N     N     N     N
   RB:                                                          s  s

   Cwnd is not shown because PRR does not use it.

   Key for RB
   s: sndcnt = ssthresh - pipe                 // from ssthresh
   b: sndcnt = prr_delivered - prr_out + SMSS  // from banked
   d: sndcnt = DeliveredData + SMSS            // from DeliveredData
   (Sometimes more than one applies.)

   Note that all three algorithms send the same total amount of data.
   RFC 6675 experiences a "half-window of silence", while Rate Halving
   and PRR spread the voluntary window reduction across an entire RTT.

   Next we consider the same initial conditions when the first 15
   packets (0-14) are lost.  During the remainder of the lossy RTT,
   only 5 ACKs are returned to the sender.  We examine each of these
   algorithms in succession.

   RFC 6675
   ack#   X  X  X  X  X  X  X  X  X  X  X  X  X  X  X 15 16 17 18 19
   cwnd:                                              20 20 11 11 11
   pipe:                                              19 19  4 10 10
   sent:                                               N  N 7R  R  R

   Rate Halving (Linux)
   ack#   X  X  X  X  X  X  X  X  X  X  X  X  X  X  X 15 16 17 18 19
   cwnd:                                              20 20  5  5  5
   pipe:                                              19 19  4  4  4
   sent:                                               N  N  R  R  R

   PRR-CRB
   ack#   X  X  X  X  X  X  X  X  X  X  X  X  X  X  X 15 16 17 18 19
   pipe:                                              19 19  4  4  4
   sent:                                               N  N  R  R  R
   RB:                                                       b  b  b

   PRR-SSRB
   ack#   X  X  X  X  X  X  X  X  X  X  X  X  X  X  X 15 16 17 18 19
   pipe:                                              19 19  4  5  6
   sent:                                               N  N 2R 2R 2R
   RB:                                                      bd  d  d

   In this specific situation, RFC 6675 is more aggressive, because
   once Fast Retransmit is triggered (on the ACK for segment 17), TCP
   immediately retransmits sufficient data to bring pipe up to cwnd.
   Our measurement data (see Section 5) indicates that RFC 6675
   significantly outperforms Rate Halving, PRR-CRB, and some other
   similarly conservative algorithms that we tested, showing that it is
   common for the actual losses to exceed the window reduction
   determined by the congestion control algorithm.

   The Linux implementation of Rate Halving includes an early version
   of the conservative reduction bound [RHweb].  In this situation the
   five ACKs trigger exactly one transmission each (2 new data, 3 old
   data), and cwnd is set to 5.  At a window size of 5, it takes three
   round trips to retransmit all 15 lost segments.  Rate Halving does
   not raise the window at all during recovery, so when recovery
   finally completes, TCP will slow start cwnd from 5 up to 10.  In
   this example, TCP operates at half of the window chosen by the
   congestion control for more than three RTTs, increasing the elapsed
   time and exposing it to timeouts in the event that there are
   additional losses.

   PRR-CRB implements a conservative reduction bound.  Since the total
   losses bring pipe below ssthresh, data is sent such that the total
   data transmitted, prr_out, follows the total data delivered to the
   receiver as reported by returning ACKs.  Transmission is controlled
   by the sending limit, which was set to prr_delivered - prr_out.
   This is indicated by the RB:b tagging in the figure.  In this case,
   PRR-CRB is exposed to exactly the same problems as Rate Halving: the
   excess window reduction causes it to take excessively long to
   recover the losses and exposes it to additional timeouts.

   PRR-SSRB increases the window by exactly 1 segment per ACK until
   pipe rises to ssthresh during recovery.  This is accomplished by
   setting limit to one segment greater than the data reported to have
   been delivered to the receiver on this ACK, implementing slow start
   during recovery, as indicated by the RB:d tagging in the figure.
   Although increasing the window during recovery seems to be ill
   advised, it is important to remember that this is actually less
   aggressive than permitted by RFC 5681, which sends the same quantity
   of additional data as a single burst in response to the ACK that
   triggered Fast Retransmit.

   For less extreme events, where the total losses are smaller than the
   difference between FlightSize and ssthresh, PRR-CRB and PRR-SSRB
   have identical behaviors.
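   To make the PRR-SSRB row in the burst example above concrete, here
   is the arithmetic for the ACK that triggers Fast Retransmit (the ACK
   for segment 17), with all quantities in segments so that MSS = 1.
   At that point prr_delivered = 1, prr_out = 0, pipe = 4, and
   ssthresh = 10, so:

      limit  = MAX(prr_delivered - prr_out, DeliveredData) + MSS
             = MAX(1 - 0, 1) + 1 = 2
      sndcnt = MIN(ssthresh - pipe, limit) = MIN(10 - 4, 2) = 2

   yielding the first "2R" entry.  The same computation on the next two
   ACKs also yields 2, which is why pipe climbs by one segment per ACK
   (4, 5, 6) instead of remaining pinned at 4 as under PRR-CRB.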
4.  Properties

   The following properties are common to both PRR-CRB and PRR-SSRB,
   except as noted:

   Proportional Rate Reduction maintains TCP's ACK clocking across most
   recovery events, including burst losses.  RFC 6675 can send large
   unclocked bursts following burst losses.

   Normally Proportional Rate Reduction will spread voluntary window
   reductions out evenly across a full RTT.  This has the potential to
   generally reduce the burstiness of Internet traffic, and could be
   considered a type of soft pacing.  Hypothetically, any pacing
   increases the probability that different flows are interleaved,
   reducing the opportunity for ACK compression and other phenomena
   that increase traffic burstiness.  However, these effects have not
   been quantified.

   If there are minimal losses, Proportional Rate Reduction will
   converge to exactly the target window chosen by the congestion
   control algorithm.  Note that as TCP approaches the end of recovery,
   prr_delivered approaches RecoverFS, and sndcnt is computed such that
   prr_out approaches ssthresh (when prr_delivered reaches RecoverFS,
   the total transmission permitted so far is CEIL(RecoverFS * ssthresh
   / RecoverFS) = ssthresh).

   Implicit window reductions, due to multiple isolated losses during
   recovery, cause later voluntary reductions to be skipped.  For small
   numbers of losses, the window size ends at exactly the window chosen
   by the congestion control algorithm.

   For burst losses, earlier voluntary window reductions can be undone
   by sending extra segments in response to ACKs arriving later during
   recovery.  Note that as long as some voluntary window reductions are
   not undone, the final value for pipe will be the same as ssthresh,
   the target cwnd value chosen by the congestion control algorithm.

   Proportional Rate Reduction with either reduction bound improves the
   situation when there are application stalls, e.g., when the sending
   application does not queue data for transmission quickly enough or
   the receiver stops advancing rwnd.  When there is an application
   stall early during recovery, prr_out will fall behind the sum of the
   transmissions permitted by sndcnt.  The missed opportunities to send
   due to stalls are treated like banked voluntary window reductions:
   specifically, they cause prr_delivered - prr_out to be significantly
   positive.  If the application catches up while TCP is still in
   recovery, TCP will send a partial window burst to catch up to
   exactly where it would have been had the application never stalled.
   Although this burst might be viewed as being hard on the network,
   this is exactly what happens every time there is a partial RTT
   application stall while not in recovery.  We have made the partial
   RTT stall behavior uniform in all states.  Changing this behavior is
   out of scope for this document.
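   A small numeric illustration of this banked catch-up (the values are
   our own, chosen only for this sketch): suppose ssthresh = 10
   segments, RecoverFS = 20, pipe remains above ssthresh, and the
   application stalls after prr_out = 3 while ACKs continue to arrive.
   Once prr_delivered reaches 12, the main PRR computation permits

      sndcnt = CEIL(12 * 10 / 20) - 3 = 6 - 3 = 3

   so when the application catches up, it may immediately send 3
   segments, placing it exactly where it would have been had it never
   stalled.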
   Proportional Rate Reduction with either reduction bound is less
   sensitive to errors in the pipe estimator.  While in recovery, pipe
   is intrinsically an estimator, using incomplete information to
   estimate if un-SACKed segments are actually lost or merely out-of-
   order in the network.  Under some conditions pipe can have
   significant errors; for example, pipe is underestimated when a burst
   of reordered data is prematurely assumed to be lost and marked for
   retransmission.  If the transmissions are regulated directly by pipe
   as they are with RFC 6675, such a step discontinuity in the pipe
   estimator causes a burst of data, which cannot be retracted once the
   pipe estimator is corrected a few ACKs later.  For PRR, pipe merely
   determines which algorithm, Proportional Rate Reduction or the
   reduction bound, is used to compute sndcnt from DeliveredData.
   While pipe is underestimated, the algorithms differ by at most one
   segment per ACK.  Once pipe is updated, they converge to the same
   final window at the end of recovery.

   Under all conditions and sequences of events during recovery, PRR-
   CRB strictly bounds the data transmitted to be equal to or less than
   the amount of data delivered to the receiver.  We claim that this
   Strong Packet Conservation Bound is the most aggressive algorithm
   that does not lead to additional forced losses in some environments.
   It has the property that if there is a standing queue at a
   bottleneck with no cross traffic, the queue will maintain exactly
   constant length for the duration of the recovery, except for +1/-1
   fluctuation due to differences in packet arrival and exit times.
   See Appendix A for a detailed discussion of this property.

   Although the Strong Packet Conservation Bound is very appealing for
   a number of reasons, our measurements summarized in Section 5
   demonstrate that it is less aggressive and does not perform as well
   as RFC 6675, which permits large bursts of data when there are
   bursts of losses.  PRR-SSRB is a compromise that permits TCP to send
   one extra segment per ACK as compared to the packet conservation
   bound.  From the perspective of a strict packet conservation bound,
   PRR-SSRB does indeed open the window during recovery; however, it is
   significantly less aggressive than RFC 6675 in the presence of burst
   losses.

5.  Measurements

   In a companion IMC paper [IMC11], we describe some measurements
   comparing the various strategies for reducing the window during
   recovery.  The experiments were performed on servers carrying Google
   production traffic and are briefly summarized here.

   The various window reduction algorithms and extensive
   instrumentation were all implemented in Linux 2.6.  We used the
   uniform set of algorithms present in the base Linux implementation,
   including CUBIC [CUBIC], limited transmit [RFC3042], threshold
   transmit from [FACK] (this algorithm was not present in RFC 3517,
   but a similar algorithm has been added to RFC 6675), and lost
   retransmission detection algorithms.  We confirmed that the
   behaviors of Rate Halving (the Linux default), RFC 3517, and PRR
   were authentic to their respective specifications and that
   performance and features were comparable to the kernels in
   production use.  All of the different window reduction algorithms
   were present in a common kernel and could be selected with a sysctl,
   such that we had an absolutely uniform baseline for comparing them.
   Our experiments included an additional algorithm, PRR with an
   unlimited bound (PRR-UB), which sends ssthresh - pipe bursts when
   pipe falls below ssthresh.  This behavior parallels RFC 3517.

   An important detail of this configuration is that CUBIC only reduces
   the window by 30%, as opposed to the 50% reduction used by
   traditional congestion control algorithms.  This accentuates the
   tendency for RFC 3517 and PRR-UB to send a burst at the point when
   Fast Retransmit gets triggered, because pipe is likely to already be
   below ssthresh.  Precisely this condition was observed for 32% of
   the recovery events: pipe fell below ssthresh before Fast Retransmit
   was triggered; thus, the various PRR algorithms started in the
   reduction bound phase, and RFC 3517 sent bursts of segments with the
   fast retransmit.

   In the companion paper we observe that PRR-SSRB spends the least
   time in recovery of all the algorithms tested, largely because it
   experiences fewer timeouts once it is already in recovery.

   RFC 3517 experiences 29% more detected lost retransmissions and 2.6%
   more timeouts (presumably due to undetected lost retransmissions)
   than PRR-SSRB.  These results are representative of PRR-UB and other
   algorithms that send bursts when pipe falls below ssthresh.

   Rate Halving experiences 5% more timeouts and significantly smaller
   final cwnd values at the end of recovery.  The smaller cwnd
   sometimes causes the recovery itself to take extra round trips.
   These results are representative of PRR-CRB and other algorithms
   that implement strict packet conservation during recovery.

6.  Conclusion and Recommendations

   Although the Strong Packet Conservation Bound used in PRR-CRB is
   very appealing for a number of reasons, our measurements show that
   it is less aggressive and does not perform as well as RFC 3517 (and,
   by implication, RFC 6675), which permit bursts of data when there
   are bursts of losses.  RFC 3517 and RFC 6675 are conservative in the
   original sense of Van Jacobson's packet conservation principle,
   which included the assumption that presumed lost segments have
   indeed left the network.  PRR-CRB makes no such assumption,
   following instead the Strong Packet Conservation Bound, in which
   only packets that have actually arrived at the receiver are
   considered to have left the network.  PRR-SSRB is a compromise that
   permits TCP to send one extra segment per ACK relative to the Strong
   Packet Conservation Bound, to partially compensate for excess
   losses.

   From the perspective of the Strong Packet Conservation Bound, PRR-
   SSRB does indeed open the window during recovery; however, it is
   significantly less aggressive than RFC 3517 (and RFC 6675) in the
   presence of burst losses.  Even so, it often outperforms RFC 3517
   (and presumably RFC 6675), because it avoids some of the self-
   inflicted losses caused by bursts.

   At this time we see no reason not to test and deploy PRR-SSRB on a
   large scale.  Implementers worried about any potential impact of
   raising the window during recovery may want to optionally support
   PRR-CRB (which is actually simpler to implement) for comparison
   studies.  Furthermore, there is one minor detail of PRR that can be
   improved by replacing pipe with total_pipe, as defined by Laminar
   TCP [Laminar].

   One final comment about terminology: we expect that common usage
   will drop "slow start reduction bound" from the algorithm name.
   This document needed to be pedantic about having distinct names for
   Proportional Rate Reduction and every variant of the reduction
   bound.  However, we do not anticipate any future exploration of the
   alternative reduction bounds.

7.  Acknowledgements

   This draft is based in part on previous incomplete work by Matt
   Mathis, Jeff Semke, and Jamshid Mahdavi [RHID] and influenced by
   several discussions with John Heffner.

   Monia Ghobadi and Sivasankar Radhakrishnan helped analyze the
   experiments.

   Ilpo Jarvinen reviewed the code.

   Mark Allman improved the document through his insightful review.

8.  Security Considerations

   Proportional Rate Reduction does not change the risk profile for
   TCP.

   Implementers that change PRR from counting bytes to segments have to
   be cautious about the effects of ACK splitting attacks [Savage99],
   where the receiver acknowledges partial segments for the purpose of
   confusing the sender's congestion accounting.

9.  IANA Considerations

   This document makes no request of IANA.

   Note to RFC Editor: this section may be removed on publication as an
   RFC.

10.  References

10.1.  Normative References

   [RFC2018]  Mathis, M., Mahdavi, J., Floyd, S., and A. Romanow, "TCP
              Selective Acknowledgment Options", RFC 2018,
              October 1996.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, September 2009.

   [RFC6675]  Blanton, E., Allman, M., Wang, L., Jarvinen, I.,
              Kojo, M., and Y. Nishida, "A Conservative Loss Recovery
              Algorithm Based on Selective Acknowledgment (SACK) for
              TCP", RFC 6675, August 2012.

10.2.  Informative References

   [RFC3042]  Allman, M., Balakrishnan, H., and S. Floyd, "Enhancing
              TCP's Loss Recovery Using Limited Transmit", RFC 3042,
              January 2001.

   [RFC3517]  Blanton, E., Allman, M., Fall, K., and L. Wang, "A
              Conservative Selective Acknowledgment (SACK)-based Loss
              Recovery Algorithm for TCP", RFC 3517, April 2003.

   [IMC11]    Dukkipati, N., Mathis, M., and Y. Cheng, "Proportional
              Rate Reduction for TCP", ACM Internet Measurement
              Conference (IMC 2011), December 2011.

   [FACK]     Mathis, M. and J. Mahdavi, "Forward Acknowledgment:
              Refining TCP Congestion Control", ACM SIGCOMM 1996,
              August 1996.

   [RHID]     Mathis, M., Semke, J., Mahdavi, J., and K. Lahey, "The
              Rate-Halving Algorithm for TCP Congestion Control",
              draft-mathis-tcp-ratehalving (work in progress),
              June 1999.

   [RHweb]    Mathis, M. and J. Mahdavi, "TCP Rate-Halving with
              Bounding Parameters", Web publication, December 1997.

   [CUBIC]    Rhee, I. and L. Xu, "CUBIC: A new TCP-friendly high-speed
              TCP variant", PFLDnet 2005, February 2005.

   [Jacobson88]
              Jacobson, V., "Congestion Avoidance and Control", SIGCOMM
              Comput. Commun. Rev. 18(4), August 1988.

   [Savage99] Savage, S., Cardwell, N., Wetherall, D., and T. Anderson,
              "TCP congestion control with a misbehaving receiver",
              SIGCOMM Comput. Commun. Rev. 29(5), October 1999.

   [Laminar]  Mathis, M., "Laminar TCP and the case for refactoring TCP
              congestion control", draft-mathis-tcpm-tcp-laminar-01
              (work in progress), July 2012.

Appendix A.  Strong Packet Conservation Bound

   PRR-CRB is based on a conservative, philosophically pure, and
   aesthetically appealing Strong Packet Conservation Bound, described
   here.
   Although inspired by Van Jacobson's packet conservation principle
   [Jacobson88], it differs in how it treats segments that are missing
   and presumed lost.  Under all conditions and sequences of events
   during recovery, PRR-CRB strictly bounds the data transmitted to be
   equal to or less than the amount of data delivered to the receiver.
   Note that the effects of presumed losses are included in the pipe
   calculation, but do not affect the outcome of PRR-CRB, once pipe has
   fallen below ssthresh.

   We claim that this Strong Packet Conservation Bound is the most
   aggressive algorithm that does not lead to additional forced losses
   in some environments.  It has the property that if there is a
   standing queue at a bottleneck that is carrying no other traffic,
   the queue will maintain exactly constant length for the entire
   duration of the recovery, except for +1/-1 fluctuation due to
   differences in packet arrival and exit times.  Any less aggressive
   algorithm will result in a declining queue at the bottleneck.  Any
   more aggressive algorithm will result in an increasing queue or
   additional losses if it is a full drop-tail queue.

   We demonstrate this property with a little thought experiment:

   Imagine a network path that has insignificant delays in both
   directions, except for the processing time and queue at a single
   bottleneck in the forward path.  By insignificant delay, we mean
   that when a packet is "served" at the head of the bottleneck queue,
   the following events happen in much less than one bottleneck packet
   time: the packet arrives at the receiver; the receiver sends an ACK,
   which arrives at the sender; the sender processes the ACK and sends
   some data; the data is queued at the bottleneck.

   If sndcnt is set to DeliveredData and nothing else is inhibiting
   sending data, then clearly the data arriving at the bottleneck queue
   will exactly replace the data that was served at the head of the
   queue, so the queue will have a constant length.  If the queue is
   drop tail and full, then it will stay exactly full.  Losses or
   reordering on the ACK path only cause wider fluctuations in the
   queue size, but do not raise its peak size, independent of whether
   the data is in order or out-of-order (including loss recovery from
   an earlier RTT).  Any more aggressive algorithm that sends
   additional data will overflow the drop-tail queue and cause loss.
   Any less aggressive algorithm will underfill the queue.  Therefore,
   setting sndcnt to DeliveredData is the most aggressive algorithm
   that does not cause forced losses in this simple network.  Relaxing
   the assumptions (e.g., making delays more authentic, adding more
   flows, delayed ACKs, etc.) is likely to increase the fine grained
   fluctuations in queue size but does not change its basic behavior.

   Note that the congestion control algorithm implements a broader
   notion of optimal that includes appropriately sharing the network.
   Typical congestion control algorithms are likely to reduce the data
   sent relative to the packet conservation bound implemented by PRR,
   bringing TCP's actual window down to ssthresh.
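   The thought experiment can be rendered as a toy program.  The
   following sketch in C is entirely illustrative (the model, names,
   and constants are our own, not part of any specification): one
   packet is served per tick, its ACK returns with negligible delay,
   and the sender transmits sndcnt = DeliveredData, so the queue length
   never changes.

   #include <assert.h>
   #include <stdio.h>

   int main(void)
   {
       int queue = 10;                   /* standing queue, in packets */

       for (int tick = 0; tick < 1000; tick++) {
           int delivered_data = 1;      /* head packet reaches receiver */
           queue -= 1;                  /* ...and leaves the bottleneck */
           int sndcnt = delivered_data; /* the PRR-CRB bound            */
           queue += sndcnt;             /* new data arrives at queue    */
           assert(queue == 10);         /* length is exactly constant   */
       }
       printf("queue length held constant at 10 packets\n");
       return 0;
   }

   Setting sndcnt to delivered_data + 1 in this model grows the queue
   by one packet per tick (forcing losses once a drop-tail queue is
   full), while delivered_data - 1 drains it, matching the claim that
   the bound is the most aggressive behavior that causes no forced
   losses here.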
Authors' Addresses

   Matt Mathis
   Google, Inc
   1600 Amphitheater Parkway
   Mountain View, California 93117
   USA

   Email: mattmathis@google.com

   Nandita Dukkipati
   Google, Inc
   1600 Amphitheater Parkway
   Mountain View, California 93117
   USA

   Email: nanditad@google.com

   Yuchung Cheng
   Google, Inc
   1600 Amphitheater Parkway
   Mountain View, California 93117
   USA

   Email: ycheng@google.com