TCP Maintenance Working Group                                 M. Mathis
Internet-Draft                                             N. Dukkipati
Intended status: Experimental                                  Y. Cheng
Expires: September 8, 2011                                  Google, Inc
                                                           March 7, 2011

                  Proportional Rate Reduction for TCP
           draft-mathis-tcpm-proportional-rate-reduction-00.txt

Abstract

This document describes a pair of experimental algorithms, Proportional
Rate Reduction (PRR) and Reduction Bound (RB), that improve the accuracy
of the amount of data sent by TCP during loss recovery.  Standard
congestion control requires that TCP and other protocols reduce their
congestion window in response to losses.  This window reduction
naturally occurs in the same round trip as the data retransmissions that
repair the losses, and is implemented by choosing not to transmit any
data in response to some ACKs arriving from the receiver.  Two widely
deployed algorithms are used to implement this window reduction: Fast
Recovery and Rate Halving.  Both algorithms are needlessly fragile under
a number of conditions, particularly when there is a burst of losses
such that the number of ACKs delivered is so small that the effective
window falls below ssthresh, the target value chosen by the congestion
control algorithm.  Proportional Rate Reduction avoids these excess
window reductions such that at the end of recovery the actual window
size will be as close as possible to the window size determined by the
congestion control algorithm.
It is patterned after Rate Halving, but uses the fraction appropriate
for the target window chosen by the congestion control algorithm.  In
addition, a second algorithm, Reduction Bound, monitors the total window
reduction due to all mechanisms, including application stalls and the
losses themselves, and inhibits further window reductions when possible.

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions
of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF).  Note that other groups may also distribute working
documents as Internet-Drafts.  The list of current Internet-Drafts is at
http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

This Internet-Draft will expire on September 8, 2011.

Copyright Notice

Copyright (c) 2011 IETF Trust and the persons identified as the document
authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions
Relating to IETF Documents (http://trustee.ietf.org/license-info) in
effect on the date of publication of this document.  Please review these
documents carefully, as they describe your rights and restrictions with
respect to this document.  Code Components extracted from this document
must include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Definitions
   3.  Algorithm
   4.  Algorithm Properties
   5.  Comparison to Fast Recovery and other algorithms
   6.  Packet Conservation Bound
   7.  Acknowledgements
   8.  Security Considerations
   9.  IANA Considerations
   Appendix A.  References
   Authors' Addresses

1. Introduction

This document describes a pair of experimental algorithms, Proportional
Rate Reduction (PRR) and Reduction Bound (RB), that improve the accuracy
of the amount of data sent by TCP during loss recovery.

Standard congestion control [RFC 5681] requires that TCP (and other
protocols) reduce their congestion window in response to losses.  Fast
Recovery, described in the same document, is the reference algorithm for
making this adjustment.  Its stated goal is to recover TCP's self clock
by relying on returning ACKs during recovery to clock more data into the
network.  Fast Recovery adjusts the window by waiting for one half RTT
of ACKs to pass before sending any data.  It is fragile because it
cannot compensate for the implicit window reduction caused by the losses
themselves, and is exposed to timeouts.
For example, if half of the data or ACKs are lost, Fast Recovery's
expected behavior would be to reduce the window by not sending in
response to the first half window of ACKs; but then it would receive no
further ACKs and would time out, because it failed to send anything at
all.

The Rate-Halving algorithm improves this situation by sending data on
alternate ACKs during recovery, such that after one RTT the window has
been halved.  Rate-Halving is implemented in Linux, after being
published only informally [RHweb], including an unfinished
Internet-Draft [RHID].  Rate-Halving also does not adequately compensate
for the implicit window reduction caused by the losses, and it assumes a
50% window reduction, which was completely standard at the time it was
written.  (Several modern congestion control algorithms, such as
CUBIC [CUBIC], can sometimes reduce the window by much less than 50%.)
As a consequence, Rate-Halving often allows the window to fall further
than necessary, reducing performance and increasing the risk of timeouts
if there are any additional losses.

Proportional Rate Reduction (PRR) avoids these excess window reductions
such that at the end of recovery the actual window size will be as close
as possible to the window size determined by the congestion control
algorithm.  It is patterned after Rate Halving, but uses the fraction
appropriate for the target window chosen by the congestion control
algorithm.  In addition, a second algorithm, Reduction Bound (RB),
monitors the total window reduction due to all mechanisms, including
application stalls and the losses themselves, and attempts to inhibit
further window reductions.

The foundation of Proportional Rate Reduction is Van Jacobson's packet
conservation principle: segments delivered to the receiver are used as
the clock to trigger sending additional segments into the network.  As
much as possible, Proportional Rate Reduction and Reduction Bound rely
on this self clock process, and are only slightly affected by the
accuracy of other estimators, such as pipe [RFC 3517] and cwnd.  This is
what gives the algorithms their precision in the presence of events that
cause uncertainty in other estimators.

Note that in the round trip following the detection of a loss, TCP has
to balance three partially conflicting actions: retransmitting the
missing data needed to repair the losses, sending as much new data as
possible to preserve TCP's self clock, and not sending data in response
to some of the ACKs in order to make the window adjustment prescribed by
the congestion control algorithm.  We use the term "Voluntary Window
Reduction" to refer to this last process: choosing not to send data in
response to an ACK that would otherwise permit it.

These algorithms are described as modifications to RFC 5681, TCP
Congestion Control, using concepts drawn from the pipe algorithm
[RFC 3517].  They are most accurate and most easily implemented with
SACK [RFC 2018], but they can be implemented without SACK.
2. Definitions

The following terms, parameters, and state variables are used as they
are defined in earlier documents:

RFC 3517: covered

RFC 5681: duplicate ACK, FlightSize, Receiver Maximum Segment Size
(RMSS)

We define some additional variables:

SACKd: The total number of bytes that the scoreboard indicates have been
delivered to the receiver.  This can be computed by scanning the
scoreboard and counting the total number of bytes covered by all SACK
blocks.

DeliveredData: The total number of bytes that the current ACK indicates
have been delivered to the receiver, relative to all past ACKs.  When
not in recovery, DeliveredData is the change in snd.una.  With SACK,
DeliveredData is not an estimator and can be computed precisely as the
change in snd.una plus the change in SACKd.  Note that if there are SACK
blocks and snd.una advances, the change in SACKd is typically negative.
In recovery without SACK, DeliveredData is estimated to be 1 RMSS on
duplicate acknowledgements, and on a subsequent partial or full ACK,
DeliveredData is estimated to be the change in snd.una, minus one RMSS
for each preceding duplicate ACK.

Note that DeliveredData is robust: for TCP using SACK, DeliveredData can
be computed precisely anywhere in the network just by inspecting the
returning ACKs.  The consequence of missing ACKs is that later ACKs will
show a larger DeliveredData, and for any TCP the sum of DeliveredData
must agree with the forward progress over the same time interval.

We introduce a local variable "sndcnt", which indicates exactly how many
bytes should be sent in response to each ACK while in recovery.  Note
that the decision of which data to send (e.g., retransmit missing data
or send more new data) is out of scope for this document.

3. Algorithm

At the beginning of recovery, initialize state.  This assumes a modern
congestion control algorithm, CongCtrlAlg(), that might set ssthresh to
something other than FlightSize/2:

   ssthresh = CongCtrlAlg()         // Target cwnd after recovery
   prr_delivered = 0                // Total bytes delivered during recovery
   prr_out = 0                      // Total bytes sent during recovery
   RecoverFS = snd.nxt - snd.una    // FlightSize at the start of recovery
   pipe = as defined in [RFC 3517]  // Estimated bytes in the network

On every ACK during recovery, compute:

   DeliveredData = delta(snd.una) + delta(SACKd)
   prr_delivered += DeliveredData
   pipe = (RFC 3517 pipe algorithm)
   if (pipe > ssthresh) {
      // Proportional Rate Reduction
      sndcnt = CEIL(prr_delivered * ssthresh / RecoverFS) - prr_out
   } else {
      // Reduction Bound
      sndcnt = MIN(ssthresh - pipe, prr_delivered - prr_out)
   }
   sndcnt = MAX(sndcnt, 0)          // Ensure sndcnt is not negative

On any data transmission or retransmission:

   prr_out += (data sent)           // strictly less than or equal to sndcnt
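The pseudocode above maps directly onto an implementation.  The
following minimal C sketch of the per-ACK computation is purely
illustrative: the struct and the helper names (prr_state, prr_init(),
prr_on_ack(), prr_on_send()) are hypothetical, not from any existing
stack, and a real implementation would derive pipe and DeliveredData
from its SACK scoreboard as described above.

   #include <stdint.h>

   /* Hypothetical per-connection PRR state, mirroring Section 3. */
   struct prr_state {
       uint32_t ssthresh;       /* target cwnd after recovery (bytes) */
       uint32_t prr_delivered;  /* total bytes delivered during recovery */
       uint32_t prr_out;        /* total bytes sent during recovery */
       uint32_t recover_fs;     /* FlightSize at the start of recovery */
   };

   /* Called once at the start of recovery. */
   static void prr_init(struct prr_state *s, uint32_t ssthresh,
                        uint32_t flight_size)
   {
       s->ssthresh = ssthresh;      /* chosen by CongCtrlAlg() */
       s->prr_delivered = 0;
       s->prr_out = 0;
       s->recover_fs = flight_size; /* snd.nxt - snd.una */
   }

   /* Called on every ACK during recovery.  delivered_data is
    * delta(snd.una) + delta(SACKd) for this ACK; pipe is the
    * RFC 3517 estimate of bytes outstanding in the network.
    * Returns sndcnt, the number of bytes this ACK permits. */
   static uint32_t prr_on_ack(struct prr_state *s,
                              uint32_t delivered_data, uint32_t pipe)
   {
       int64_t sndcnt;

       s->prr_delivered += delivered_data;
       if (pipe > s->ssthresh) {
           /* Proportional Rate Reduction; CEIL() done with
            * integer arithmetic: (a + c - 1) / c == CEIL(a / c). */
           sndcnt = ((int64_t)s->prr_delivered * s->ssthresh +
                     s->recover_fs - 1) / s->recover_fs;
           sndcnt -= s->prr_out;
       } else {
           /* Reduction Bound */
           int64_t a = (int64_t)s->ssthresh - pipe;
           int64_t b = (int64_t)s->prr_delivered - s->prr_out;
           sndcnt = a < b ? a : b;
       }
       return sndcnt > 0 ? (uint32_t)sndcnt : 0;  /* MAX(sndcnt, 0) */
   }

   /* Called on any transmission or retransmission of `bytes` bytes. */
   static void prr_on_send(struct prr_state *s, uint32_t bytes)
   {
       s->prr_out += bytes;
   }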
Algorithm summary: If pipe (the estimated amount of data in flight) is
larger than ssthresh (the target cwnd at the end of recovery), then
Proportional Rate Reduction spreads the Voluntary Window Reductions
across a full RTT, such that at the end of recovery (as prr_delivered
approaches RecoverFS) prr_out approaches ssthresh, the target value for
cwnd.  If there are excess losses such that pipe falls below ssthresh,
Reduction Bound first tries to hold pipe at ssthresh by undoing past
Voluntary Window Reductions (as long as prr_delivered > prr_out).  While
there are past Voluntary Window Reductions, a single ACK during recovery
can trigger sending multiple segments.  If there are too many losses,
then prr_delivered - prr_out will be exactly the same as DeliveredData
for the current ACK, resulting in sndcnt = DeliveredData, and there will
be no further Voluntary Window Reductions.

4. Algorithm Properties

Normally, Proportional Rate Reduction will spread Voluntary Window
Reductions out evenly across a full RTT.  This has the potential to
reduce the burstiness of Internet traffic, and could be considered a
type of soft pacing.  Theoretically, any pacing increases the
probability that different flows are interleaved, reducing the
opportunity for ACK compression and other phenomena that increase
traffic burstiness.  However, these effects have not been quantified.

If there are minimal losses, Proportional Rate Reduction will converge
to exactly the target window chosen by the congestion control algorithm.
Note that as TCP approaches the end of recovery, prr_delivered will
approach RecoverFS, and sndcnt will be computed such that prr_out
approaches ssthresh.

Implicit window reductions, due to multiple isolated losses during
recovery, cause later Voluntary Window Reductions to be skipped.  For
small numbers of losses, the window size ends at exactly the window
chosen by the congestion control algorithm.

For burst losses, earlier Voluntary Window Reductions can be undone by
sending extra segments in response to ACKs arriving later during
recovery.  Note that as long as some Voluntary Window Reductions are not
undone, the final value of pipe will be the same as ssthresh, the target
cwnd value chosen by the congestion control algorithm.

At every ACK, the cumulative data sent during recovery is strictly
bounded by the cumulative data delivered to the receiver during
recovery.  This property is referred to as the "Relentless bound",
because it parallels the congestion control algorithm used in Relentless
TCP [Relentless].  Any smaller bound implies that we unnecessarily gave
up an opportunity to transmit data, and any larger bound has
pathological behavior in some network topologies.  See Section 6 for a
further discussion of this property.

Proportional Rate Reduction with Reduction Bound improves the situation
when there are application stalls (e.g., when the sending application
does not queue data for transmission quickly enough or the receiver
stops advancing rwnd).  When there is an application stall early during
recovery, prr_out will fall behind the sum of the transmissions
permitted by sndcnt.  The missed opportunities to send due to stalls are
treated like banked Voluntary Window Reductions: specifically, they
cause prr_delivered - prr_out to be significantly positive.  If the
application catches up while TCP is still in recovery, TCP will send a
partial window burst to catch up to exactly where it would have been had
the application never stalled.  Although this burst might be viewed as
being hard on the network, this is exactly what happens every time there
is a partial RTT application stall while not in recovery.  We have made
the partial RTT stall behavior uniform in all states.  Improving this
behavior is out of scope for this document.
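To make these properties concrete, consider a small worked example (the
numbers here are hypothetical, chosen only for illustration, and are
counted in segments rather than bytes).  Suppose a sender enters
recovery with RecoverFS = 20 segments in flight and CongCtrlAlg() sets
ssthresh = 10 segments.  When half of the original flight has been
delivered (prr_delivered = 10), PRR permits a cumulative
CEIL(10 * 10 / 20) = 5 segments to have been sent, so TCP has skipped
sending on roughly every other ACK, much as Rate-Halving would.  As
recovery ends and prr_delivered approaches 20, CEIL(20 * 10 / 20) = 10 =
ssthresh, so prr_out converges to the target window.  If instead a burst
of losses drives pipe below ssthresh, say pipe = 7 while
prr_delivered - prr_out = 5 reductions are banked, Reduction Bound
permits sndcnt = MIN(10 - 7, 5) = 3 segments on that single ACK, undoing
banked Voluntary Window Reductions to pull pipe back toward ssthresh.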
Proportional Rate Reduction with Reduction Bound is significantly less
sensitive to errors in the pipe estimator.  While in recovery, pipe is
intrinsically an estimator, using incomplete information to guess
whether un-SACKed segments are actually lost or merely out-of-order in
the network.  Under some conditions pipe can have significant errors;
for example, when a burst of reordered data is presumed to be lost and
is retransmitted, but then the original data arrives before the
retransmission.  If the transmissions are regulated directly by pipe, as
they are in RFC 3517, then errors and discontinuities in the pipe
estimator can cause significant errors in the amount of data sent.  With
Proportional Rate Reduction with Reduction Bound, pipe merely determines
how sndcnt is computed from DeliveredData.  Since short-term errors in
pipe are smoothed out across multiple ACKs, and both Proportional Rate
Reduction and Reduction Bound converge to the same final window, errors
in the pipe estimator have less impact on the final outcome.  (This
needs to be tested better.)

5. Comparison to Fast Recovery and other algorithms

To compare PRR-RB to other recovery algorithms, consider how the
Voluntary Window Reductions are distributed during TCP recovery.  With
PRR they are spread evenly across the recovery RTT, such that the final
window is determined by the congestion control algorithm.

With Fast Recovery, the Voluntary Window Reductions all occur during the
first half of the recovery RTT, before TCP has a sufficient measure of
the total lost data or ACKs.  The possibility exists that TCP will
receive only half of the expected number of ACKs, and will "voluntarily"
reduce the window to zero, causing a timeout.  Fast Recovery does free
space at a bottleneck network queue more quickly, because the Voluntary
Window Reductions happen on average a quarter of an RTT earlier than
under PRR or Rate-Halving.  It is unknown whether this has any
significant effect on overall Internet traffic dynamics.

Rate-Halving also schedules the Voluntary Window Reductions on alternate
ACKs, but with insufficient attention to how low the window has fallen.

An alternative algorithm could transmit one segment in response to every
segment delivered to the receiver (the Relentless bound, see below)
until prr_out reaches ssthresh, and then stop transmitting entirely
until there is a full or partial ACK.  Although this approach minimizes
the chances of the actual window falling too low, it is likely to reduce
the robustness of the data retransmission and recovery strategy, because
algorithms to detect lost retransmissions require sending new data
following retransmissions [CITE?].

An even more aggressive algorithm could follow the Relentless bound all
the way to the end of recovery, and then make the window adjustment
after the end of recovery.  While this is the maximally aggressive
recovery strategy (see the next section), it has the potential to be
unfair, because delaying the window adjustment by one RTT will have an
adverse effect on other flows sharing the link.

[Add Concluding Remarks]
6. Packet Conservation Bound

Under all conditions and sequences of events during recovery, PRR-RB
strictly bounds the data transmitted to be equal to or less than the
amount of data delivered to the receiver.  We claim that this packet
conservation bound is the most aggressive algorithm that does not lead
to pathological behaviors (additional forced losses) in some
environments.  Furthermore, any less aggressive bound will result in
missed opportunities to safely send data without inordinate risk of
loss.  While we believe that this assertion might be formally provable,
we demonstrate it with a small thought experiment:

Imagine a network path that has insignificant delays in both directions,
except for the processing time and queue at a single bottleneck in the
forward path.  By insignificant delay, we mean that when a packet is
"served" at the head of the bottleneck queue, the following events all
happen in much less than one packet time at the bottleneck: the packet
arrives at the receiver; the receiver sends an ACK, which arrives at the
sender; the sender processes the ACK and sends some data; and the data
is queued at the bottleneck.

If sndcnt is set to DeliveredData, and nothing else is inhibiting
sending data, then clearly the data arriving at the bottleneck queue
will exactly replace the data that was just served at the head of the
queue, so the queue will have a constant length.  If the queue is drop
tail and full, then the queue will stay exactly full, even in the
presence of losses or reordering on the ACK path, and independent of
whether the data is in order or out-of-order (e.g., simple reordering or
loss recovery from an earlier RTT).  Any more aggressive algorithm,
sending additional data, will cause a queue overflow and loss.  Any less
aggressive algorithm will underfill the queue.  Therefore, setting
sndcnt to DeliveredData is the most aggressive algorithm that does not
cause forced losses in this simple network.  Relaxing the assumptions
(e.g., making delays more authentic and adding more flows, delayed ACKs,
etc.) increases the noise (jitter) in the system but does not change its
basic behavior.

Note that the congestion control algorithm implements a broader notion
of optimal that includes appropriate sharing of the network.  PRR-RB
will normally choose to send less data than permitted by this bound as
it brings TCP's actual window down to ssthresh, as chosen by the
congestion control algorithm.
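The thought experiment can be checked with a toy simulation.  The
following C sketch is purely illustrative (all names are hypothetical,
and the model collapses all delays to zero, exactly as the thought
experiment does): a full drop-tail queue is served one packet at a time,
and each delivery immediately triggers the sender to submit
sndcnt = DeliveredData packets, plus an optional "extra" to model a more
aggressive policy.

   #include <stdio.h>

   #define QLIMIT 100   /* drop-tail queue capacity, in packets */

   int main(void)
   {
       int qlen = QLIMIT;  /* queue starts exactly full */
       int drops = 0;
       int extra = 0;      /* packets beyond DeliveredData; try 1 */

       /* Serve 1000 packets; each service is one "packet time". */
       for (int t = 0; t < 1000; t++) {
           qlen--;                   /* packet served at the head ... */
           int delivered = 1;        /* ... and delivered; ACK returns */
           int sndcnt = delivered + extra;  /* sender's response */
           for (int i = 0; i < sndcnt; i++) {
               if (qlen < QLIMIT)
                   qlen++;           /* packet joins the queue */
               else
                   drops++;          /* queue overflow: forced loss */
           }
       }
       /* With extra == 0, qlen stays at QLIMIT and drops == 0;
        * with extra > 0, every round forces a loss. */
       printf("final qlen = %d, drops = %d\n", qlen, drops);
       return 0;
   }

Running this with extra = 0 leaves the queue exactly full with no
losses, while any extra > 0 forces a loss on every round, matching the
claim that sndcnt = DeliveredData is the most aggressive policy that
does not cause forced losses in this simple network.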
7. Acknowledgements

This draft is based in part on previous incomplete work by Matt Mathis,
Jeff Semke, and Jamshid Mahdavi [RHID], and was influenced by several
discussions with John Heffner.

8. Security Considerations

Proportional Rate Reduction does not change the risk profile for TCP.

Implementers that change PRR from counting bytes to counting segments
have to be cautious about the effects of ACK splitting attacks [SPLIT],
where the receiver acknowledges partial segments for the purpose of
confusing the sender's congestion accounting.

9. IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an
RFC.

Appendix A. References

TODO: A proper reference section.

[RFC 3517] Blanton, E., Allman, M., Fall, K., and L. Wang, "A
Conservative Selective Acknowledgment (SACK)-based Loss Recovery
Algorithm for TCP", April 2003.

[RFC 5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
Control", September 2009.

[RHweb] Mathis, M. and J. Mahdavi, "TCP Rate-Halving with Bounding
Parameters", http://www.psc.edu/networking/papers/FACKnotes/971219/,
December 1997.

[RHID] Mathis, M., Semke, J., Mahdavi, J., and K. Lahey, "The
Rate-Halving Algorithm for TCP Congestion Control",
http://www.psc.edu/networking/ftp/papers/draft-ratehalving.txt, work in
progress, last updated June 1999.

[CUBIC] Rhee, I. and L. Xu, "CUBIC: A new TCP-friendly high-speed TCP
variant", PFLDnet, February 2005.

Authors' Addresses

Matt Mathis
Google, Inc
1600 Amphitheater Parkway
Mountain View, California 93117
USA

Email: mattmathis@google.com

Nandita Dukkipati
Google, Inc
1600 Amphitheater Parkway
Mountain View, California 93117
USA

Email: nanditad@google.com

Yuchung Cheng
Google, Inc
1600 Amphitheater Parkway
Mountain View, California 93117
USA

Email: ycheng@google.com