TCP Maintenance Working Group                                  Y. Cheng
Internet-Draft                                              N. Cardwell
Intended status: Standards Track                           N. Dukkipati
Expires: January 14, 2021                                        P. Jha
                                                            Google, Inc
                                                          July 13, 2020

        RACK-TLP: a time-based efficient loss detection for TCP
                        draft-ietf-tcpm-rack-09

Abstract

   This document presents the RACK-TLP loss detection algorithm for
   TCP.  RACK-TLP uses per-segment transmit timestamps and selective
   acknowledgement (SACK) information, and has two parts: RACK ("Recent
   ACKnowledgment") starts fast recovery quickly using time-based
   inferences derived from ACK feedback.
   TLP ("Tail Loss Probe") leverages RACK and sends a probe packet to
   trigger ACK feedback, to avoid retransmission timeout (RTO) events.
   Compared to the widely used DUPACK threshold approach, RACK-TLP
   detects losses more efficiently when there are application-limited
   flights of data, lost retransmissions, or data packet reordering
   events.  It is intended to be an alternative to the DUPACK threshold
   approach specified in RFCs 5681 and 6675.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 14, 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Terminology
   2.  Introduction
     2.1.  Background
     2.2.  Motivation
   3.  RACK-TLP high-level design
     3.1.  RACK: time-based loss inferences from ACKs
     3.2.  TLP: sending one segment to probe losses quickly with RACK
     3.3.  RACK-TLP: reordering resilience with a time threshold
       3.3.1.  Reordering design rationale
       3.3.2.  Reordering window adaptation
     3.4.  An Example of RACK-TLP in Action: fast recovery
     3.5.  An Example of RACK-TLP in Action: RTO
     3.6.  Design Summary
   4.  Requirements
   5.  Definitions
     5.1.  Per-packet variables
     5.2.  Per-connection variables
   6.  RACK Algorithm Details
     6.1.  Upon transmitting a data segment
     6.2.  Upon receiving an ACK
     6.3.  Upon RTO expiration
   7.  TLP Algorithm Details
     7.1.  Initializing state
     7.2.  Scheduling a loss probe
     7.3.  Sending a loss probe upon PTO expiration
     7.4.  Detecting losses by the ACK of the loss probe
       7.4.1.  General case: detecting packet losses using RACK
       7.4.2.  Special case: detecting a single loss repaired by the
               loss probe
   8.  Discussion
     8.1.  Advantages and disadvantages
     8.2.  Relationships with other loss recovery algorithms
     8.3.  Interaction with congestion control
     8.4.  TLP recovery detection with delayed ACKs
     8.5.  RACK for other transport protocols
   9.  Security Considerations
   10. IANA Considerations
   11. Acknowledgments
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Authors' Addresses

1. Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

2. Introduction

   This document presents RACK-TLP, a TCP loss detection algorithm that
   improves upon the widely implemented DUPACK-counting approach in
   [RFC5681][RFC6675], and that is RECOMMENDED to be used as an
   alternative to that earlier approach.  RACK-TLP has two parts: RACK
   ("Recent ACKnowledgment") detects losses quickly using time-based
   inferences derived from ACK feedback.  TLP ("Tail Loss Probe")
   triggers ACK feedback by quickly sending a probe segment, to avoid
   retransmission timeout (RTO) events.
2.1. Background

   In traditional TCP loss recovery algorithms [RFC5681][RFC6675], a
   sender starts fast recovery when the number of DUPACKs received
   exceeds a threshold (DupThresh) that defaults to 3; this approach is
   referred to as DUPACK-counting in the rest of this document.  The
   sender also halves the congestion window during the recovery.  The
   rationale behind the partial window reduction is that congestion
   does not seem severe, since ACK clocking is still maintained.  The
   time elapsed in fast recovery can be just one round trip, e.g. if
   the sender uses SACK-based recovery [RFC6675] and the number of lost
   segments is small.

   If fast recovery is not triggered, or is triggered but fails to
   repair all the losses, then the sender resorts to RTO recovery.  The
   RTO timer interval is conservatively the smoothed RTT (SRTT) plus
   four times the RTT variation, lower-bounded to 1 second [RFC6298].
   Upon RTO timer expiration, the sender retransmits the first
   unacknowledged segment and resets the congestion window to the LOSS
   WINDOW value (by default 1 full-size segment [RFC5681]).  The
   rationale behind the congestion window reset is that an entire
   flight of data, and with it the ACK clock, was lost, which deserves
   a cautious response.  The sender then retransmits the rest of the
   data following the slow start algorithm [RFC5681].  The time elapsed
   in RTO recovery is one RTO interval plus the number of round trips
   needed to repair all the losses.

2.2. Motivation

   Fast recovery is the preferred form of loss recovery because it can
   potentially recover all losses on the time scale of a single round
   trip, with only a fractional congestion window reduction.  RTO
   recovery and the congestion window reset should ideally be the last
   resort, used only when the entire flight is lost.
   However, in addition to losing an entire flight of data, the
   following situations can unnecessarily force the sender into RTO
   recovery with traditional TCP loss recovery algorithms
   [RFC5681][RFC6675]:

   1.  Packet drops for short flows or at the end of an application
       data flight.  When the sender is limited by the application
       (e.g. structured request/response traffic), segments lost at the
       end of the application data transfer often can only be recovered
       by RTO.  Consider an example of losing only the last segment in
       a flight of 100 segments.  Lacking any DUPACK, the sender's RTO
       expires and reduces the congestion window to 1, and raises the
       congestion window to just 2 after the loss repair is
       acknowledged.  In contrast, any single segment loss occurring
       between the first and the 97th segment would result in fast
       recovery, which would only cut the window in half.

   2.  Lost retransmissions.  Heavy congestion or traffic policers can
       cause retransmissions to be lost again.  Lost retransmissions
       force a resort to RTO recovery, since DUPACK-counting does not
       detect the loss of retransmissions.  The slow start after RTO
       recovery could then cause burst losses again, which severely
       degrades performance [POLICER16].

   3.  Packet reordering.  Link-layer protocols (e.g., 802.11 block
       ACK), link bonding, or routers' internal load balancing (e.g.,
       ECMP) can deliver TCP segments out of order.  The degree of such
       reordering is usually within the order of the path round-trip
       time.  If the reordering degree is beyond DupThresh, DUPACK-
       counting can cause a spurious fast recovery and an unnecessary
       congestion window reduction.  To mitigate the issue, [RFC4653]
       adjusts DupThresh to half of the inflight size to tolerate a
       higher degree of reordering.  However, if more than half of the
       inflight is lost, then the sender has to resort to RTO recovery.

3. RACK-TLP high-level design

   RACK-TLP allows senders to recover losses more effectively in all
   three scenarios described in the previous section.  There are two
   design principles behind RACK-TLP.  The first principle is to detect
   losses via ACK events as much as possible, to repair losses at
   round-trip time scales.  The second principle is to gently probe the
   network to solicit additional ACK feedback, to avoid RTO expiration
   and the subsequent congestion window reset.  At a high level, the
   two principles are implemented in RACK and TLP, respectively.

3.1. RACK: time-based loss inferences from ACKs

   The rationale behind RACK is that if a segment is delivered out of
   order, then the segments sent chronologically before it were either
   lost or reordered.  This concept is not fundamentally different from
   [RFC5681][RFC6675][FACK].  RACK's key innovation is using per-
   segment transmission timestamps and widely deployed SACK options to
   conduct time-based inferences, instead of inferring losses by
   counting ACKs or SACKed sequences.  Time-based inferences are more
   robust than DUPACK-counting approaches because they have no
   dependence on flight size, and thus are effective for application-
   limited traffic.

   Conceptually, RACK starts a virtual timer for every data segment
   sent (including retransmissions).  Each timer expires dynamically
   based on the latest RTT measurements plus an additional delay budget
   to accommodate potential packet reordering (called the reordering
   window).  When a segment's timer expires, RACK marks the
   corresponding segment lost for retransmission.

   In reality, as an algorithm, RACK does not arm a timer for every
   segment sent, because doing so is unnecessary.  Instead, the sender
   records the most recent transmission time of every data segment
   sent, including retransmissions.
   For each ACK received, the sender calculates the latest RTT
   measurement (if eligible) and adjusts the expiration time of every
   segment sent but not yet delivered.  If a segment has expired, RACK
   marks it lost.

   Since the time-based logic of RACK applies equally to
   retransmissions and original transmissions, it can detect lost
   retransmissions as well: if a segment has been retransmitted but its
   most recent (re)transmission timestamp has expired, then after a
   reordering window it is marked lost.

3.2. TLP: sending one segment to probe losses quickly with RACK

   RACK infers losses from ACK feedback; however, in some cases ACKs
   are sparse, particularly when the inflight is small or when losses
   are high.  In some challenging cases the last few segments in a
   flight are lost.  With [RFC5681] or [RFC6675] the sender's RTO would
   expire and reset the congestion window, when in reality most of the
   flight has been delivered.

   Consider an example where a sender with a large congestion window
   transmits 100 new data segments after an application write, and only
   the last three segments are lost.  Without RACK-TLP, the RTO
   expires, the sender retransmits the first unacknowledged segment,
   and the congestion window slow-starts from 1.  After all the
   retransmissions are acknowledged, the congestion window has been
   increased to only 4.  The total delivery time for this application
   transfer is three RTTs plus one RTO, a steep cost given that only a
   tiny fraction of the flight was lost.  If instead the losses had
   occurred three segments sooner in the flight, then fast recovery
   would have recovered all losses within one round trip and would have
   avoided resetting the congestion window.

   Fast recovery would be preferable in such scenarios; TLP is designed
   to trigger the ACK feedback that RACK needs to enable it.
   After the last (100th) segment was originally sent, TLP sends the
   next available (new) segment or retransmits the last (highest-
   sequenced) segment in two round-trips to probe the network, hence
   the name "Tail Loss Probe".  The successful delivery of the probe
   solicits an ACK.  RACK uses this ACK to detect that the 98th and
   99th segments were lost, trigger fast recovery, and retransmit both
   successfully.  The total recovery time is four RTTs, and the
   congestion window is only partially reduced instead of being fully
   reset.  If the probe is also lost, then the sender invokes RTO
   recovery, resetting the congestion window.

3.3. RACK-TLP: reordering resilience with a time threshold

3.3.1. Reordering design rationale

   Upon receiving an ACK indicating an out-of-order data delivery, a
   sender cannot tell immediately whether that out-of-order delivery
   was a result of reordering or loss.  It can only distinguish between
   the two in hindsight, if the missing sequence ranges are filled in
   later without retransmission.  Thus a loss detection algorithm needs
   to budget some wait time -- a reordering window -- to try to
   disambiguate packet reordering from packet loss.

   The reordering window in the DUPACK-counting approach is implicitly
   defined as the time elapsed to receive acknowledgements for
   DupThresh-worth of out-of-order deliveries.  This approach is
   effective if the network reordering degree (in sequence distance) is
   smaller than DupThresh and at least DupThresh segments after the
   loss are acknowledged.  For cases where the reordering degree is
   larger than the default DupThresh of 3 packets, one alternative is
   to dynamically adapt DupThresh based on the FlightSize (e.g.
   adjusting DupThresh to half of the FlightSize [RFC4653]).  However,
   this does not work well with the following two types of reordering:

   1.  Application-limited flights where the last, non-full-sized
       segment is delivered first and then the remaining full-sized
       segments in the flight are delivered in order.  This reordering
       pattern can occur when segments traverse parallel forwarding
       paths.  In such scenarios the degree of reordering in packet
       distance is one segment less than the flight size.

   2.  A flight of segments that are delivered partially out of order.
       One cause for this pattern is wireless link-layer
       retransmissions with an inadequate reordering buffer at the
       receiver.  In such scenarios the wireless sender sends the data
       packets in order initially, but some are lost and then recovered
       by link-layer retransmissions; the wireless receiver delivers
       the TCP data packets in the order they are received, due to the
       inadequate reordering buffer.  The random wireless transmission
       errors in such scenarios cause the reordering degree, expressed
       in packet distance, to have highly variable values up to the
       flight size.

   In the above two cases the degree of reordering in packet distance
   is highly variable, making the DUPACK-counting approach ineffective,
   including dynamic adaptation variants like [RFC4653].  Instead, the
   degree of reordering measured as a time difference is in such cases
   usually within a single round-trip time.  This is because the
   packets either traverse slightly disjoint paths with similar
   propagation delays or are repaired quickly by the local access
   technology.  Hence, using a time threshold instead of a packet
   threshold strikes a middle ground, allowing a bounded degree of
   reordering resilience while still allowing fast recovery.  This is
   the rationale behind the RACK-TLP reordering resilience design.

   Specifically, RACK-TLP introduces a new dynamic reordering window
   parameter in time units, and the sender considers a data segment S
   lost if both conditions are met:

   1.  Another data segment sent later than S has been delivered.

   2.  S has not been delivered after the estimated round-trip time
       plus the reordering window.

   Note that condition (1) implies at least one round trip of time has
   elapsed since S was sent.

3.3.2. Reordering window adaptation

   The RACK reordering window adapts to the measured duration of
   reordering events, within reasonable and specific bounds, in order
   to disincentivize excessive reordering.  More specifically:

   1.  If the sender has not observed any reordering since the
       connection was established, then the RACK reordering window
       SHOULD be zero in either of the following cases:

       1.  After learning that three segments have been delivered out
           of order (e.g. receiving 3 DUPACKs per [RFC5681]); in turn,
           this will cause the RACK loss detection logic to trigger
           fast recovery.

       2.  During fast recovery or RTO recovery.

   2.  If the sender has observed some reordering since the connection
       was established, then the RACK reordering window SHOULD be set
       to a small fraction of the round-trip time, or zero if no
       round-trip time estimate is available.

   3.  The RACK reordering window MUST be bounded, and this bound
       SHOULD be SRTT.

   4.  If the receiver uses Duplicate Selective Acknowledgement (DSACK)
       [RFC2883], the sender SHOULD leverage the DSACK information to
       adaptively estimate the duration of reordering events.

   For short flows, the low initial reordering window is key to
   recovering quickly, at the cost of risking spurious retransmissions.
   The rationale is that spurious retransmissions for short flows are
   not expected to add excessive traffic to the network.  For long
   flows the design tolerates reordering within a round trip.  This
   handles reordering caused by path divergence on small time scales
   (reordering within the round-trip time of the shortest path).
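   The adaptation rules above can be sketched roughly as follows.  This
   is an illustrative Python sketch, not the draft's normative
   pseudocode: the function name, the in_recovery and dupacks inputs,
   and the use of 1/4 of the minimum RTT as the "small fraction" are
   assumptions made for illustration only.

```python
def rack_reo_wnd(reordering_seen, in_recovery, dupacks,
                 min_rtt, srtt, reo_wnd_mult=1):
    """Illustrative sketch of the RACK reordering window rules.

    reordering_seen: has any reordering been observed on this connection?
    in_recovery:     is the sender in fast recovery or RTO recovery?
    dupacks:         count of out-of-order deliveries observed so far
    min_rtt, srtt:   RTT estimates in seconds (None if unavailable)
    """
    # Rule 1: no reordering observed so far, and either DupThresh-worth
    # of out-of-order deliveries or an ongoing recovery: zero window,
    # which lets RACK trigger fast recovery immediately.
    if not reordering_seen and (dupacks >= 3 or in_recovery):
        return 0.0
    # Rule 2: otherwise use a small fraction of the round-trip time,
    # or zero if no RTT estimate is available yet.
    if min_rtt is None:
        return 0.0
    reo_wnd = (min_rtt / 4) * reo_wnd_mult  # the 1/4 fraction is assumed
    # Rule 3: the reordering window is bounded, and the bound is SRTT.
    return min(reo_wnd, srtt)
```

   For example, with no reordering observed and 3 DUPACKs the sketch
   returns a zero window, while a connection that has seen reordering
   gets a window of a fraction of the minimum RTT, capped at SRTT.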
   However, the fact that the initial reordering window is low, and
   that the reordering window's adaptive growth is bounded, means that
   there will continue to be a cost to reordering, to disincentivize
   excessive network reordering over highly disjoint paths.  For such
   networks there are good alternative solutions, such as MPTCP.

3.4. An Example of RACK-TLP in Action: fast recovery

   The following example in Figure 1 illustrates the RACK-TLP algorithm
   in action:

   Event  TCP DATA SENDER                    TCP DATA RECEIVER
   _____  ____________________________________________________________
   1.     Send P0, P1, P2, P3 -->
          [P1, P2, P3 dropped by network]

   2.                                   <--  Receive P0, ACK P0

   3a.    2 RTTs after (2), TLP timer fires
   3b.    TLP: retransmits P3 -->

   4.                                   <--  Receive P3, SACK P3

   5a.    Receive SACK for P3
   5b.    RACK: marks P1, P2 lost
   5c.    Retransmit P1, P2 -->
          [P1 retransmission dropped by network]

   6.                                   <--  Receive P2, SACK P2 & P3

   7a.    RACK: marks P1 retransmission lost
   7b.    Retransmit P1 -->

   8.                                   <--  Receive P1, ACK P3

                                Figure 1.

   Figure 1, above, illustrates a sender sending four segments (P0, P1,
   P2, P3) and losing the last three of them.  After two round trips,
   TLP sends a loss probe, retransmitting the last segment, P3, to
   solicit SACK feedback and restore the ACK clock (event 3).  The
   delivery of P3 enables RACK to infer (event 5b) that P1 and P2 were
   likely lost, because they were sent before P3.  The sender then
   retransmits P1 and P2.  Unfortunately, the retransmission of P1 is
   lost again.  However, the delivery of the retransmission of P2
   allows RACK to infer that the retransmission of P1 was likely lost
   (event 7a), and hence that P1 should be retransmitted again (event
   7b).

3.5. An Example of RACK-TLP in Action: RTO

   In addition to enhancing fast recovery, RACK improves the accuracy
   of RTO recovery by reducing spurious retransmissions.
   Without RACK, upon RTO timer expiration the sender marks all the
   unacknowledged segments lost.  This approach can lead to spurious
   retransmissions.  For example, consider a simple case where one
   segment was sent with an RTO of 1 second, and then the application
   writes more data, causing a second and third segment to be sent
   right before the RTO of the first segment expires.  Suppose only the
   first segment is lost.  Without RACK, upon RTO expiration the sender
   marks all three segments as lost and retransmits the first segment.
   When the sender receives the ACK that selectively acknowledges the
   second segment, the sender spuriously retransmits the third segment.

   With RACK, upon RTO timer expiration the only segment automatically
   marked lost is the first segment (since it was sent an RTO ago); for
   every other segment, RACK only marks the segment lost if at least
   one round trip has elapsed since the segment was transmitted.
   Consider the previous example scenario, this time with RACK.  When
   the RTO expires, the sender only marks the first segment as lost,
   and retransmits that segment.  The other two very recently sent
   segments are not marked lost, because they were sent less than one
   round trip ago and there were no ACKs providing evidence that they
   were lost.  When the sender receives the ACK that selectively
   acknowledges the second segment, the sender does not retransmit the
   third segment but rather sends any new segments (if allowed by the
   congestion window and receive window).

   In the above example, if the sender were to send a large burst of
   segments instead of two segments right before the RTO, without RACK
   the sender might spuriously retransmit almost the entire flight
   [RACK-TCPM97].  Note that the Eifel protocol [RFC3522] cannot
   prevent this issue, because it can only detect spurious RTO
   episodes; in this example the RTO itself was not spurious.
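   The marking rule described in this section can be sketched as
   follows.  This is a hypothetical Python illustration, not the
   draft's normative algorithm (Section 6.3 specifies that); the
   segment representation and parameter names are assumptions.

```python
def rto_mark_lost(unacked_segments, first_unacked, now, rtt, reo_wnd):
    """On RTO expiration, mark only segments that are plausibly lost.

    unacked_segments: list of dicts with 'seq' and 'xmit_ts' keys
    first_unacked:    sequence number of the first unacked segment
    Returns the list of sequence numbers marked lost.
    """
    lost = []
    for seg in unacked_segments:
        if seg['seq'] == first_unacked:
            # The first unacked segment was sent at least an RTO ago,
            # so it is marked lost unconditionally.
            lost.append(seg['seq'])
        elif now - seg['xmit_ts'] >= rtt + reo_wnd:
            # Other segments are marked lost only if at least a round
            # trip (plus the reordering window) has elapsed since they
            # were sent.
            lost.append(seg['seq'])
    return lost

# Example from the text: one old segment, two just-sent segments.
segs = [{'seq': 1, 'xmit_ts': 0.0},
        {'seq': 2, 'xmit_ts': 1.09},
        {'seq': 3, 'xmit_ts': 1.09}]
print(rto_mark_lost(segs, first_unacked=1, now=1.1, rtt=0.2, reo_wnd=0.0))
# -> [1]: only the first segment is marked lost
```

   With this rule, the two recently sent segments stay eligible for
   delivery, avoiding the spurious retransmission of the third segment
   described above.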
3.6. Design Summary

   To summarize, RACK-TLP aims to adapt to small time-varying degrees
   of reordering, quickly recover most losses within one to two round
   trips, and avoid costly RTO recoveries.  In the presence of
   reordering, the adaptation algorithm can impose sometimes-needless
   delays when it waits to disambiguate loss from reordering, but the
   penalty for waiting is bounded to one round trip and such delays are
   confined to flows long enough to have observed reordering.

4. Requirements

   The reader is expected to be familiar with the definitions given in
   the TCP congestion control [RFC5681] and selective acknowledgment
   [RFC2018][RFC6675] RFCs.  RACK-TLP has the following requirements:

   1.  The connection MUST use selective acknowledgment (SACK) options
       [RFC2018], and the sender keeps SACK scoreboard information on a
       per-connection basis ([RFC6675] section 3).

   2.  For each data segment sent, the sender MUST store its most
       recent transmission time with a timestamp whose granularity is
       finer than 1/4 of the minimum RTT of the connection.  At the
       time of writing, microsecond resolution is suitable for intra-
       datacenter traffic and millisecond granularity or finer is
       suitable for the Internet.  Note that RACK-TLP can be
       implemented with TSO (TCP Segmentation Offload) support by
       having multiple segments in a TSO aggregate share the same
       timestamp.

   3.  RACK DSACK-based reordering window adaptation is RECOMMENDED but
       is not required.

   4.  TLP requires RACK.

5. Definitions

   The reader is expected to be familiar with the variables SND.UNA,
   SND.NXT, SEG.ACK, and SEG.SEQ in [RFC793], SMSS and FlightSize in
   [RFC5681], DupThresh in [RFC6675], and RTO and SRTT in [RFC6298].  A
   RACK-TLP implementation needs to store new per-packet and per-
   connection state, described below.

5.1. Per-packet variables

   These variables indicate the status of the most recent transmission
   of a data segment:

   "Segment.lost" is true if the most recent (re)transmission of the
   segment has been marked lost and needs to be retransmitted.  False
   otherwise.

   "Segment.retransmitted" is true if the most recent transmission of
   the segment was a retransmission.  False otherwise.

   "Segment.xmit_ts" is the time of the last transmission of the data
   segment, including retransmissions, if any, with a clock granularity
   specified in the Requirements section.

   "Segment.end_seq" is the next sequence number after the last
   sequence number of the data segment.

5.2. Per-connection variables

   "RACK.segment".  Among all the segments that have been either
   selectively or cumulatively acknowledged, RACK.segment is the one
   that was sent most recently (including retransmissions).

   "RACK.xmit_ts" is the latest transmission timestamp of RACK.segment.

   "RACK.end_seq" is the Segment.end_seq of RACK.segment.

   "RACK.ack_ts" is the time when the full sequence range of
   RACK.segment was selectively or cumulatively acknowledged.

   "RACK.segs_sacked" is the total number of segments selectively
   acknowledged in the SACK scoreboard.

   "RACK.fack" is the highest selectively or cumulatively acknowledged
   sequence (i.e. forward acknowledgement).

   "RACK.min_RTT" is the estimated minimum round-trip time (RTT) of the
   connection.

   "RACK.rtt" is the RTT of the most recently delivered segment on the
   connection (either cumulatively acknowledged or selectively
   acknowledged) that was not marked invalid as a possible spurious
   retransmission.

   "RACK.reordering_seen" indicates whether the sender has detected
   data segment reordering event(s).

   "RACK.reo_wnd" is a reordering window computed in the unit of time
   used for recording segment transmission times.
It is used to defer 554 the moment at which RACK marks a segment lost. 556 "RACK.dsack" indicates if a DSACK option has been received since the 557 last RACK.reo_wnd change. 559 "RACK.reo_wnd_mult" is the multiplier applied to adjust RACK.reo_wnd. 561 "RACK.reo_wnd_persist" is the number of loss recoveries before 562 resetting RACK.reo_wnd 564 "RACK.rtt_seq" is the SND.NXT when RACK.rtt is updated. 566 "TLP.is_retrans": a boolean indicating whether there is an 567 unacknowledged TLP retransmission. 569 "TLP.end_seq": the value of SND.NXT at the time of sending a TLP 570 retransmission. 572 "TLP.max_ack_delay": sender's maximum delayed ACK timer budget. 574 Per-connection timers 576 "RACK reordering timer": a timer that allows RACK to wait for 577 reordering to resolve, to try to disambiguate reordering from loss, 578 when some out-of-order segments are marked as SACKed. 580 "TLP PTO": a timer event indicating that an ACK is overdue and the 581 sender should transmit a TLP segment, to solicit SACK or ACK 582 feedback. 584 These timers augment the existing timers maintained by a sender, 585 including the RTO timer [RFC6298]. A RACK-TLP sender arms one of 586 these three timers -- RACK reordering timer, TLP PTO timer, or RTO 587 timer -- when it has unacknowledged segments in flight. The 588 implementation can simplify managing all three timers by multiplexing 589 a single timer among them with an additional variable to indicate the 590 event to invoke upon the next timer expiration. 592 6. RACK Algorithm Details 594 6.1. Upon transmitting a data segment 596 Upon transmitting a new segment or retransmitting an old segment, 597 record the time in Segment.xmit_ts and set Segment.lost to FALSE. 598 Upon retransmitting a segment, set Segment.retransmitted to TRUE. 600 RACK_transmit_data(Segment): 601 Segment.xmit_ts = Now() 602 Segment.lost = FALSE 604 RACK_retransmit_data(Segment): 605 Segment.retransmitted = TRUE 606 RACK_transmit_data(Segment) 608 6.2. 
Upon receiving an ACK

Step 1: Update RACK.min_RTT.

Use the RTT measurements obtained via [RFC6298] or [RFC7323] to update the estimated minimum RTT in RACK.min_RTT. The sender SHOULD track either a simple global minimum of all RTT measurements from the connection, or a windowed min-filtered estimate of recent RTT measurements.

Step 2: Update state for the most recently sent segment that has been delivered

In this step, RACK updates the state that tracks the most recently sent segment that has been delivered: RACK.segment. RACK maintains its latest transmission timestamp in RACK.xmit_ts and its highest sequence number in RACK.end_seq. These two variables are used, in later steps, to estimate whether some segments not yet delivered were likely lost. Given the information provided in an ACK, each segment cumulatively ACKed or SACKed is marked as delivered in the scoreboard. Since an ACK can also acknowledge retransmitted data segments, and retransmissions can be spurious, the sender needs to take care to avoid spurious inferences. For example, if the sender were to use timing information from a spurious retransmission, the RACK.rtt could be vastly underestimated.

To avoid spurious inferences, ignore a segment as invalid if any of its sequence range has been retransmitted before and either of two conditions is true:

1. The Timestamp Echo Reply field (TSecr) of the ACK's timestamp option [RFC7323], if available, indicates the ACK was not acknowledging the last retransmission of the segment.

2. The segment was last retransmitted less than RACK.min_RTT ago.

The second check is a heuristic for when the TCP Timestamp option is not available, or when the round-trip time is less than the TCP Timestamp clock granularity.
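Step 1 permits either a global minimum or a windowed min-filter for RACK.min_RTT. As a non-normative illustration of the windowed option, the following Python sketch (the class name and window lengths are invented for this example) retains only the samples that can still become the minimum:

```python
from collections import deque

class WindowedMinRTT:
    """Illustrative windowed min-filter for RACK.min_RTT (not part of
    the specification). Samples older than `window` seconds expire; a
    monotonic deque keeps updates O(1) amortized."""

    def __init__(self, window=300.0):
        self.window = window      # filter length in seconds (invented value)
        self.samples = deque()    # (timestamp, rtt) pairs, rtt increasing

    def update(self, now, rtt):
        # Expire samples that fell out of the time window.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()
        # A newer, smaller sample makes older, larger ones irrelevant.
        while self.samples and self.samples[-1][1] >= rtt:
            self.samples.pop()
        self.samples.append((now, rtt))

    def min_rtt(self):
        return self.samples[0][1] if self.samples else None

f = WindowedMinRTT(window=10.0)
f.update(0.0, 0.050)
f.update(1.0, 0.040)   # new minimum
f.update(2.0, 0.060)
f.update(12.0, 0.055)  # the 0.040 sample has aged out by now
```

A sender tracking the simple global minimum instead would keep the smallest sample ever seen, at the cost of never forgetting a stale low sample after a path change.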
Among all the segments newly ACKed or SACKed by this ACK that pass the checks above, update the RACK.rtt to be the RTT sample calculated using this ACK. Furthermore, record the most recent Segment.xmit_ts in RACK.xmit_ts if it is ahead of RACK.xmit_ts. If Segment.xmit_ts equals RACK.xmit_ts (e.g. due to clock granularity limits) then compare Segment.end_seq and RACK.end_seq to break the tie.

Step 2 may be summarized in pseudocode as:

   RACK_sent_after(t1, seq1, t2, seq2):
       If t1 > t2:
           Return true
       Else if t1 == t2 AND seq1 > seq2:
           Return true
       Else:
           Return false

   RACK_update():
       For each Segment newly acknowledged cumulatively or selectively:
           rtt = Now() - Segment.xmit_ts
           If Segment.retransmitted is TRUE:
               /* Skip a possibly spurious retransmission */
               If ACK.ts_option.echo_reply < Segment.xmit_ts:
                   Continue
               If rtt < RACK.min_RTT:
                   Continue

           RACK.rtt = rtt
           If RACK_sent_after(Segment.xmit_ts, Segment.end_seq,
                              RACK.xmit_ts, RACK.end_seq):
               RACK.xmit_ts = Segment.xmit_ts
               RACK.end_seq = Segment.end_seq

Step 3: Detect data segment reordering

To detect reordering, the sender looks for original data segments being delivered out of order. To detect such cases, the sender tracks the highest sequence selectively or cumulatively acknowledged in the RACK.fack variable. The name "fack" stands for the most "Forward ACK" (this term is adopted from [FACK]). If a never-retransmitted segment that's below RACK.fack is (selectively or cumulatively) acknowledged, it has been delivered out of order. The sender sets RACK.reordering_seen to TRUE if such a segment is identified.
   RACK_detect_reordering():
       For each Segment newly acknowledged cumulatively or selectively:
           If Segment.end_seq > RACK.fack:
               RACK.fack = Segment.end_seq
           Else if Segment.end_seq < RACK.fack AND
                   Segment.retransmitted is FALSE:
               RACK.reordering_seen = TRUE

Step 4: Update RACK reordering window

The RACK reordering window, RACK.reo_wnd, serves as an adaptive allowance for settling time before marking a segment lost. This step documents a detailed algorithm that follows the principles outlined in the ``RACK reordering window adaptation'' section.

If the sender has not yet observed any reordering based on the previous step, then RACK prioritizes quick loss recovery by setting RACK.reo_wnd to 0 when the number of SACKed segments is at least DupThresh, or during loss recovery.

Aside from those special conditions, RACK starts with a conservative reordering window of RACK.min_RTT/4. This value was chosen because Linux TCP used the same factor in its implementation to delay Early Retransmit [RFC5827] to reduce spurious loss detections in the presence of reordering, and experience showed this worked reasonably well [DMCG11].

However, the reordering detection in the previous step, Step 3, has a self-reinforcing drawback when the reordering window is too small to cope with the actual reordering. When that happens, RACK could spuriously mark reordered segments lost, causing them to be retransmitted. In turn, the retransmissions can prevent the necessary conditions for Step 3 to detect reordering, since this mechanism requires ACKs or SACKs only for segments that have never been retransmitted. In some cases such scenarios can persist, causing RACK to continue to spuriously mark segments lost without realizing the reordering window is too small.
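As a non-normative illustration, the Step 3 detection conditions can be exercised against a concrete trace; the Seg record and the event sequence below are invented for this sketch:

```python
# Illustrative Python mirror of RACK_detect_reordering (Step 3); not
# part of the specification. Seg fields and the trace are invented.
from collections import namedtuple

Seg = namedtuple("Seg", ["end_seq", "retransmitted"])

def detect_reordering(newly_acked, state):
    """state holds 'fack' and 'reordering_seen', per Section 5.2."""
    for seg in newly_acked:
        if seg.end_seq > state["fack"]:
            state["fack"] = seg.end_seq
        elif seg.end_seq < state["fack"] and not seg.retransmitted:
            # A never-retransmitted segment below the forward ACK
            # point was just delivered: it arrived out of order.
            state["reordering_seen"] = True

state = {"fack": 0, "reordering_seen": False}
detect_reordering([Seg(end_seq=3000, retransmitted=False)], state)  # P3 SACKed first
detect_reordering([Seg(end_seq=1000, retransmitted=False)], state)  # P1 arrives late
```

Note how a late-arriving segment with retransmitted=True would not set reordering_seen, which is exactly the self-reinforcing limitation described above.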
729 To avoid the issue above, RACK dynamically adapts to higher degrees 730 of reordering using DSACK options from the receiver. Receiving an 731 ACK with a DSACK option indicates a spurious retransmission, 732 suggesting that RACK.reo_wnd may be too small. The RACK.reo_wnd 733 increases linearly for every round trip in which the sender receives 734 some DSACK option, so that after N distinct round trips in which a 735 DSACK is received, the RACK.reo_wnd becomes (N+1) * min_RTT / 4, with 736 an upper-bound of SRTT. 738 If the reordering is temporary then a large adapted reordering window 739 would unnecessarily delay loss recovery later. Therefore, RACK 740 persists the inflated RACK.reo_wnd for only 16 loss recoveries, after 741 which it resets RACK.reo_wnd to its starting value, min_RTT / 4. The 742 downside of resetting the reordering window is the risk of triggering 743 spurious fast recovery episodes if the reordering remains high. The 744 rationale for this approach is to bound such spurious recoveries to 745 approximately once every 16 recoveries (less than 7%). 747 To track the linear scaling factor for the adaptive reordering 748 window, RACK uses the variable RACK.reo_wnd_mult, which is 749 initialized to 1 and adapts with the following pseudocode, which 750 implements the above algorithm: 752 RACK_update_reo_wnd(): 754 /* DSACK-based reordering window adaptation */ 755 If RACK.dsack_round is not None AND 756 SND.UNA >= RACK.dsack_round: 757 RACK.dsack_round = None 758 /* Grow the reordering window per round that sees DSACK. 
759 Reset the window after 16 DSACK-free recoveries */ 760 If RACK.dsack_round is None AND 761 any DSACK option is present on latest received ACK: 762 RACK.dsack_round = SND.NXT 763 RACK.reo_wnd_mult += 1 764 RACK.reo_wnd_persist = 16 765 Else if exiting Fast or RTO recovery: 766 RACK.reo_wnd_persist -= 1 767 If RACK.reo_wnd_persist <= 0: 768 RACK.reo_wnd_mult = 1 770 If RACK.reordering_seen is FALSE: 771 If in Fast or RTO recovery: 772 Return 0 773 Else if RACK.segs_sacked >= DupThresh: 774 Return 0 775 Return min(RACK.min_RTT / 4 * RACK.reo_wnd_mult, SRTT) 777 Step 5: Detect losses. 779 For each segment that has not been SACKed, RACK considers that 780 segment lost if another segment that was sent later has been 781 delivered, and the reordering window has passed. RACK considers the 782 reordering window to have passed if the RACK.segment was sent 783 sufficiently after the segment in question, or a sufficient time has 784 elapsed since the RACK.segment was S/ACKed, or some combination of 785 the two. More precisely, RACK marks a segment lost if: 787 RACK.xmit_ts >= Segment.xmit_ts 788 AND 789 (RACK.xmit_ts - Segment.xmit_ts) + (now - RACK.ack_ts) >= RACK.reo_wnd 791 Solving this second condition for "now", the moment at which a 792 segment is marked lost, yields: 794 now >= Segment.xmit_ts + RACK.reo_wnd + (RACK.ack_ts - RACK.xmit_ts) 796 Then (RACK.ack_ts - RACK.xmit_ts) is the round trip time of the most 797 recently (re)transmitted segment that's been delivered. When 798 segments are delivered in order, the most recently (re)transmitted 799 segment that's been delivered is also the most recently delivered, 800 hence RACK.rtt == RACK.ack_ts - RACK.xmit_ts. But if segments were 801 reordered, then the segment delivered most recently was sent before 802 the most recently (re)transmitted segment. Hence RACK.rtt > 803 (RACK.ack_ts - RACK.xmit_ts). 
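The rearrangement above (solving the second condition for "now") can be sanity-checked numerically; the two predicates below restate the draft's conditions, while all concrete timing values are invented for illustration:

```python
# Check that the Step 5 condition and its solved-for-"now" form agree.
# Timing values are arbitrary illustration units, not from the draft.
def cond_original(now, seg_xmit_ts, rack_xmit_ts, rack_ack_ts, reo_wnd):
    return (rack_xmit_ts - seg_xmit_ts) + (now - rack_ack_ts) >= reo_wnd

def cond_solved(now, seg_xmit_ts, rack_xmit_ts, rack_ack_ts, reo_wnd):
    return now >= seg_xmit_ts + reo_wnd + (rack_ack_ts - rack_xmit_ts)

for now in range(130, 170, 3):          # sweep "now"
    for reo_wnd in (0, 5, 10, 25):      # sweep reordering windows
        assert (cond_original(now, 90, 100, 140, reo_wnd)
                == cond_solved(now, 90, 100, 140, reo_wnd))
```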
Since RACK.rtt >= (RACK.ack_ts - RACK.xmit_ts), the previous equation reduces to saying that the sender can declare a segment lost when:

   now >= Segment.xmit_ts + RACK.reo_wnd + RACK.rtt

In turn, that is equivalent to stating that a RACK sender should declare a segment lost when:

   Segment.xmit_ts + RACK.rtt + RACK.reo_wnd - now <= 0

Note that if the value on the left-hand side is positive, it represents the remaining wait time before the segment is deemed lost. But this risks a timeout (RTO) if no more ACKs come back (e.g., due to losses or application-limited transmissions) to trigger the marking. For timely loss detection, the sender is RECOMMENDED to install a reordering timer. This timer expires at the earliest moment when RACK would conclude that all the unacknowledged segments within the reordering window were lost.

The following pseudocode implements the algorithm above. When an ACK is received or the RACK reordering timer expires, call RACK_detect_loss_and_arm_timer(). The algorithm breaks timestamp ties by using the TCP sequence space, since high-speed networks often have multiple segments with identical timestamps.
   RACK_detect_loss():
       timeout = 0
       RACK.reo_wnd = RACK_update_reo_wnd()
       For each segment, Segment, not acknowledged yet:
           If Segment.lost is TRUE AND Segment.retransmitted is FALSE:
               Continue /* Segment lost but not yet retransmitted */

           If RACK_sent_after(RACK.xmit_ts, RACK.end_seq,
                              Segment.xmit_ts, Segment.end_seq):
               remaining = Segment.xmit_ts + RACK.rtt +
                           RACK.reo_wnd - Now()
               If remaining <= 0:
                   Segment.lost = TRUE
               Else:
                   timeout = max(remaining, timeout)
       Return timeout

   RACK_detect_loss_and_arm_timer():
       timeout = RACK_detect_loss()
       If timeout != 0:
           Arm the RACK timer to call
           RACK_detect_loss_and_arm_timer() after timeout

As an optimization, an implementation can choose to check only segments that have been sent before RACK.xmit_ts. This can be more efficient than scanning the entire SACK scoreboard, especially when there are many segments in flight. The implementation can use a separate doubly-linked list ordered by Segment.xmit_ts, insert a segment at the tail of the list when it is (re)transmitted, and remove a segment from the list when it is delivered or marked lost. In Linux TCP this optimization improved CPU usage by orders of magnitude during some fast recovery episodes on high-speed WAN networks.

6.3. Upon RTO expiration

Upon RTO timer expiration, RACK marks the first outstanding segment as lost (since it was sent an RTO ago); for all the other segments RACK only marks a segment lost if the time elapsed since the segment was transmitted is at least the sum of the recent RTT and the reordering window.

   RACK_mark_losses_on_RTO():
       For each segment, Segment, not acknowledged yet:
           If SEG.SEQ == SND.UNA OR
              Segment.xmit_ts + RACK.rtt + RACK.reo_wnd - Now() <= 0:
               Segment.lost = TRUE

7. TLP Algorithm Details

7.1.
Initializing state

Reset TLP.is_retrans and TLP.end_seq when initiating a connection, fast recovery, or RTO recovery.

   TLP.is_retrans = false
   TLP.end_seq = None

7.2. Scheduling a loss probe

The sender schedules a loss probe timeout (PTO) to transmit a segment during the normal transmission process. The sender SHOULD start or restart a loss probe PTO timer after transmitting new data (that was not itself a loss probe) or upon receiving an ACK that cumulatively acknowledges new data, unless it is already in fast recovery, in RTO recovery, or has segments delivered out of order (i.e. RACK.segs_sacked is not zero). These conditions are excluded because they are addressed by similar mechanisms, like Limited Transmit [RFC3042], the RACK reordering timer, and F-RTO [RFC5682]. Further, prior to scheduling a PTO the sender SHOULD cancel any pending PTO, RTO, RACK reordering timer, or zero window probe (ZWP) timer [RFC793].

The sender calculates the PTO interval by taking into account a number of factors.

First, the default PTO interval is 2*SRTT. By that time, it is prudent to declare that an ACK is overdue, since under normal circumstances, i.e. no losses, an ACK typically arrives in one SRTT. Choosing PTO to be exactly an SRTT would risk causing spurious probes, given that network and end-host delay variance can cause an ACK to be delayed beyond SRTT. Hence the PTO is conservatively chosen to be the next integral multiple of SRTT.

Second, when there is no SRTT estimate available, the PTO SHOULD be 1 second. This conservative value corresponds to the RTO value when no SRTT is available, per [RFC6298].

Third, when FlightSize is one segment, the sender MAY inflate PTO by TLP.max_ack_delay to accommodate a potential delayed acknowledgment and reduce the risk of spurious retransmissions. The actual value of TLP.max_ack_delay is implementation-specific.
Finally, if the time at which an RTO would fire (here denoted "TCP_RTO_expiration()") is sooner than the computed time for the PTO, then the sender schedules a TLP to be sent at that RTO time.

Summarizing these considerations in pseudocode form, a sender SHOULD use the following logic to select the duration of a PTO:

   TLP_calc_PTO():
       If SRTT is available:
           PTO = 2 * SRTT
           If FlightSize is one segment:
               PTO += TLP.max_ack_delay
       Else:
           PTO = 1 sec

       If Now() + PTO > TCP_RTO_expiration():
           PTO = TCP_RTO_expiration() - Now()

7.3. Sending a loss probe upon PTO expiration

When the PTO timer expires, the sender SHOULD transmit a previously unsent data segment, if the receive window allows, and increment the FlightSize accordingly. Note that FlightSize could temporarily be one packet greater than the congestion window until the next ACK arrives.

If such a segment is not available, then the sender SHOULD retransmit the highest-sequence segment sent so far and set TLP.is_retrans to true. This segment is chosen in order to deal with the retransmission ambiguity problem in TCP. Suppose a sender sends N segments, and then retransmits the last segment (segment N) as a loss probe, and then the sender receives a SACK for segment N. As long as the sender waits for the RACK reordering window to expire, it doesn't matter if that SACK was for the original transmission of segment N or the TLP retransmission; in either case the arrival of the SACK for segment N provides evidence that the N-1 segments preceding segment N were likely lost.

In the case where there is only one original outstanding segment of data (N=1), the same logic (trivially) applies: an ACK for a single outstanding segment tells the sender the N-1=0 segments preceding that segment were lost.
Furthermore, whether there are N>1 or N=1 outstanding segments, there is a question about whether the original last segment or its TLP retransmission was lost; the sender estimates whether there was such a loss using TLP recovery detection (see below).

The sender MUST follow the RACK transmission procedures in the ''Upon Transmitting a Data Segment'' section (see above) upon sending either a retransmission or new data loss probe. This is critical for detecting losses using the ACK for the loss probe. Furthermore, prior to sending a loss probe, the sender MUST check that there is no other previous loss probe still in flight. This ensures that at any given time the sender has at most one additional packet in flight beyond the congestion window limit. This invariant is maintained using the state variable TLP.end_seq, which indicates the latest unacknowledged TLP loss probe's ending sequence. It is reset when the loss probe has been acknowledged or is deemed lost or irrelevant. After attempting to send a loss probe, regardless of whether a loss probe was sent, the sender MUST re-arm the RTO timer, not the PTO timer, if FlightSize is not zero. This ensures RTO recovery remains the last resort if TLP fails. The following pseudocode summarizes the operations.

   TLP_send_probe():

       If TLP.end_seq is None:
           TLP.is_retrans = false
           Segment = send buffer segment starting at SND.NXT
           If Segment exists and fits the peer receive window limit:
               /* Transmit the lowest-sequence unsent Segment */
               Transmit Segment
               RACK_transmit_data(Segment)
               TLP.end_seq = SND.NXT
               Increase FlightSize by Segment length
           Else:
               /* Retransmit the highest-sequence Segment sent */
               Segment = send buffer segment ending at SND.NXT
               Transmit Segment
               RACK_retransmit_data(Segment)
               TLP.end_seq = SND.NXT
               TLP.is_retrans = true

7.4.
Detecting losses by the ACK of the loss probe

When there is packet loss in a flight ending with a loss probe, the feedback solicited by a loss probe will reveal one of two scenarios, depending on the pattern of losses.

7.4.1. General case: detecting packet losses using RACK

If the loss probe and the ACK that acknowledges the probe are delivered successfully, RACK-TLP uses this ACK -- just as it would with any other ACK -- to detect if any segments sent prior to the probe were dropped. RACK would typically infer that any unacknowledged data segments sent before the loss probe were lost, since they were sent sufficiently far in the past (at least one PTO has elapsed, plus one round trip for the loss probe to be ACKed). More specifically, RACK_detect_loss() (step 5) would mark those earlier segments as lost. Then the sender would trigger a fast recovery to recover those losses.

7.4.2. Special case: detecting a single loss repaired by the loss probe

If the TLP retransmission repairs all the lost in-flight sequence ranges (i.e. only the last segment in the flight was lost), the ACK for the loss probe appears to be a regular cumulative ACK, which would not normally trigger the congestion control response to this packet loss event. The following TLP recovery detection mechanism examines ACKs to detect this special case, so that congestion control responds properly [RFC5681].

After a TLP retransmission, the sender checks for this special case of a single loss that is recovered by the loss probe itself. To accomplish this, the sender checks for a duplicate ACK or DSACK indicating that both the original segment and TLP retransmission arrived at the receiver, meaning there was no loss.
If the TLP sender does not receive such an indication, then it SHOULD assume that either the original data segment or the TLP retransmission was lost, for congestion control purposes.

If the TLP retransmission is spurious, a receiver that uses DSACK would return an ACK that covers TLP.end_seq with a DSACK option (Case 1). If the receiver does not support DSACK, it would return a DUPACK without any SACK option (Case 2). If the sender receives an ACK matching either case, then the sender estimates that the receiver received both the original data segment and the TLP probe retransmission, and so the sender considers the TLP episode to be done, and records that fact by setting TLP.end_seq to None.

Upon receiving an ACK that covers some sequence number after TLP.end_seq, the sender should by then have received any ACKs for the original segment and the TLP probe retransmission. At that time, if the TLP.end_seq is still set, and thus indicates that the TLP probe retransmission remains unacknowledged, then the sender should presume that at least one of its data segments was lost. The sender then SHOULD invoke a congestion control response equivalent to a fast recovery.

More precisely, on each ACK the sender executes the following:

   TLP_process_ack(ACK):
       If TLP.end_seq is not None AND SEG.ACK >= TLP.end_seq:
           If not TLP.is_retrans:
               TLP.end_seq = None   /* TLP of new data delivered */
           Else if ACK has a DSACK option matching TLP.end_seq:
               TLP.end_seq = None   /* Case 1, above */
           Else if SEG.ACK > TLP.end_seq:
               TLP.end_seq = None   /* Repaired the single loss */
               (Invoke congestion control to react on
                the loss event the probe has repaired)
           Else if ACK is a DUPACK without any SACK option:
               TLP.end_seq = None   /* Case 2, above */

8. Discussion

8.1.
Advantages and disadvantages 1080 The biggest advantage of RACK-TLP is that every data segment, whether 1081 it is an original data transmission or a retransmission, can be used 1082 to detect losses of the segments sent chronologically prior to it. 1083 This enables RACK-TLP to use fast recovery in cases with application- 1084 limited flights of data, lost retransmissions, or data segment 1085 reordering events. Consider the following examples: 1087 1. Packet drops at the end of an application data flight: Consider a 1088 sender that transmits an application-limited flight of three data 1089 segments (P1, P2, P3), and P1 and P3 are lost. Suppose the 1090 transmission of each segment is at least RACK.reo_wnd after the 1091 transmission of the previous segment. RACK will mark P1 as lost 1092 when the SACK of P2 is received, and this will trigger the 1093 retransmission of P1 as R1. When R1 is cumulatively 1094 acknowledged, RACK will mark P3 as lost and the sender will 1095 retransmit P3 as R3. This example illustrates how RACK is able 1096 to repair certain drops at the tail of a transaction without an 1097 RTO recovery. Notice that neither the conventional duplicate ACK 1098 threshold [RFC5681], nor [RFC6675], nor the Forward 1099 Acknowledgment [FACK] algorithm can detect such losses, because 1100 of the required segment or sequence count. 1102 2. Lost retransmission: Consider a flight of three data segments 1103 (P1, P2, P3) that are sent; P1 and P2 are dropped. Suppose the 1104 transmission of each segment is at least RACK.reo_wnd after the 1105 transmission of the previous segment. When P3 is SACKed, RACK 1106 will mark P1 and P2 lost and they will be retransmitted as R1 and 1107 R2. Suppose R1 is lost again but R2 is SACKed; RACK will mark R1 1108 lost and trigger retransmission again. Again, neither the 1109 conventional three duplicate ACK threshold approach, nor 1110 [RFC6675], nor the Forward Acknowledgment [FACK] algorithm can 1111 detect such losses. 
And such a lost retransmission can happen when TCP is being rate-limited, particularly by token bucket policers with large bucket depth and low rate limit; in such cases retransmissions are often lost repeatedly because standard congestion control requires multiple round trips to reduce the rate below the policed rate.

3. Packet reordering: Consider a simple reordering event where a flight of segments is sent as (P1, P2, P3). P1 and P2 carry a full payload of MSS octets, but P3 has only a 1-octet payload. Suppose the sender has detected reordering previously and thus RACK.reo_wnd is min_RTT/4. Now P3 is reordered and delivered first, before P1 and P2. As long as P1 and P2 are delivered within min_RTT/4, RACK will not consider P1 and P2 lost. But if P1 and P2 are delivered outside the reordering window, then RACK will still spuriously mark P1 and P2 lost.

The examples above show that RACK-TLP is particularly useful when the sender is limited by the application, which can happen with interactive or request/response traffic. Similarly, RACK still works when the sender is limited by the receive window, which can happen with applications that use the receive window to throttle the sender.

RACK-TLP works more efficiently with TCP Segmentation Offload (TSO) compared to DUPACK-counting. RACK always marks the entire TSO aggregate lost because the segments in the same TSO aggregate have the same transmission timestamp. By contrast, the algorithms based on sequence counting (e.g., [RFC6675][RFC5681]) may mark only a subset of segments in the TSO aggregate lost, forcing the stack to perform expensive fragmentation of the TSO aggregate, or to selectively tag individual segments lost in the scoreboard.

The main drawback of RACK-TLP is the additional state required compared to DUPACK-counting.
RACK requires the sender to record the transmission time of each segment sent at a clock granularity that is finer than 1/4 of the minimum RTT of the connection. TCP implementations that record this already for RTT estimation do not require any new per-packet state. But implementations that are not yet recording segment transmission times will need to add per-packet internal state (expected to be either 4 or 8 octets per segment or TSO aggregate) to track transmission times. In contrast, the [RFC6675] loss detection approach does not require any per-packet state beyond the SACK scoreboard; this is particularly useful on ultra-low RTT networks where the RTT may be less than the sender TCP clock granularity (e.g. inside data centers).

8.2. Relationships with other loss recovery algorithms

The primary motivation of RACK-TLP is to provide a general alternative to some of the standard loss recovery algorithms [RFC5681][RFC6675][RFC5827][RFC4653]. [RFC5827][RFC4653] dynamically adjust the duplicate ACK threshold based on the current or previous flight sizes. RACK-TLP takes a different approach by using a time-based reordering window. RACK-TLP can be seen as an extended Early Retransmit [RFC5827] without a FlightSize limit but with an additional reordering window. [FACK] considers an original segment to be lost when its sequence range is sufficiently far below the highest SACKed sequence. In some sense RACK-TLP can be seen as a generalized form of FACK that operates in time space instead of sequence space, enabling it to better handle reordering, application-limited traffic, and lost retransmissions.

RACK-TLP is compatible with the standard RTO [RFC6298], RTO-restart [RFC7765], F-RTO [RFC5682], and Eifel [RFC3522] algorithms. This is because RACK-TLP only detects loss by using ACK events.
It neither changes the RTO timer calculation nor detects spurious RTO.

8.3. Interaction with congestion control

RACK-TLP intentionally decouples loss detection from congestion control. RACK-TLP only detects losses; it does not modify the congestion control algorithm [RFC5681][RFC6937]. A segment marked lost by RACK-TLP MUST NOT be retransmitted until congestion control deems this appropriate.

The only exception -- the only way in which RACK-TLP modulates the congestion control algorithm -- is that one outstanding loss probe can be sent even if the congestion window is full. However, this temporary over-commit is accounted for and credited in the in-flight data tracked for congestion control, so that congestion control will erase the over-commit upon the next ACK.

If packet losses happen after the reordering window has been increased by DSACK, RACK-TLP may take longer to detect losses than the pure DUPACK-counting approach. In this case TCP may continue to increase the congestion window upon receiving ACKs during this time, making the sender more aggressive.

The following simple example compares how RACK-TLP and non-RACK-TLP loss detection interact with congestion control: suppose a sender has a congestion window (cwnd) of 20 segments on a SACK-enabled connection. It sends 10 data segments and all of them are lost.

Without RACK-TLP, the sender would time out, reset cwnd to 1, and retransmit the first segment. It would take four round trips (1 + 2 + 4 + 3 = 10) to retransmit all the 10 lost segments using slow start. The recovery latency would be RTO + 4*RTT, with an ending cwnd of 4 segments due to congestion window validation.

With RACK-TLP, a sender would send the TLP after 2*RTT and get a DUPACK, enabling RACK to detect the losses and trigger fast recovery.
If the sender implements Proportional Rate Reduction [RFC6937] it would slow start to retransmit the remaining 9 lost segments, since the number of segments in flight (0) is lower than the slow start threshold (10). The slow start would again take four round trips (1 + 2 + 4 + 2 = 9) to retransmit all the lost segments. The recovery latency would be 2*RTT + 4*RTT, with an ending cwnd set to the slow start threshold of 10 segments.

The difference in recovery latency (RTO + 4*RTT vs 6*RTT) can be significant if the RTT is much smaller than the minimum RTO (1 second in [RFC6298]) or if the RTT is large. The former case can happen in local area networks, data-center networks, or content distribution networks with deep deployments. The latter case can happen in developing regions with highly congested and/or high-latency networks.

8.4. TLP recovery detection with delayed ACKs

Delayed ACKs complicate the detection of repairs done by TLP, since with a delayed ACK the sender receives one fewer ACK than would normally be expected. To mitigate this complication, before sending a TLP loss probe retransmission, the sender should attempt to wait long enough that the receiver has sent any delayed ACKs that it is withholding. The sender algorithm described above features such a delay, in the form of TLP.max_ack_delay. Furthermore, if the receiver supports DSACK then in the case of a delayed ACK the sender's TLP recovery detection mechanism (see above) can use the DSACK information to infer that the original and TLP retransmission both arrived at the receiver.

If there is ACK loss or a delayed ACK without a DSACK, then this algorithm is conservative, because the sender will reduce the congestion window when in fact there was no packet loss.
In practice this is acceptable, and potentially even desirable: if there is reverse path congestion then reducing the congestion window can be prudent.

8.5. RACK for other transport protocols

RACK can be implemented in other transport protocols (e.g., [QUIC-LR]). The [Sprout] loss detection algorithm was also independently designed to use a 10 ms reordering window to improve its loss detection.

9. Security Considerations

RACK-TLP algorithm behavior is based on information conveyed in SACK options, so it has security considerations similar to those described in the Security Considerations section of [RFC6675].

Additionally, RACK-TLP has a lower risk profile than [RFC6675] because it is not vulnerable to ACK-splitting attacks [SCWA99]: for an MSS-size segment sent, the receiver or the attacker might send MSS ACKs that SACK or acknowledge one additional byte per ACK. This would not fool RACK. In such a scenario, RACK.xmit_ts would not advance, because all the sequence ranges within the segment were transmitted at the same time, and thus carry the same transmission timestamp. In other words, SACKing only one byte of a segment or SACKing the segment in its entirety has the same effect with RACK.

10. IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an RFC.

11. Acknowledgments

The authors thank Matt Mathis for his insights in FACK and Michael Welzl for his per-packet timer idea that inspired this work.
   Eric Dumazet, Randy Stewart, Van Jacobson, Ian Swett, Rick Jones,
   Jana Iyengar, Hiren Panchasara, Praveen Balasubramanian, Yoshifumi
   Nishida, Bob Briscoe, Felix Weinrank, Michael Tuexen, Martin Duke,
   Ilpo Jarvinen, Theresa Enghardt, Mirja Kuehlewind, Gorry Fairhurst,
   and Yi Huang contributed to the draft or the implementations in
   Linux, FreeBSD, Windows, and QUIC.

12.  References

12.1.  Normative References

   [RFC2018]  Mathis, M. and J. Mahdavi, "TCP Selective Acknowledgment
              Options", RFC 2018, October 1996.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", RFC 2119, March 1997.

   [RFC2883]  Floyd, S., Mahdavi, J., Mathis, M., and M. Podolsky, "An
              Extension to the Selective Acknowledgement (SACK) Option
              for TCP", RFC 2883, July 2000.

   [RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
              S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
              November 2006.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, September 2009.

   [RFC5682]  Sarolahti, P., Kojo, M., Yamamoto, K., and M. Hata,
              "Forward RTO-Recovery (F-RTO): An Algorithm for Detecting
              Spurious Retransmission Timeouts with TCP", RFC 5682,
              September 2009.

   [RFC5827]  Allman, M., Ayesta, U., Wang, L., Blanton, J., and P.
              Hurtig, "Early Retransmit for TCP and Stream Control
              Transmission Protocol (SCTP)", RFC 5827, April 2010.

   [RFC6298]  Paxson, V., Allman, M., Chu, J., and M. Sargent,
              "Computing TCP's Retransmission Timer", RFC 6298, June
              2011.

   [RFC6675]  Blanton, E., Allman, M., Wang, L., Jarvinen, I., Kojo,
              M., and Y. Nishida, "A Conservative Loss Recovery
              Algorithm Based on Selective Acknowledgment (SACK) for
              TCP", RFC 6675, August 2012.

   [RFC6937]  Mathis, M., Dukkipati, N., and Y. Cheng, "Proportional
              Rate Reduction for TCP", RFC 6937, May 2013.
   [RFC7323]  Borman, D., Braden, B., Jacobson, V., and R.
              Scheffenegger, "TCP Extensions for High Performance",
              RFC 7323, September 2014.

   [RFC793]   Postel, J., "Transmission Control Protocol", RFC 793,
              September 1981.

12.2.  Informative References

   [DMCG11]   Dukkipati, N., Mathis, M., Cheng, Y., and M. Ghobadi,
              "Proportional Rate Reduction for TCP", ACM Internet
              Measurement Conference (IMC), 2011.

   [FACK]     Mathis, M. and J. Mahdavi, "Forward Acknowledgment:
              Refining TCP Congestion Control", ACM SIGCOMM Computer
              Communication Review, Volume 26, Issue 4, October 1996.

   [POLICER16]
              Flach, T., Papageorge, P., Terzis, A., Pedrosa, L.,
              Cheng, Y., Karim, T., Katz-Bassett, E., and R. Govindan,
              "An Analysis of Traffic Policing in the Web", ACM
              SIGCOMM, 2016.

   [QUIC-LR]  Iyengar, J. and I. Swett, "QUIC Loss Detection and
              Congestion Control", draft-ietf-quic-recovery (work in
              progress), March 2020.

   [RACK-TCPM97]
              Cheng, Y., "RACK: a time-based fast loss recovery",
              IETF 97 TCPM meeting, 2016.

   [RFC7765]  Hurtig, P., Brunstrom, A., Petlund, A., and M. Welzl,
              "TCP and SCTP RTO Restart", RFC 7765, February 2016.

   [SCWA99]   Savage, S., Cardwell, N., Wetherall, D., and T. Anderson,
              "TCP Congestion Control with a Misbehaving Receiver", ACM
              Computer Communication Review, 29(5), 1999.

   [Sprout]   Winstein, K., Sivaraman, A., and H. Balakrishnan,
              "Stochastic Forecasts Achieve High Throughput and Low
              Delay over Cellular Networks", USENIX Symposium on
              Networked Systems Design and Implementation (NSDI), 2013.

   [TLP]      Dukkipati, N., Cardwell, N., Cheng, Y., and M. Mathis,
              "Tail Loss Probe (TLP): An Algorithm for Fast Recovery of
              Tail Drops", draft-dukkipati-tcpm-tcp-loss-probe-01 (work
              in progress), August 2013.
Authors' Addresses

   Yuchung Cheng
   Google, Inc

   Email: ycheng@google.com

   Neal Cardwell
   Google, Inc

   Email: ncardwell@google.com

   Nandita Dukkipati
   Google, Inc

   Email: nanditad@google.com

   Priyaranjan Jha
   Google, Inc

   Email: priyarjha@google.com