2 TCP Maintenance Working Group Y. Cheng 3 Internet-Draft N. Cardwell 4 Intended status: Standards Track N. Dukkipati 5 Expires: May 6, 2021 P. Jha 6 Google, Inc 7 November 2, 2020 9 The RACK-TLP loss detection algorithm for TCP 10 draft-ietf-tcpm-rack-13 12 Abstract 14 This document presents the RACK-TLP loss detection algorithm for TCP. 15 RACK-TLP uses per-segment transmit timestamps and selective 16 acknowledgements (SACK) and has two parts: RACK ("Recent 17 ACKnowledgment") starts fast recovery quickly using time-based 18 inferences derived from ACK feedback. TLP ("Tail Loss Probe") 19 leverages RACK and sends a probe packet to trigger ACK feedback to 20 avoid retransmission timeout (RTO) events. Compared to the widely 21 used DUPACK threshold approach, RACK-TLP detects losses more 22 efficiently when there are application-limited flights of data, lost 23 retransmissions, or data packet reordering events. It is intended to 24 be an alternative to the DUPACK threshold approach. 26 Status of This Memo 28 This Internet-Draft is submitted in full conformance with the 29 provisions of BCP 78 and BCP 79. 31 Internet-Drafts are working documents of the Internet Engineering 32 Task Force (IETF). Note that other groups may also distribute 33 working documents as Internet-Drafts. The list of current Internet- 34 Drafts is at https://datatracker.ietf.org/drafts/current/. 36 Internet-Drafts are draft documents valid for a maximum of six months 37 and may be updated, replaced, or obsoleted by other documents at any 38 time. It is inappropriate to use Internet-Drafts as reference 39 material or to cite them other than as "work in progress." 41 This Internet-Draft will expire on May 6, 2021. 43 Copyright Notice 45 Copyright (c) 2020 IETF Trust and the persons identified as the 46 document authors. All rights reserved.
48 This document is subject to BCP 78 and the IETF Trust's Legal 49 Provisions Relating to IETF Documents 50 (https://trustee.ietf.org/license-info) in effect on the date of 51 publication of this document. Please review these documents 52 carefully, as they describe your rights and restrictions with respect 53 to this document. Code Components extracted from this document must 54 include Simplified BSD License text as described in Section 4.e of 55 the Trust Legal Provisions and are provided without warranty as 56 described in the Simplified BSD License. 58 Table of Contents 60 1. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 3 61 2. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 62 2.1. Background . . . . . . . . . . . . . . . . . . . . . . . 3 63 2.2. Motivation . . . . . . . . . . . . . . . . . . . . . . . 4 64 3. RACK-TLP high-level design . . . . . . . . . . . . . . . . . 5 65 3.1. RACK: time-based loss inferences from ACKs . . . . . . . 5 66 3.2. TLP: sending one segment to probe losses quickly with 67 RACK . . . . . . . . . . . . . . . . . . . . . . . . . . 6 68 3.3. RACK-TLP: reordering resilience with a time threshold . . 6 69 3.3.1. Reordering design rationale . . . . . . . . . . . . . 6 70 3.3.2. Reordering window adaptation . . . . . . . . . . . . 8 71 3.4. An Example of RACK-TLP in Action: fast recovery . . . . . 9 72 3.5. An Example of RACK-TLP in Action: RTO . . . . . . . . . . 10 73 3.6. Design Summary . . . . . . . . . . . . . . . . . . . . . 10 74 4. Requirements . . . . . . . . . . . . . . . . . . . . . . . . 11 75 5. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 11 76 5.1. Per-segment variables . . . . . . . . . . . . . . . . . . 11 77 5.2. Per-connection variables . . . . . . . . . . . . . . . . 12 78 6. RACK Algorithm Details . . . . . . . . . . . . . . . . . . . 13 79 6.1. Upon transmitting a data segment . . . . . . . . . . . . 13 80 6.2. Upon receiving an ACK . . . . . . . . . . . . . . . . . . 14 81 6.3. Upon RTO expiration . . . . . . . . . . . . . . . . . . . 19 82 7. TLP Algorithm Details . . . . . . . . . . . . . . . . . . . . 20 83 7.1. Initializing state . . . . . . . . . . . . . . . . . . . 20 84 7.2. Scheduling a loss probe . . . . . . . . . . . . . . . . . 20 85 7.3. Sending a loss probe upon PTO expiration . . . . . . . . 21 86 7.4. Detecting losses using the ACK of the loss probe . . . . 22 87 7.4.1. General case: detecting packet losses using RACK . . 22 88 7.4.2. Special case: detecting a single loss repaired by the 89 loss probe . . . . . . . . . . . . . . . . . . . . . 23 90 8. Managing RACK-TLP timers . . . . . . . . . . . . . . . . . . 24 91 9. Discussion . . . . . . . . . . . . . . . . . . . . . . . . . 24 92 9.1. Advantages and disadvantages . . . . . . . . . . . . . . 24 93 9.2. Relationships with other loss recovery algorithms . . . . 26 94 9.3. Interaction with congestion control . . . . . . . . . . . 26 95 9.4. TLP recovery detection with delayed ACKs . . . . . . . . 27 96 9.5. RACK for other transport protocols . . . . . . . . . . . 28 97 10. Security Considerations . . . . . . . . . . . . . . . . . . . 28 98 11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 28 99 12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 28 100 13. References . . . . . . . . . . . . . . . . . . . . . . . . . 29 101 13.1. Normative References . . . . . . . . . . . . . . . . . . 29 102 13.2. Informative References . . . . . . . . . . . . . . . . . 29 103 Authors' Addresses . . . . . . . 
. . . . . . . . . . . . . . . . 31 105 1. Terminology 107 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 108 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 109 "OPTIONAL" in this document are to be interpreted as described in BCP 110 14 [RFC2119] [RFC8174] when, and only when, they appear in all 111 capitals, as shown here. In this document, these words will appear 112 with that interpretation only when in UPPER CASE. Lower case uses of 113 these words are not to be interpreted as carrying [RFC2119] 114 significance. 116 2. Introduction 118 This document presents RACK-TLP, a TCP loss detection algorithm that 119 improves upon the widely implemented DUPACK counting approach in 120 [RFC5681][RFC6675], and that is RECOMMENDED to be used as an 121 alternative to that earlier approach. RACK-TLP has two parts: RACK 122 ("Recent ACKnowledgment") detects losses quickly using time-based 123 inferences derived from ACK feedback. TLP ("Tail Loss Probe") 124 triggers ACK feedback by quickly sending a probe segment, to avoid 125 retransmission timeout (RTO) events. 127 2.1. Background 129 In traditional TCP loss recovery algorithms [RFC5681][RFC6675], a 130 sender starts fast recovery when the number of DUPACKs received 131 reaches a threshold (DupThresh) that defaults to 3 (this approach is 132 referred to as DUPACK-counting in the rest of the document). The 133 sender also halves the congestion window during the recovery. The 134 rationale behind the partial window reduction is that congestion does 135 not seem severe since ACK clocking is still maintained. The time 136 elapsed in fast recovery can be just one round-trip, e.g. if the 137 sender uses SACK-based recovery [RFC6675] and the number of lost 138 segments is small. 140 If fast recovery is not triggered, or triggers but fails to repair 141 all the losses, then the sender resorts to RTO recovery. The RTO 142 timer interval is conservatively the smoothed RTT (SRTT) plus four 143 times the RTT variation, and is lower bounded to 1 second [RFC6298]. 145 Upon RTO timer expiration, the sender retransmits the first 146 unacknowledged segment and resets the congestion window to the LOSS 147 WINDOW value (by default 1 full-size segment [RFC5681]). The 148 rationale behind the congestion window reset is that an entire flight 149 of data was lost, and the ACK clock was lost, so this deserves a 150 cautious response. The sender then retransmits the rest of the data 151 following the slow start algorithm [RFC5681]. The time elapsed in 152 RTO recovery is one RTO interval plus the number of round-trips 153 needed to repair all the losses. 155 2.2. Motivation 157 Fast Recovery is the preferred form of loss recovery because it can 158 potentially recover all losses in the time scale of a single round 159 trip, with only a fractional congestion window reduction. RTO 160 recovery and congestion window reset should ideally be the last 161 resort, only used when the entire flight is lost. However, in 162 addition to losing an entire flight of data, the following situations 163 can unnecessarily resort to RTO recovery with traditional TCP loss 164 recovery algorithms [RFC5681][RFC6675]: 166 1. Packet drops for short flows or at the end of an application data 167 flight. When the sender is limited by the application (e.g. 168 structured request/response traffic), segments lost at the end of 169 the application data transfer often can only be recovered by RTO. 
170 Consider an example of losing only the last segment in a flight 171 of 100 segments. Lacking any DUPACK, the sender RTO expires and 172 reduces the congestion window to 1, and raises the congestion 173 window to just 2 after the loss repair is acknowledged. In 174 contrast, any single segment loss occurring between the first and 175 the 97th segment would result in fast recovery, which would only 176 cut the window in half. 178 2. Lost retransmissions. Heavy congestion or traffic policers can 179 cause retransmissions to be lost. Lost retransmissions cause a 180 resort to RTO recovery, since DUPACK-counting does not detect the 181 loss of the retransmissions. Then the slow start after RTO 182 recovery could cause burst losses again that severely degrades 183 performance [POLICER16]. 185 3. Packet reordering. Link-layer protocols (e.g., 802.11 block 186 ACK), link bonding, or routers' internal load-balancing (e.g., 187 ECMP) can deliver TCP segments out of order. The degree of such 188 reordering is usually within the order of the path round trip 189 time. If the reordering degree is beyond DupThresh, the DUPACK- 190 counting can cause a spurious fast recovery and unnecessary 191 congestion window reduction. To mitigate the issue, [RFC4653] 192 adjusts DupThresh to half of the inflight size to tolerate the 193 higher degree of reordering. However if more than half of the 194 inflight is lost, then the sender has to resort to RTO recovery. 196 3. RACK-TLP high-level design 198 RACK-TLP allows senders to recover losses more effectively in all 199 three scenarios described in the previous section. There are two 200 design principles behind RACK-TLP. The first principle is to detect 201 losses via ACK events as much as possible, to repair losses at round- 202 trip time-scales. The second principle is to gently probe the 203 network to solicit additional ACK feedback, to avoid RTO expiration 204 and subsequent congestion window reset. At a high level, the two 205 principles are implemented in RACK and TLP, respectively. 207 3.1. RACK: time-based loss inferences from ACKs 209 The rationale behind RACK is that if a segment is delivered out of 210 order, then the segments sent chronologically before that were either 211 lost or reordered. This concept is not fundamentally different from 212 [RFC5681][RFC6675][FACK]. RACK's key innovation is using per-segment 213 transmission timestamps and widely-deployed SACK [RFC2018] options to 214 conduct time-based inferences, instead of inferring losses by 215 counting ACKs or SACKed sequences. Time-based inferences are more 216 robust than DUPACK-counting approaches because they have no 217 dependence on flight size, and thus are effective for application- 218 limited traffic. 220 Conceptually, RACK puts a virtual timer for every data segment sent 221 (including retransmissions). Each timer expires dynamically based on 222 the latest RTT measurements plus an additional delay budget to 223 accommodate potential packet reordering (called the reordering 224 window). When a segment's timer expires, RACK marks the 225 corresponding segment lost for retransmission. 227 In reality, as an algorithm, RACK does not arm a timer for every 228 segment sent because it's not necessary. Instead the sender records 229 the most recent transmission time of every data segment sent, 230 including retransmissions. 
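As a conceptual illustration only (the normative procedures are specified in Sections 6 and 7), the per-segment "virtual timer" described above amounts to a deadline derived from the segment's latest (re)transmission time, the latest RTT estimate, and the reordering window. The helper names in the following Python sketch are hypothetical and not part of this specification:

   import time

   def rack_deadline(xmit_ts, rtt_estimate, reo_wnd):
       # Conceptual expiry time of a segment's "virtual timer": its
       # latest (re)transmission time plus the most recent RTT estimate
       # plus the reordering-window allowance.
       return xmit_ts + rtt_estimate + reo_wnd

   def deadline_has_passed(xmit_ts, rtt_estimate, reo_wnd, now=None):
       # The full algorithm (Section 6.2, Step 5) additionally requires
       # that some segment sent after this one has already been
       # delivered before an undelivered segment is marked lost.
       now = time.monotonic() if now is None else now
       return now >= rack_deadline(xmit_ts, rtt_estimate, reo_wnd)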
For each ACK received, the sender 231 calculates the latest RTT measurement (if eligible) and adjusts the 232 expiration time of every segment sent but not yet delivered. If a 233 segment has expired, RACK marks it lost. 235 Since the time-based logic of RACK applies equally to retransmissions 236 and original transmissions, it can detect lost retransmissions as 237 well. If a segment has been retransmitted but its most recent 238 (re)transmission timestamp has expired, then after a reordering 239 window it's marked lost. 241 3.2. TLP: sending one segment to probe losses quickly with RACK 243 RACK infers losses from ACK feedback; however, in some cases ACKs are 244 sparse, particularly when the inflight is small or when the losses 245 are high. In some challenging cases the last few segments in a 246 flight are lost. With [RFC5681] or [RFC6675] the sender's RTO would 247 expire and reset the congestion window, when in reality most of the 248 flight has been delivered. 250 Consider an example where a sender with a large congestion window 251 transmits 100 new data segments after an application write, and only 252 the last three segments are lost. Without RACK-TLP, the RTO expires, 253 the sender retransmits the first unacknowledged segment, and the 254 congestion window slow-starts from 1. After all the retransmits are 255 acknowledged the congestion window has been increased to 4. The 256 total delivery time for this application transfer is three RTTs plus 257 one RTO, a steep cost given that only a tiny fraction of the flight 258 was lost. If instead the losses had occurred three segments sooner 259 in the flight, then fast recovery would have recovered all losses 260 within one round-trip and would have avoided resetting the congestion 261 window. 263 Fast Recovery would be preferable in such scenarios; TLP is designed 264 to trigger the feedback RACK needed to enable that. After the last 265 (100th) segment was originally sent, TLP sends the next available 266 (new) segment or retransmits the last (highest-sequenced) segment in 267 two round-trips to probe the network, hence the name "Tail Loss 268 Probe". The successful delivery of the probe would solicit an ACK. 269 RACK uses this ACK to detect that the 98th and 99th segments were 270 lost, trigger fast recovery, and retransmit both successfully. The 271 total recovery time is four RTTs, and the congestion window is only 272 partially reduced instead of being fully reset. If the probe was 273 also lost then the sender would invoke RTO recovery resetting the 274 congestion window. 276 3.3. RACK-TLP: reordering resilience with a time threshold 278 3.3.1. Reordering design rationale 280 Upon receiving an ACK indicating an out-of-order data delivery, a 281 sender cannot tell immediately whether that out-of-order delivery was 282 a result of reordering or loss. It can only distinguish between the 283 two in hindsight if the missing sequence ranges are filled in later 284 without retransmission. Thus a loss detection algorithm needs to 285 budget some wait time -- a reordering window -- to try to 286 disambiguate packet reordering from packet loss. 288 The reordering window in the DUPACK-counting approach is implicitly 289 defined as the elapsed time to receive acknowledgements for 290 DupThresh-worth of out-of-order deliveries. This approach is 291 effective if the network reordering degree (in sequence distance) is 292 smaller than DupThresh and at least DupThresh segments after the loss 293 are acknowledged. 
For cases where the reordering degree is larger 294 than the default DupThresh of 3 packets, one alternative is to 295 dynamically adapt DupThresh based on the FlightSize (e.g., the sender 296 adjusts the DUPTRESH to half of the FlightSize). However, this does 297 not work well with the following two types of reordering: 299 1. Application-limited flights where the last non-full-sized segment 300 is delivered first and then the remaining full-sized segments in 301 the flight are delivered in order. This reordering pattern can 302 occur when segments traverse parallel forwarding paths. In such 303 scenarios the degree of reordering in packet distance is one 304 segment less than the flight size. 306 2. A flight of segments that are delivered partially out of order. 307 One cause for this pattern is wireless link-layer retransmissions 308 with an inadequate reordering buffer at the receiver. In such 309 scenarios, the wireless sender sends the data packets in order 310 initially, but some are lost and then recovered by link-layer 311 retransmissions; the wireless receiver delivers the TCP data 312 packets in the order they are received, due to the inadequate 313 reordering buffer. The random wireless transmission errors in 314 such scenarios cause the reordering degree, expressed in packet 315 distance, to have highly variable values up to the flight size. 317 In the above two cases the degree of reordering in packet distance is 318 highly variable, making DUPACK-counting approach ineffective 319 including dynamic adaptation variants like [RFC4653]. Instead the 320 degree of reordering in time difference in such cases is usually 321 within a single round-trip time. This is because the packets either 322 traverse slightly disjoint paths with similar propagation delays or 323 are repaired quickly by the local access technology. Hence, using a 324 time threshold instead of packet threshold strikes a middle ground, 325 allowing a bounded degree of reordering resilience while still 326 allowing fast recovery. This is the rationale behind the RACK-TLP 327 reordering resilience design. 329 Specifically, RACK-TLP introduces a new dynamic reordering window 330 parameter in time units, and the sender considers a data segment S 331 lost if both conditions are met: 333 1. Another data segment sent later than S has been delivered 334 2. S has not been delivered after the estimated round-trip time plus 335 the reordering window 337 Note that condition (1) implies at least one round-trip of time has 338 elapsed since S has been sent. 340 3.3.2. Reordering window adaptation 342 The RACK reordering window adapts to the measured duration of 343 reordering events, within reasonable and specific bounds to 344 disincentivize excessive reordering. More specifically, the sender 345 sets the reordering window as follows: 347 1. The reordering window SHOULD be set to zero if no reordering has 348 been observed on the connection so far, and either (a) three 349 segments have been delivered out of order since the last recovery 350 or (b) the sender is already in fast or RTO recovery. Otherwise, 351 the reordering window SHOULD start from a small fraction of the 352 round trip time, or zero if no round trip time estimate is 353 available. 355 2. The RACK reordering window SHOULD adaptively increase (using the 356 algorithm in "Step 4: Update RACK reordering window", below) if 357 the sender receives a Duplicate Selective Acknowledgement (DSACK) 358 option [RFC2883]. 
Receiving a DSACK suggests the sender made a 359 spurious retransmission, which may have been due to the 360 reordering window being too small. 362 3. The RACK reordering window MUST be bounded and this bound SHOULD 363 be SRTT. 365 Rules 2 and 3 are required to adapt to reordering caused by dynamics 366 such as the prolonged link-layer loss recovery episodes described 367 earlier. Each increase in the reordering window requires a new round 368 trip where the sender receives a DSACK; thus, depending on the extent 369 of reordering, it may take multiple round trips to fully adapt. 371 For short flows, the low initial reordering window helps recover 372 losses quickly, at the risk of spurious retransmissions. The 373 rationale is that spurious retransmissions for short flows are not 374 expected to produce excessive additional network traffic. For long 375 flows the design tolerates reordering within a round trip. This 376 handles reordering in small time scales (reordering within the round- 377 trip time of the shortest path). 379 However, the fact that the initial reordering window is low, and the 380 reordering window's adaptive growth is bounded, means that there will 381 continue to be a cost to reordering that disincentivizes excessive 382 reordering. 384 3.4. An Example of RACK-TLP in Action: fast recovery 386 The following example in Figure 1 illustrates the RACK-TLP algorithm 387 in action: 389 Event TCP DATA SENDER TCP DATA RECEIVER 390 _____ ____________________________________________________________ 391 1. Send P0, P1, P2, P3 --> 392 [P1, P2, P3 dropped by network] 394 2. <-- Receive P0, ACK P0 396 3a. 2RTTs after (2), TLP timer fires 397 3b. TLP: retransmits P3 --> 399 4. <-- Receive P3, SACK P3 401 5a. Receive SACK for P3 402 5b. RACK: marks P1, P2 lost 403 5c. Retransmit P1, P2 --> 404 [P1 retransmission dropped by network] 406 6. <-- Receive P2, SACK P2 & P3 408 7a. RACK: marks P1 retransmission lost 409 7b. Retransmit P1 --> 411 8. <-- Receive P1, ACK P3 413 Figure 1. RACK-TLP protocol example 415 Figure 1, above, illustrates a sender sending four segments (P0, P1, 416 P2, P3) and losing the last three segments. After two round-trips, 417 TLP sends a loss probe, retransmitting the last segment, P3, to 418 solicit SACK feedback and restore the ACK clock (event 3). The 419 delivery of P3 enables RACK to infer (event 5b) that P1 and P2 were 420 likely lost, because they were sent before P3. The sender then 421 retransmits P1 and P2. Unfortunately, the retransmission of P1 is 422 lost again. However, the delivery of the retransmission of P2 allows 423 RACK to infer that the retransmission of P1 was likely lost (event 424 7a), and hence P1 should be retransmitted (event 7b). 426 3.5. An Example of RACK-TLP in Action: RTO 428 In addition to enhancing fast recovery, RACK improves the accuracy of 429 RTO recovery by reducing spurious retransmissions. 431 Without RACK, upon RTO timer expiration the sender marks all the 432 unacknowledged segments lost. This approach can lead to spurious 433 retransmissions. For example, consider a simple case where one 434 segment was sent with an RTO of 1 second, and then the application 435 writes more data, causing a second and third segment to be sent right 436 before the RTO of the first segment expires. Suppose only the first 437 segment is lost. Without RACK, upon RTO expiration the sender marks 438 all three segments as lost and retransmits the first segment.
When 439 the sender receives the ACK that selectively acknowledges the second 440 segment, the sender spuriously retransmits the third segment. 442 With RACK, upon RTO timer expiration the only segment automatically 443 marked lost is the first segment (since it was sent an RTO ago); for 444 all the other segments RACK only marks the segment lost if at least 445 one round trip has elapsed since the segment was transmitted. 446 Consider the previous example scenario, this time with RACK. With 447 RACK, when the RTO expires the sender only marks the first segment as 448 lost, and retransmits that segment. The other two very recently sent 449 segments are not marked lost, because they were sent less than one 450 round trip ago and there were no ACKs providing evidence that they 451 were lost. When the sender receives the ACK that selectively 452 acknowledges the second segment, the sender would not retransmit the 453 third segment but rather would send any new segments (if allowed by 454 congestion window and receive window). 456 In the above example, if the sender were to send a large burst of 457 segments instead of two segments right before RTO, without RACK the 458 sender may spuriously retransmit almost the entire flight. Note that 459 the Eifel protocol [RFC3522] cannot prevent this issue because it can 460 only detect spurious RTO episodes. In this example the RTO itself 461 was not spurious. 463 3.6. Design Summary 465 To summarize, RACK-TLP aims to adapt to small time-varying degrees of 466 reordering, quickly recover most losses within one to two round 467 trips, and avoid costly RTO recoveries. In the presence of 468 reordering, the adaptation algorithm can impose sometimes-needless 469 delays when it waits to disambiguate loss from reordering, but the 470 penalty for waiting is bounded to one round trip and such delays are 471 confined to flows long enough to have observed reordering. 473 4. Requirements 475 The reader is expected to be familiar with the definitions given in 476 the TCP congestion control [RFC5681] and selective acknowledgment 477 [RFC2018][RFC6675] RFCs. RACK-TLP has the following requirements: 479 1. The connection MUST use selective acknowledgment (SACK) options 480 [RFC2018], and the sender MUST keep SACK scoreboard information 481 on a per-connection basis ("SACK scoreboard" has the same meaning 482 here as in [RFC6675] section 3). 484 2. For each data segment sent, the sender MUST store its most recent 485 transmission time with a timestamp whose granularity that is 486 finer than 1/4 of the minimum RTT of the connection. At the time 487 of writing, microsecond resolution is suitable for intra- 488 datacenter traffic and millisecond granularity or finer is 489 suitable for the Internet. Note that RACK-TLP can be implemented 490 with TSO (TCP Segmentation Offload) support by having multiple 491 segments in a TSO aggregate share the same timestamp. 493 3. RACK DSACK-based reordering window adaptation is RECOMMENDED but 494 is not required. 496 4. TLP requires RACK. 498 5. Definitions 500 The reader is expected to be familiar with the variables of SND.UNA, 501 SND.NXT, SEG.ACK, and SEG.SEQ in [RFC793], SMSS, FlightSize in 502 [RFC5681], DupThresh in [RFC6675], RTO and SRTT in [RFC6298]. A 503 RACK-TLP implementation needs to store new per-segment and per- 504 connection state, described below. 506 5.1. 
Per-segment variables 508 Theses variables indicate the status of the most recent transmission 509 of a data segment: 511 "Segment.lost" is true if the most recent (re)transmission of the 512 segment has been marked lost and needs to be retransmitted. False 513 otherwise. 515 "Segment.retransmitted" is true if the segment has ever been 516 retransmitted. False otherwise. 518 "Segment.xmit_ts" is the time of the last transmission of a data 519 segment, including retransmissions, if any, with a clock granularity 520 specified in the Requirements section. A maximum value INFINITE_TS 521 indicates an invalid timestamp that represents that the Segment is 522 not currently in flight. 524 "Segment.end_seq" is the next sequence number after the last sequence 525 number of the data segment. 527 5.2. Per-connection variables 529 "RACK.segment". Among all the segments that have been either 530 selectively or cumulatively acknowledged, RACK.segment is the one 531 that was sent most recently (including retransmissions). 533 "RACK.xmit_ts" is the latest transmission timestamp of RACK.segment. 535 "RACK.end_seq" is the Segment.end_seq of RACK.segment. 537 "RACK.ack_ts" is the time when the full sequence range of 538 RACK.segment was selectively or cumulatively acknowledged. 540 "RACK.segs_sacked" returns the total number of segments selectively 541 acknowledged in the SACK scoreboard. 543 "RACK.fack" is the highest selectively or cumulatively acknowledged 544 sequence (i.e. forward acknowledgement). 546 "RACK.min_RTT" is the estimated minimum round-trip time (RTT) of the 547 connection. 549 "RACK.rtt" is the RTT of the most recently delivered segment on the 550 connection (either cumulatively acknowledged or selectively 551 acknowledged) that was not marked invalid as a possible spurious 552 retransmission. 554 "RACK.reordering_seen" indicates whether the sender has detected data 555 segment reordering event(s). 557 "RACK.reo_wnd" is a reordering window computed in the unit of time 558 used for recording segment transmission times. It is used to defer 559 the moment at which RACK marks a segment lost. 561 "RACK.dsack" indicates if a DSACK option has been received since the 562 last RACK.reo_wnd change. 564 "RACK.reo_wnd_mult" is the multiplier applied to adjust RACK.reo_wnd. 566 "RACK.reo_wnd_persist" is the number of loss recoveries before 567 resetting RACK.reo_wnd. 569 "RACK.rtt_seq" is the SND.NXT when RACK.rtt is updated. 571 "TLP.is_retrans": a boolean indicating whether there is an 572 unacknowledged TLP retransmission. 574 "TLP.end_seq": the value of SND.NXT at the time of sending a TLP 575 retransmission. 577 "TLP.max_ack_delay": sender's maximum delayed ACK timer budget. 579 Per-connection timers 581 "RACK reordering timer": a timer that allows RACK to wait for 582 reordering to resolve, to try to disambiguate reordering from loss, 583 when some out-of-order segments are marked as SACKed. 585 "TLP PTO": a timer event indicating that an ACK is overdue and the 586 sender should transmit a TLP segment, to solicit SACK or ACK 587 feedback. 589 These timers augment the existing timers maintained by a sender, 590 including the RTO timer [RFC6298]. A RACK-TLP sender arms one of 591 these three timers -- RACK reordering timer, TLP PTO timer, or RTO 592 timer -- when it has unacknowledged segments in flight. 
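As a rough, non-normative sketch (Python is used here purely for illustration), an implementation might group the variables defined in this section along the following lines, alongside whichever of the three timers above is currently armed. The field names mirror the definitions in this section, and the defaults shown are merely plausible starting values:

   from dataclasses import dataclass
   from typing import Optional

   INFINITE_TS = float("inf")   # Section 5.1: segment not currently in flight

   @dataclass
   class SegmentState:          # per-segment variables (Section 5.1)
       end_seq: int
       xmit_ts: float = INFINITE_TS
       lost: bool = False
       retransmitted: bool = False

   @dataclass
   class RackState:             # per-connection RACK variables (Section 5.2)
       xmit_ts: float = 0.0
       end_seq: int = 0
       ack_ts: float = 0.0
       segs_sacked: int = 0
       fack: int = 0
       min_rtt: float = INFINITE_TS
       rtt: float = 0.0
       rtt_seq: int = 0
       reo_wnd: float = 0.0
       reo_wnd_mult: int = 1
       reo_wnd_persist: int = 16
       dsack: bool = False
       reordering_seen: bool = False

   @dataclass
   class TlpState:              # per-connection TLP variables (Section 5.2)
       end_seq: Optional[int] = None
       is_retrans: bool = False
       max_ack_delay: float = 0.0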
The 593 implementation can simplify managing all three timers by multiplexing 594 a single timer among them with an additional variable to indicate the 595 event to invoke upon the next timer expiration. 597 6. RACK Algorithm Details 599 6.1. Upon transmitting a data segment 601 Upon transmitting a new segment or retransmitting an old segment, 602 record the time in Segment.xmit_ts and set Segment.lost to FALSE. 603 Upon retransmitting a segment, set Segment.retransmitted to TRUE. 605 RACK_transmit_new_data(Segment): 606 Segment.xmit_ts = Now() 607 Segment.lost = FALSE 609 RACK_retransmit_data(Segment): 610 Segment.retransmitted = TRUE 611 Segment.xmit_ts = Now() 612 Segment.lost = FALSE 614 6.2. Upon receiving an ACK 616 Step 1: Update RACK.min_RTT. 618 Use the RTT measurements obtained via [RFC6298] or [RFC7323] to 619 update the estimated minimum RTT in RACK.min_RTT. The sender SHOULD 620 track a windowed min-filtered estimate of recent RTT measurements 621 that can adapt when migrating to significantly longer paths, rather 622 than a simple global minimum of all RTT measurements. 624 Step 2: Update state for most recently sent segment that has been 625 delivered 627 In this step, RACK updates the states that track the most recently 628 sent segment that has been delivered: RACK.segment; RACK maintains 629 its latest transmission timestamp in RACK.xmit_ts and its highest 630 sequence number in RACK.end_seq. These two variables are used, in 631 later steps, to estimate if some segments not yet delivered were 632 likely lost. Given the information provided in an ACK, each segment 633 cumulatively ACKed or SACKed is marked as delivered in the 634 scoreboard. Since an ACK can also acknowledge retransmitted data 635 segments, and retransmissions can be spurious, the sender needs to 636 take care to avoid spurious inferences. For example, if the sender 637 were to use timing information from a spurious retransmission, the 638 RACK.rtt could be vastly underestimated. 640 To avoid spurious inferences, ignore a segment as invalid if any of 641 its sequence range has been retransmitted before and either of two 642 conditions is true: 644 1. The Timestamp Echo Reply field (TSecr) of the ACK's timestamp 645 option [RFC7323], if available, indicates the ACK was not 646 acknowledging the last retransmission of the segment. 648 2. The segment was last retransmitted less than RACK.min_rtt ago. 650 The second check is a heuristic when the TCP Timestamp option is not 651 available, or when the round trip time is less than the TCP Timestamp 652 clock granularity. 654 Among all the segments newly ACKed or SACKed by this ACK that pass 655 the checks above, update the RACK.rtt to be the RTT sample calculated 656 using this ACK. Furthermore, record the most recent Segment.xmit_ts 657 in RACK.xmit_ts if it is ahead of RACK.xmit_ts. If Segment.xmit_ts 658 equals RACK.xmit_ts (e.g. due to clock granularity limits) then 659 compare Segment.end_seq and RACK.end_seq to break the tie. 
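To make the hazard concrete with purely hypothetical numbers: suppose a segment is first sent at t = 0 ms, spuriously retransmitted at t = 100 ms, and the ACK elicited by the original copy arrives at t = 110 ms. Measuring from the retransmission would yield a 10 ms sample instead of the true 110 ms, which is exactly what checks (1) and (2) above are designed to discard. A minimal sketch of check (2), for illustration only:

   # Hypothetical timings in milliseconds, illustrating check (2) above.
   orig_xmit_ts = 0.0
   retrans_xmit_ts = 100.0
   ack_arrival = 110.0
   rack_min_rtt = 50.0          # assumed current RACK.min_RTT

   sample_rtt = ack_arrival - retrans_xmit_ts  # 10 ms, from the retransmission
   true_rtt = ack_arrival - orig_xmit_ts       # 110 ms, what the path actually took

   # A sample smaller than RACK.min_RTT for a retransmitted segment is
   # treated as invalid, so RACK.rtt is not updated from it.
   sample_is_valid = sample_rtt >= rack_min_rtt
   assert not sample_is_valid and true_rtt >= rack_min_rtt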
661 Step 2 may be summarized in pseudocode as: 663 RACK_sent_after(t1, seq1, t2, seq2): 664 If t1 > t2: 665 Return true 666 Else if t1 == t2 AND seq1 > seq2: 667 Return true 668 Else: 669 Return false 671 RACK_update(): 672 For each Segment newly acknowledged cumulatively or selectively: 673 rtt = Now() - Segment.xmit_ts 674 If Segment.retransmitted is TRUE: 675 If ACK.ts_option.echo_reply < Segment.xmit_ts: 676 Return 677 If rtt < RACK.min_rtt: 678 Return 680 RACK.rtt = rtt 681 If RACK_sent_after(Segment.xmit_ts, Segment.end_seq 682 RACK.xmit_ts, RACK.end_seq): 683 RACK.xmit_ts = Segment.xmit_ts 685 Step 3: Detect data segment reordering 687 To detect reordering, the sender looks for original data segments 688 being delivered out of order. To detect such cases, the sender 689 tracks the highest sequence selectively or cumulatively acknowledged 690 in the RACK.fack variable. The name "fack" stands for the most 691 "Forward ACK" (this term is adopted from [FACK]). If a never- 692 retransmitted segment that's below RACK.fack is (selectively or 693 cumulatively) acknowledged, it has been delivered out of order. The 694 sender sets RACK.reordering_seen to TRUE if such segment is 695 identified. 697 RACK_detect_reordering(): 698 For each Segment newly acknowledged cumulatively or selectively: 699 If Segment.end_seq > RACK.fack: 700 RACK.fack = Segment.end_seq 701 Else if Segment.end_seq < RACK.fack AND 702 Segment.retransmitted is FALSE: 703 RACK.reordering_seen = TRUE 705 Step 4: Update RACK reordering window 707 The RACK reordering window, RACK.reo_wnd, serves as an adaptive 708 allowance for settling time before marking a segment lost. This step 709 documents a detailed algorithm that follows the principles outlined 710 in the ``Reordering window adaptation'' section. 712 If no reordering has been observed, based on the previous step, then 713 one way the sender can enter Fast Recovery is when the number of 714 SACKed segments matches or exceeds DupThresh (similar to RFC6675). 715 Furthermore, when no reordering has been observed the RACK.reo_wnd is 716 set to 0 both upon entering and during Fast Recovery or RTO recovery. 718 Otherwise, if some reordering has been observed, then RACK does not 719 trigger Fast Recovery based on DupThresh. 721 Whether or not reordering has been observed, RACK uses the reordering 722 window to assess whether any segments can be marked lost. As a 723 consequence, the sender also enters Fast Recovery when there are any 724 number of SACKed segments as long as the reorder window has passed 725 for some non-SACKed segments. 727 When the reordering window is not set to 0, it starts with a 728 conservative RACK.reo_wnd of RACK.min_RTT/4. This value was chosen 729 because Linux TCP used the same factor in its implementation to delay 730 Early Retransmit [RFC5827] to reduce spurious loss detections in the 731 presence of reordering, and experience showed this worked reasonably 732 well [DMCG11]. 734 However, the reordering detection in the previous step, Step 3, has a 735 self-reinforcing drawback when the reordering window is too small to 736 cope with the actual reordering. When that happens, RACK could 737 spuriously mark reordered segments lost, causing them to be 738 retransmitted. In turn, the retransmissions can prevent the 739 necessary conditions for Step 3 to detect reordering, since this 740 mechanism requires ACKs or SACKs for only segments that have never 741 been retransmitted. 
In some cases such scenarios can persist, 742 causing RACK to continue to spuriously mark segments lost without 743 realizing the reordering window is too small. 745 To avoid the issue above, RACK dynamically adapts to higher degrees 746 of reordering using DSACK options from the receiver. Receiving an 747 ACK with a DSACK option indicates a possible spurious retransmission, 748 suggesting that RACK.reo_wnd may be too small. The RACK.reo_wnd 749 increases linearly for every round trip in which the sender receives 750 some DSACK option, so that after N distinct round trips in which a 751 DSACK is received, the RACK.reo_wnd becomes (N+1) * min_RTT / 4, with 752 an upper-bound of SRTT. 754 If the reordering is temporary then a large adapted reordering window 755 would unnecessarily delay loss recovery later. Therefore, RACK 756 persists using the inflated RACK.reo_wnd for up to 16 loss 757 recoveries, after which it resets RACK.reo_wnd to its starting value, 758 min_RTT / 4. The downside of resetting the reordering window is the 759 risk of triggering spurious fast recovery episodes if the reordering 760 remains high. The rationale for this approach is to bound such 761 spurious recoveries to approximately once every 16 recoveries (less 762 than 7%). 764 To track the linear scaling factor for the adaptive reordering 765 window, RACK uses the variable RACK.reo_wnd_mult, which is 766 initialized to 1 and adapts with observed reordering. 768 The following pseudocode implements the above algorithm for updating 769 the RACK reordering window: 771 RACK_update_reo_wnd(): 773 /* DSACK-based reordering window adaptation */ 774 If RACK.dsack_round is not None AND 775 SND.UNA >= RACK.dsack_round: 776 RACK.dsack_round = None 777 /* Grow the reordering window per round that sees DSACK. 778 Reset the window after 16 DSACK-free recoveries */ 779 If RACK.dsack_round is None AND 780 any DSACK option is present on latest received ACK: 781 RACK.dsack_round = SND.NXT 782 RACK.reo_wnd_mult += 1 783 RACK.reo_wnd_persist = 16 784 Else if exiting Fast or RTO recovery: 785 RACK.reo_wnd_persist -= 1 786 If RACK.reo_wnd_persist <= 0: 787 RACK.reo_wnd_mult = 1 789 If RACK.reordering_seen is FALSE: 790 If in Fast or RTO recovery: 791 Return 0 792 Else if RACK.segs_sacked >= DupThresh: 793 Return 0 794 Return min(RACK.min_RTT / 4 * RACK.reo_wnd_mult, SRTT) 796 Step 5: Detect losses. 798 For each segment that has not been SACKed, RACK considers that 799 segment lost if another segment that was sent later has been 800 delivered, and the reordering window has passed. RACK considers the 801 reordering window to have passed if the RACK.segment was sent 802 sufficiently after the segment in question, or a sufficient time has 803 elapsed since the RACK.segment was S/ACKed, or some combination of 804 the two. More precisely, RACK marks a segment lost if: 806 RACK.xmit_ts >= Segment.xmit_ts 807 AND 808 RACK.xmit_ts - Segment.xmit_ts + (now - RACK.ack_ts) >= RACK.reo_wnd 810 Solving this second condition for "now", the moment at which a 811 segment is marked lost, yields: 813 now >= Segment.xmit_ts + RACK.reo_wnd + (RACK.ack_ts - RACK.xmit_ts) 815 Then (RACK.ack_ts - RACK.xmit_ts) is the round trip time of the most 816 recently (re)transmitted segment that's been delivered. When 817 segments are delivered in order, the most recently (re)transmitted 818 segment that's been delivered is also the most recently delivered, 819 hence RACK.rtt == RACK.ack_ts - RACK.xmit_ts. 
But if segments were 820 reordered, then the segment delivered most recently was sent before 821 the most recently (re)transmitted segment. Hence RACK.rtt > 822 (RACK.ack_ts - RACK.xmit_ts). 824 Since RACK.RTT >= (RACK.ack_ts - RACK.xmit_ts), the previous equation 825 reduces to saying that the sender can declare a segment lost when: 827 now >= Segment.xmit_ts + RACK.reo_wnd + RACK.rtt 829 In turn, that is equivalent to stating that a RACK sender should 830 declare a segment lost when: 832 Segment.xmit_ts + RACK.rtt + RACK.reo_wnd - now <= 0 834 Note that if the value on the left hand side is positive, it 835 represents the remaining wait time before the segment is deemed lost. 836 But this risks a timeout (RTO) if no more ACKs come back (e.g., due 837 to losses or application-limited transmissions) to trigger the 838 marking. For timely loss detection, the sender is RECOMMENDED to 839 install a reordering timer. This timer expires at the earliest 840 moment when RACK would conclude that all the unacknowledged segments 841 within the reordering window were lost. 843 The following pseudocode implements the algorithm above. When an ACK 844 is received or the RACK reordering timer expires, call 845 RACK_detect_loss_and_arm_timer(). The algorithm breaks timestamp 846 ties by using the TCP sequence space, since high-speed networks often 847 have multiple segments with identical timestamps. 849 RACK_detect_loss(): 850 timeout = 0 851 RACK.reo_wnd = RACK_update_reo_wnd() 852 For each segment, Segment, not acknowledged yet: 853 If RACK_sent_after(RACK.xmit_ts, RACK.end_seq, 854 Segment.xmit_ts, Segment.end_seq): 855 remaining = Segment.xmit_ts + RACK.rtt + 856 RACK.reo_wnd - Now() 857 If remaining <= 0: 858 Segment.lost = TRUE 859 Segment.xmit_ts = INFINITE_TS 860 Else: 861 timeout = max(remaining, timeout) 862 Return timeout 864 RACK_detect_loss_and_arm_timer(): 865 timeout = RACK_detect_loss() 866 If timeout != 0 867 Arm the RACK timer to call 868 RACK_detect_loss_and_arm_timer() after timeout 870 As an optimization, an implementation can choose to check only 871 segments that have been sent before RACK.xmit_ts. This can be more 872 efficient than scanning the entire SACK scoreboard, especially when 873 there are many segments in flight. The implementation can use a 874 separate doubly-linked list ordered by Segment.xmit_ts and inserts a 875 segment at the tail of the list when it is (re)transmitted, and 876 removes a segment from the list when it is delivered or marked lost. 877 In Linux TCP this optimization improved CPU usage by orders of 878 magnitude during some fast recovery episodes on high-speed WAN 879 networks. 881 6.3. Upon RTO expiration 883 Upon RTO timer expiration, RACK marks the first outstanding segment 884 as lost (since it was sent an RTO ago); for all the other segments 885 RACK only marks the segment lost if the time elapsed since the 886 segment was transmitted is at least the sum of the recent RTT and the 887 reordering window. 889 RACK_mark_losses_on_RTO(): 890 For each segment, Segment, not acknowledged yet: 891 If SEG.SEQ == SND.UNA OR 892 Segment.xmit_ts + RACK.rtt + RACK.reo_wnd - Now() <= 0: 893 Segment.lost = TRUE 895 7. TLP Algorithm Details 897 7.1. Initializing state 899 Reset TLP.is_retrans and TLP.end_seq when initiating a connection, 900 fast recovery, or RTO recovery. 902 TLP_init(): 903 TLP.end_seq = None 904 TLP.is_retrans = false 906 7.2. 
Scheduling a loss probe 908 The sender schedules a loss probe timeout (PTO) to transmit a segment 909 during the normal transmission process. The sender SHOULD start or 910 restart a loss probe PTO timer after transmitting new data (that was 911 not itself a loss probe) or upon receiving an ACK that cumulatively 912 acknowledges new data, unless it is already in fast recovery, RTO 913 recovery, or the sender has segments delivered out-of-order (i.e. 914 RACK.segs_sacked is not zero). These conditions are excluded because 915 they are addressed by similar mechanisms, like Limited Transmit 916 [RFC3042], the RACK reordering timer, and F-RTO [RFC5682]. 918 The sender calculates the PTO interval by taking into account a 919 number of factors. 921 First, the default PTO interval is 2*SRTT. By that time, it is 922 prudent to declare that an ACK is overdue, since under normal 923 circumstances, i.e. no losses, an ACK typically arrives in one SRTT. 924 Choosing PTO to be exactly an SRTT would risk causing spurious 925 probes, given that network and end-host delay variance can cause an 926 ACK to be delayed beyond SRTT. Hence the PTO is conservatively 927 chosen to be the next integral multiple of SRTT. 929 Second, when there is no SRTT estimate available, the PTO SHOULD be 1 930 second. This conservative value corresponds to the RTO value when no 931 SRTT is available, per [RFC6298]. 933 Third, when FlightSize is one segment, the sender MAY inflate PTO by 934 TLP.max_ack_delay to accommodate a potential delayed acknowledgment 935 and reduce the risk of spurious retransmissions. The actual value of 936 TLP.max_ack_delay is implementation-specific. 938 Finally, if the time at which an RTO would fire (here denoted 939 "TCP_RTO_expiration()") is sooner than the computed time for the PTO, 940 then the sender schedules a TLP to be sent at that RTO time. 942 Summarizing these considerations in pseudocode form, a sender SHOULD 943 use the following logic to select the duration of a PTO: 945 TLP_calc_PTO(): 946 If SRTT is available: 947 PTO = 2 * SRTT 948 If FlightSize is one segment: 949 PTO += TLP.max_ack_delay 950 Else: 951 PTO = 1 sec 953 If Now() + PTO > TCP_RTO_expiration(): 954 PTO = TCP_RTO_expiration() - Now() 956 7.3. Sending a loss probe upon PTO expiration 958 When the PTO timer expires, the sender SHOULD transmit a previously 959 unsent data segment, if the receive window allows, and increment the 960 FlightSize accordingly. Note that FlightSize could be one packet 961 greater than the congestion window temporarily until the next ACK 962 arrives. 964 If such a segment is not available, then the sender SHOULD retransmit 965 the highest-sequence segment sent so far and set TLP.is_retrans to 966 true. This segment is chosen to deal with the retransmission 967 ambiguity problem in TCP. Suppose a sender sends N segments, and 968 then retransmits the last segment (segment N) as a loss probe, and 969 then the sender receives a SACK for segment N. As long as the sender 970 waits for the RACK reordering window to expire, it doesn't matter if 971 that SACK was for the original transmission of segment N or the TLP 972 retransmission; in either case the arrival of the SACK for segment N 973 provides evidence that the N-1 segments preceding segment N were 974 likely lost. 
976 In the case where there is only one original outstanding segment of 977 data (N=1), the same logic (trivially) applies: an ACK for a single 978 outstanding segment tells the sender the N-1=0 segments preceding 979 that segment were lost. Furthermore, whether there are N>1 or N=1 980 outstanding segments, there is a question about whether the original 981 last segment or its TLP retransmission were lost; the sender 982 estimates whether there was such a loss using TLP recovery detection 983 (see below). 985 The sender MUST follow the RACK transmission procedures in the ''Upon 986 Transmitting a Data Segment'' section (see above) upon sending either 987 a retransmission or new data loss probe. This is critical for 988 detecting losses using the ACK for the loss probe. Furthermore, 989 prior to sending a loss probe, the sender MUST check that there is no 990 other previous loss probe still in flight. This ensures that at any 991 given time the sender has at most one additional packet in flight 992 beyond the congestion window limit. This invariant is maintained 993 using the state variable TLP.end_seq, which indicates the latest 994 unacknowledged TLP loss probe's ending sequence. It is reset when 995 the loss probe has been acknowledged or is deemed lost or irrelevant. 996 After attempting to send a loss probe, regardless of whether a loss 997 probe was sent, the sender MUST re-arm the RTO timer, not the PTO 998 timer, if FlightSize is not zero. This ensures RTO recovery remains 999 the last resort if TLP fails. The following pseudo code summarizes 1000 the operations. 1002 TLP_send_probe(): 1004 If TLP.end_seq is None: 1005 TLP.is_retrans = false 1006 Segment = send buffer segment starting at SND.NXT 1007 If Segment exists and fits the peer receive window limit: 1008 /* Transmit the lowest-sequence unsent Segment */ 1009 Transmit Segment 1010 RACK_transmit_data(Segment) 1011 TLP.end_seq = SND.NXT 1012 Increase FlightSize by Segment length 1013 Else: 1014 /* Retransmit the highest-sequence Segment sent */ 1015 Segment = send buffer segment ending at SND.NXT 1016 Transmit Segment 1017 RACK_retransmit_data(Segment) 1018 TLP.end_seq = SND.NXT 1020 7.4. Detecting losses using the ACK of the loss probe 1022 When there is packet loss in a flight ending with a loss probe, the 1023 feedback solicited by a loss probe will reveal one of two scenarios, 1024 depending on the pattern of losses. 1026 7.4.1. General case: detecting packet losses using RACK 1028 If the loss probe and the ACK that acknowledges the probe are 1029 delivered successfully, RACK-TLP uses this ACK -- just as it would 1030 with any other ACK -- to detect if any segments sent prior to the 1031 probe were dropped. RACK would typically infer that any 1032 unacknowledged data segments sent before the loss probe were lost, 1033 since they were sent sufficiently far in the past (at least one PTO 1034 has elapsed, plus one round-trip for the loss probe to be ACKed). 1035 More specifically, RACK_detect_loss() (step 5) would mark those 1036 earlier segments as lost. Then the sender would trigger a fast 1037 recovery to recover those losses. 1039 7.4.2. Special case: detecting a single loss repaired by the loss probe 1041 If the TLP retransmission repairs all the lost in-flight sequence 1042 ranges (i.e. only the last segment in the flight was lost), the ACK 1043 for the loss probe appears to be a regular cumulative ACK, which 1044 would not normally trigger the congestion control response to this 1045 packet loss event. 
The following TLP recovery detection mechanism 1046 examines ACKs to detect this special case to make congestion control 1047 respond properly [RFC5681]. 1049 After a TLP retransmission, the sender checks for this special case 1050 of a single loss that is recovered by the loss probe itself. To 1051 accomplish this, the sender checks for a duplicate ACK or DSACK 1052 indicating that both the original segment and TLP retransmission 1053 arrived at the receiver, meaning there was no loss. If the TLP 1054 sender does not receive such an indication, then it MUST assume that 1055 either the original data segment, the TLP retransmission, or a 1056 corresponding ACK were lost, for congestion control purposes. 1058 If the TLP retransmission is spurious, a receiver that uses DSACK 1059 would return an ACK that covers TLP.end_seq with a DSACK option (Case 1060 1). If the receiver does not support DSACK, it would return a DUPACK 1061 without any SACK option (Case 2). If the sender receives an ACK 1062 matching either case, then the sender estimates that the receiver 1063 received both the original data segment and the TLP probe 1064 retransmission, and so the sender considers the TLP episode to be 1065 done, and records that fact by setting TLP.end_seq to None. 1067 Upon receiving an ACK that covers some sequence number after 1068 TLP.end_seq, the sender should have received any ACKs for the 1069 original segment and TLP probe retransmission segment. At that time, 1070 if the TLP.end_seq is still set, and thus indicates that the TLP 1071 probe retransmission remains unacknowledged, then the sender should 1072 presume that at least one of its data segments was lost. The sender 1073 then SHOULD invoke a congestion control response equivalent to a fast 1074 recovery. 1076 More precisely, on each ACK the sender executes the following: 1078 TLP_process_ack(ACK): 1079 If TLP.end_seq is not None AND ACK's ack. number >= TLP.end_seq: 1080 If not TLP.is_retrans: 1081 TLP.end_seq = None /* TLP of new data delivered */ 1082 Else if ACK has a DSACK option matching TLP.end_seq: 1083 TLP.end_seq = None /* Case 1, above */ 1084 Else If ACK's ack. number > TLP.end_seq: 1085 TLP.end_seq = None /* Repaired the single loss */ 1086 (Invoke congestion control to react to 1087 the loss event the probe has repaired) 1088 Else If ACK is a DUPACK without any SACK option: 1089 TLP.end_seq = None /* Case 2, above */ 1091 8. Managing RACK-TLP timers 1093 The RACK reordering, the TLP PTO timer, the RTO and Zero Window Probe 1094 (ZWP) timer [RFC793] are mutually exclusive and used in different 1095 scenarios. When arming a RACK reordering timer or TLP PTO timer, the 1096 sender SHOULD cancel any other pending timer(s). An implementation 1097 is to have one timer with an additional state variable indicating the 1098 type of the timer. 1100 9. Discussion 1102 9.1. Advantages and disadvantages 1104 The biggest advantage of RACK-TLP is that every data segment, whether 1105 it is an original data transmission or a retransmission, can be used 1106 to detect losses of the segments sent chronologically prior to it. 1107 This enables RACK-TLP to use fast recovery in cases with application- 1108 limited flights of data, lost retransmissions, or data segment 1109 reordering events. Consider the following examples: 1111 1. Packet drops at the end of an application data flight: Consider a 1112 sender that transmits an application-limited flight of three data 1113 segments (P1, P2, P3), and P1 and P3 are lost. 
Suppose the 1114 transmission of each segment is at least RACK.reo_wnd after the 1115 transmission of the previous segment. RACK will mark P1 as lost 1116 when the SACK of P2 is received, and this will trigger the 1117 retransmission of P1 as R1. When R1 is cumulatively 1118 acknowledged, RACK will mark P3 as lost and the sender will 1119 retransmit P3 as R3. This example illustrates how RACK is able 1120 to repair certain drops at the tail of a transaction without an 1121 RTO recovery. Notice that neither the conventional duplicate ACK 1122 threshold [RFC5681], nor [RFC6675], nor the Forward 1123 Acknowledgment [FACK] algorithm can detect such losses, because 1124 of the required segment or sequence count. 1126 2. Lost retransmission: Consider a flight of three data segments 1127 (P1, P2, P3) that are sent; P1 and P2 are dropped. Suppose the 1128 transmission of each segment is at least RACK.reo_wnd after the 1129 transmission of the previous segment. When P3 is SACKed, RACK 1130 will mark P1 and P2 lost and they will be retransmitted as R1 and 1131 R2. Suppose R1 is lost again but R2 is SACKed; RACK will mark R1 1132 lost and trigger retransmission again. Again, neither the 1133 conventional three duplicate ACK threshold approach, nor 1134 [RFC6675], nor the Forward Acknowledgment [FACK] algorithm can 1135 detect such losses. And such a lost retransmission can happen 1136 when TCP is being rate-limited, particularly by token bucket 1137 policers with large bucket depth and low rate limit; in such 1138 cases retransmissions are often lost repeatedly because standard 1139 congestion control requires multiple round trips to reduce the 1140 rate below the policed rate. 1142 3. Packet reordering: Consider a simple reordering event where a 1143 flight of segments are sent as (P1, P2, P3). P1 and P2 carry a 1144 full payload of MSS octets, but P3 has only a 1-octet payload. 1145 Suppose the sender has detected reordering previously and thus 1146 RACK.reo_wnd is min_RTT/4. Now P3 is reordered and delivered 1147 first, before P1 and P2. As long as P1 and P2 are delivered 1148 within min_RTT/4, RACK will not consider P1 and P2 lost. But if 1149 P1 and P2 are delivered outside the reordering window, then RACK 1150 will still spuriously mark P1 and P2 lost. 1152 The examples above show that RACK-TLP is particularly useful when the 1153 sender is limited by the application, which can happen with 1154 interactive or request/response traffic. Similarly, RACK still works 1155 when the sender is limited by the receive window, which can happen 1156 with applications that use the receive window to throttle the sender. 1158 RACK-TLP works more efficiently with TCP Segmentation Offload (TSO) 1159 compared to DUPACK-counting. RACK always marks the entire TSO 1160 aggregate lost because the segments in the same TSO aggregate have 1161 the same transmission timestamp. By contrast, the algorithms based 1162 on sequence counting (e.g., [RFC6675][RFC5681]) may mark only a 1163 subset of segments in the TSO aggregate lost, forcing the stack to 1164 perform expensive fragmentation of the TSO aggregate, or to 1165 selectively tag individual segments lost in the scoreboard. 1167 The main drawback of RACK-TLP is the additional states required 1168 compared to DUPACK-counting. RACK requires the sender to record the 1169 transmission time of each segment sent at a clock granularity that is 1170 finer than 1/4 of the minimum RTT of the connection. 
The examples above show that RACK-TLP is particularly useful when the sender is limited by the application, which can happen with interactive or request/response traffic. Similarly, RACK still works when the sender is limited by the receive window, which can happen with applications that use the receive window to throttle the sender.

RACK-TLP works more efficiently with TCP Segmentation Offload (TSO) compared to DUPACK-counting. RACK always marks the entire TSO aggregate lost because the segments in the same TSO aggregate have the same transmission timestamp. By contrast, the algorithms based on sequence counting (e.g., [RFC6675][RFC5681]) may mark only a subset of segments in the TSO aggregate lost, forcing the stack either to perform expensive fragmentation of the TSO aggregate or to selectively tag individual segments lost in the scoreboard.

The main drawback of RACK-TLP is the additional state required compared to DUPACK-counting. RACK requires the sender to record the transmission time of each segment sent, at a clock granularity that is finer than 1/4 of the minimum RTT of the connection. TCP implementations that already record this for RTT estimation do not require any new per-packet state. But implementations that are not yet recording segment transmission times will need to add per-packet internal state (expected to be either 4 or 8 octets per segment or TSO aggregate) to track transmission times. In contrast, the [RFC6675] loss detection approach does not require any per-packet state beyond the SACK scoreboard; this is particularly useful on ultra-low-RTT networks where the RTT may be less than the sender's TCP clock granularity (e.g., inside data centers). Another disadvantage is that the reordering timer may expire prematurely (like any other retransmission timer), causing more spurious retransmissions, especially if DSACK is not supported.

9.2. Relationships with other loss recovery algorithms

The primary motivation of RACK-TLP is to provide a general alternative to some of the standard loss recovery algorithms [RFC5681][RFC6675][RFC5827][RFC4653]. [RFC5827] and [RFC4653] dynamically adjust the duplicate ACK threshold based on the current or previous flight sizes. RACK-TLP takes a different approach by using a time-based reordering window. RACK-TLP can be seen as an extended Early Retransmit [RFC5827] without a FlightSize limit but with an additional reordering window. [FACK] considers an original segment to be lost when its sequence range is sufficiently far below the highest SACKed sequence. In some sense RACK-TLP can be seen as a generalized form of FACK that operates in time space instead of sequence space, enabling it to better handle reordering, application-limited traffic, and lost retransmissions.

RACK-TLP is compatible with the standard RTO [RFC6298], RTO-restart [RFC7765], F-RTO [RFC5682], and Eifel [RFC3522] algorithms. This is because RACK-TLP only detects loss by using ACK events. It neither changes the RTO timer calculation nor detects spurious RTOs.

9.3. Interaction with congestion control

RACK-TLP intentionally decouples loss detection from congestion control. RACK-TLP only detects losses; it does not modify the congestion control algorithm [RFC5681][RFC6937]. A segment marked lost by RACK-TLP MUST NOT be retransmitted until congestion control deems this appropriate.

The only exception -- the only way in which RACK-TLP modulates the congestion control algorithm -- is that one outstanding loss probe can be sent even if the congestion window is fully used. However, this temporary over-commit is accounted for and credited in the in-flight data tracked for congestion control, so that congestion control will erase the over-commit upon the next ACK.

If packet losses happen after the reordering window has been increased by DSACK, RACK-TLP may take longer to detect losses than the pure DUPACK-counting approach. In this case TCP may continue to increase the congestion window upon receiving ACKs during this time, making the sender more aggressive.

The following simple example compares how RACK-TLP and non-RACK-TLP loss detection interact with congestion control: suppose a sender has a congestion window (cwnd) of 20 segments on a SACK-enabled connection. It sends 10 data segments and all of them are lost.

Without RACK-TLP, the sender would time out, reset cwnd to 1, and retransmit the first segment. It would take four round trips (1 + 2 + 4 + 3 = 10) to retransmit all 10 lost segments using slow start. The recovery latency would be RTO + 4*RTT, with an ending cwnd of 4 segments due to congestion window validation.

With RACK-TLP, the sender would send the TLP after 2*RTT and get a DUPACK, enabling RACK to detect the losses and trigger fast recovery. If the sender implements Proportional Rate Reduction [RFC6937], it would slow start to retransmit the remaining 9 lost segments, since the number of segments in flight (0) is lower than the slow start threshold (10). The slow start would again take four round trips (1 + 2 + 4 + 2 = 9) to retransmit the remaining lost segments. The recovery latency would be 2*RTT + 4*RTT, with an ending cwnd set to the slow start threshold of 10 segments.
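Laid out side by side, the accounting in the example above is as follows (illustrative only; RTT and RTO are the connection's round-trip time and retransmission timeout):

   Without RACK-TLP:
     Detection:        RTO expiration, roughly RTO after the loss
     Retransmission:   1 + 2 + 4 + 3 = 10 segments over 4 round
                       trips (slow start from cwnd = 1)
     Recovery latency: ~ RTO + 4*RTT, ending cwnd = 4 segments

   With RACK-TLP:
     Detection:        TLP probe after PTO ~= 2*RTT; the probe's ACK
                       lets RACK mark the rest of the flight lost
     Retransmission:   1 + 2 + 4 + 2 = 9 remaining segments over
                       4 round trips (slow start toward ssthresh = 10)
     Recovery latency: ~ 2*RTT + 4*RTT = 6*RTT, ending cwnd = 10
                       segments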
The difference in recovery latency (RTO + 4*RTT vs. 6*RTT) can be significant if the RTT is much smaller than the minimum RTO (1 second in [RFC6298]) or if the RTT is large. The former case can happen in local area networks, data-center networks, or content distribution networks with deep deployments. The latter case can happen in developing regions with highly congested and/or high-latency networks.

9.4. TLP recovery detection with delayed ACKs

Delayed or stretched ACKs complicate the detection of repairs done by TLP, since with such ACKs the sender takes longer to receive fewer ACKs than would normally be expected. To mitigate this complication, before sending a TLP loss probe retransmission, the sender should attempt to wait long enough that the receiver has sent any delayed ACKs that it is withholding. The sender algorithm described above features such a delay, in the form of TLP.max_ack_delay. Furthermore, if the receiver supports DSACK, then in the case of a delayed ACK the sender's TLP recovery detection mechanism (see above) can use the DSACK information to infer that the original segment and the TLP retransmission both arrived at the receiver.

If there is ACK loss or a delayed ACK without a DSACK, then this algorithm is conservative, because the sender will reduce the congestion window when in fact there was no packet loss. In practice this is acceptable, and potentially even desirable: if there is reverse path congestion, then reducing the congestion window can be prudent.

9.5. RACK for other transport protocols

RACK can be implemented in other transport protocols (e.g., [QUIC-LR]). The [Sprout] loss detection algorithm was also independently designed to use a 10 ms reordering window to improve its loss detection.

10. Security Considerations

RACK-TLP algorithm behavior is based on information conveyed in SACK options, so it has security considerations similar to those described in the Security Considerations section of [RFC6675].

Additionally, RACK-TLP has a lower risk profile than [RFC6675] because it is not vulnerable to ACK-splitting attacks [SCWA99]: for an MSS-size segment sent, the receiver or the attacker might send MSS ACKs that SACK or acknowledge one additional byte per ACK. This would not fool RACK. In such a scenario, RACK.xmit_ts would not advance, because all the sequence ranges within the segment were transmitted at the same time and thus carry the same transmission timestamp. In other words, SACKing only one byte of a segment or SACKing the segment in its entirety has the same effect on RACK.
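As a non-normative illustration of this last point, the effect of a SACKed range on RACK's state can be sketched as follows; the structure mirrors the per-ACK state update (Section 6.2), with details such as sequence-number tie-breaking omitted:

   /* Illustrative sketch only.  Every sequence range within a
      segment carries that segment's single transmission timestamp,
      so SACKing the segment one byte at a time produces the same
      RACK.xmit_ts as one SACK covering the whole segment, and
      therefore triggers no additional loss marking or
      retransmissions. */
   For each newly ACKed or SACKed sequence range R:
     Segment = the sent segment containing R
     If Segment.xmit_ts >= RACK.xmit_ts:
       RACK.xmit_ts = Segment.xmit_ts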
11. IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an RFC.

12. Acknowledgments

The authors thank Matt Mathis for his insights in FACK and Michael Welzl for his per-packet timer idea that inspired this work. Eric Dumazet, Randy Stewart, Van Jacobson, Ian Swett, Rick Jones, Jana Iyengar, Hiren Panchasara, Praveen Balasubramanian, Yoshifumi Nishida, Bob Briscoe, Felix Weinrank, Michael Tuexen, Martin Duke, Ilpo Jarvinen, Theresa Enghardt, Mirja Kuehlewind, Gorry Fairhurst, and Yi Huang contributed to the draft or the implementations in Linux, FreeBSD, Windows, and QUIC.

13. References

13.1. Normative References

[RFC2018] Mathis, M. and J. Mahdavi, "TCP Selective Acknowledgment Options", RFC 2018, October 1996.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", RFC 2119, March 1997.

[RFC2883] Floyd, S., Mahdavi, J., Mathis, M., and M. Podolsky, "An Extension to the Selective Acknowledgement (SACK) Option for TCP", RFC 2883, July 2000.

[RFC3042] Allman, M., Balakrishnan, H., and S. Floyd, "Enhancing TCP's Loss Recovery Using Limited Transmit", RFC 3042, January 2001.

[RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion Control", RFC 5681, September 2009.

[RFC6298] Paxson, V., Allman, M., Chu, J., and M. Sargent, "Computing TCP's Retransmission Timer", RFC 6298, June 2011.

[RFC6675] Blanton, E., Allman, M., Wang, L., Jarvinen, I., Kojo, M., and Y. Nishida, "A Conservative Loss Recovery Algorithm Based on Selective Acknowledgment (SACK) for TCP", RFC 6675, August 2012.

[RFC7323] Borman, D., Braden, B., Jacobson, V., and R. Scheffenegger, "TCP Extensions for High Performance", RFC 7323, September 2014.

[RFC793] Postel, J., "Transmission Control Protocol", RFC 793, September 1981.

[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", RFC 8174, May 2017.

13.2. Informative References

[DMCG11] Dukkipati, N., Mathis, M., Cheng, Y., and M. Ghobadi, "Proportional Rate Reduction for TCP", ACM SIGCOMM Conference on Internet Measurement, 2011.

[FACK] Mathis, M. and J. Mahdavi, "Forward Acknowledgement: Refining TCP Congestion Control", ACM SIGCOMM Computer Communication Review, Volume 26, Issue 4, October 1996.

[POLICER16] Flach, T., Papageorge, P., Terzis, A., Pedrosa, L., Cheng, Y., Karim, T., Katz-Bassett, E., and R. Govindan, "An Analysis of Traffic Policing in the Web", ACM SIGCOMM, 2016.

[QUIC-LR] Iyengar, J. and I. Swett, "QUIC Loss Detection and Congestion Control", draft-ietf-quic-recovery (work in progress), October 2020.

[RFC3522] Ludwig, R. and M. Meyer, "The Eifel Detection Algorithm for TCP", RFC 3522, April 2003.

[RFC4653] Bhandarkar, S., Reddy, A., Allman, M., and E. Blanton, "Improving the Robustness of TCP to Non-Congestion Events", RFC 4653, August 2006.

[RFC5682] Sarolahti, P., Kojo, M., Yamamoto, K., and M. Hata, "Forward RTO-Recovery (F-RTO): An Algorithm for Detecting Spurious Retransmission Timeouts with TCP", RFC 5682, September 2009.
[RFC5827] Allman, M., Ayesta, U., Wang, L., Blanton, J., and P. Hurtig, "Early Retransmit for TCP and Stream Control Transmission Protocol (SCTP)", RFC 5827, April 2010.

[RFC6937] Mathis, M., Dukkipati, N., and Y. Cheng, "Proportional Rate Reduction for TCP", RFC 6937, May 2013.

[RFC7765] Hurtig, P., Brunstrom, A., Petlund, A., and M. Welzl, "TCP and SCTP RTO Restart", RFC 7765, February 2016.

[SCWA99] Savage, S., Cardwell, N., Wetherall, D., and T. Anderson, "TCP Congestion Control With a Misbehaving Receiver", ACM Computer Communication Review, 29(5), 1999.

[Sprout] Winstein, K., Sivaraman, A., and H. Balakrishnan, "Stochastic Forecasts Achieve High Throughput and Low Delay over Cellular Networks", USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2013.

Authors' Addresses

Yuchung Cheng
Google, Inc

Email: ycheng@google.com

Neal Cardwell
Google, Inc

Email: ncardwell@google.com

Nandita Dukkipati
Google, Inc

Email: nanditad@google.com

Priyaranjan Jha
Google, Inc

Email: priyarjha@google.com