2 TCP Maintenance Working Group Y. Cheng 3 Internet-Draft N. Cardwell 4 Intended status: Standards Track N. Dukkipati 5 Expires: June 4, 2021 P. Jha 6 Google, Inc 7 December 1, 2020 9 The RACK-TLP loss detection algorithm for TCP 10 draft-ietf-tcpm-rack-14 12 Abstract 14 This document presents the RACK-TLP loss detection algorithm for TCP. 15 RACK-TLP uses per-segment transmit timestamps and selective 16 acknowledgements (SACK) and has two parts: RACK ("Recent 17 ACKnowledgment") starts fast recovery quickly using time-based 18 inferences derived from ACK feedback. TLP ("Tail Loss Probe") 19 leverages RACK and sends a probe packet to trigger ACK feedback to 20 avoid retransmission timeout (RTO) events. Compared to the widely 21 used DUPACK threshold approach, RACK-TLP detects losses more 22 efficiently when there are application-limited flights of data, lost 23 retransmissions, or data packet reordering events. It is intended to 24 be an alternative to the DUPACK threshold approach. 26 Status of This Memo 28 This Internet-Draft is submitted in full conformance with the 29 provisions of BCP 78 and BCP 79. 31 Internet-Drafts are working documents of the Internet Engineering 32 Task Force (IETF). Note that other groups may also distribute 33 working documents as Internet-Drafts. The list of current Internet- 34 Drafts is at https://datatracker.ietf.org/drafts/current/. 36 Internet-Drafts are draft documents valid for a maximum of six months 37 and may be updated, replaced, or obsoleted by other documents at any 38 time. It is inappropriate to use Internet-Drafts as reference 39 material or to cite them other than as "work in progress." 41 This Internet-Draft will expire on June 4, 2021. 43 Copyright Notice 45 Copyright (c) 2020 IETF Trust and the persons identified as the 46 document authors. All rights reserved.
48 This document is subject to BCP 78 and the IETF Trust's Legal 49 Provisions Relating to IETF Documents 50 (https://trustee.ietf.org/license-info) in effect on the date of 51 publication of this document. Please review these documents 52 carefully, as they describe your rights and restrictions with respect 53 to this document. Code Components extracted from this document must 54 include Simplified BSD License text as described in Section 4.e of 55 the Trust Legal Provisions and are provided without warranty as 56 described in the Simplified BSD License. 58 Table of Contents 60 1. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 3 61 2. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 62 2.1. Background . . . . . . . . . . . . . . . . . . . . . . . 3 63 2.2. Motivation . . . . . . . . . . . . . . . . . . . . . . . 4 64 3. RACK-TLP high-level design . . . . . . . . . . . . . . . . . 5 65 3.1. RACK: time-based loss inferences from ACKs . . . . . . . 5 66 3.2. TLP: sending one segment to probe losses quickly with 67 RACK . . . . . . . . . . . . . . . . . . . . . . . . . . 6 68 3.3. RACK-TLP: reordering resilience with a time threshold . . 6 69 3.3.1. Reordering design rationale . . . . . . . . . . . . . 6 70 3.3.2. Reordering window adaptation . . . . . . . . . . . . 8 71 3.4. An Example of RACK-TLP in Action: fast recovery . . . . . 9 72 3.5. An Example of RACK-TLP in Action: RTO . . . . . . . . . . 10 73 3.6. Design Summary . . . . . . . . . . . . . . . . . . . . . 10 74 4. Requirements . . . . . . . . . . . . . . . . . . . . . . . . 11 75 5. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 11 76 5.1. Per-segment variables . . . . . . . . . . . . . . . . . . 11 77 5.2. Per-connection variables . . . . . . . . . . . . . . . . 12 78 6. RACK Algorithm Details . . . . . . . . . . . . . . . . . . . 13 79 6.1. Upon transmitting a data segment . . . . . . . . . . . . 13 80 6.2. Upon receiving an ACK . . . . . . . . . . . . . . . . . . 14 81 6.3. Upon RTO expiration . . . . . . . . . . . . . . . . . . . 19 82 7. TLP Algorithm Details . . . . . . . . . . . . . . . . . . . . 20 83 7.1. Initializing state . . . . . . . . . . . . . . . . . . . 20 84 7.2. Scheduling a loss probe . . . . . . . . . . . . . . . . . 20 85 7.3. Sending a loss probe upon PTO expiration . . . . . . . . 21 86 7.4. Detecting losses using the ACK of the loss probe . . . . 22 87 7.4.1. General case: detecting packet losses using RACK . . 22 88 7.4.2. Special case: detecting a single loss repaired by the 89 loss probe . . . . . . . . . . . . . . . . . . . . . 23 90 8. Managing RACK-TLP timers . . . . . . . . . . . . . . . . . . 24 91 9. Discussion . . . . . . . . . . . . . . . . . . . . . . . . . 24 92 9.1. Advantages and disadvantages . . . . . . . . . . . . . . 24 93 9.2. Relationships with other loss recovery algorithms . . . . 26 94 9.3. Interaction with congestion control . . . . . . . . . . . 26 95 9.4. TLP recovery detection with delayed ACKs . . . . . . . . 27 96 9.5. RACK for other transport protocols . . . . . . . . . . . 28 97 10. Security Considerations . . . . . . . . . . . . . . . . . . . 28 98 11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 28 99 12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 28 100 13. References . . . . . . . . . . . . . . . . . . . . . . . . . 29 101 13.1. Normative References . . . . . . . . . . . . . . . . . . 29 102 13.2. Informative References . . . . . . . . . . . . . . . . . 29 103 Authors' Addresses . . . . . . . 
. . . . . . . . . . . . . . . . 31 105 1. Terminology 107 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 108 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 109 "OPTIONAL" in this document are to be interpreted as described in BCP 110 14 [RFC2119] [RFC8174] when, and only when, they appear in all 111 capitals, as shown here. In this document, these words will appear 112 with that interpretation only when in UPPER CASE. Lower case uses of 113 these words are not to be interpreted as carrying [RFC2119] 114 significance. 116 2. Introduction 118 This document presents RACK-TLP, a TCP loss detection algorithm that 119 improves upon the widely implemented DUPACK counting approach in 120 [RFC5681] [RFC6675], and that is RECOMMENDED to be used as an 121 alternative to that earlier approach. RACK-TLP has two parts: RACK 122 ("Recent ACKnowledgment") detects losses quickly using time-based 123 inferences derived from ACK feedback. TLP ("Tail Loss Probe") 124 triggers ACK feedback by quickly sending a probe segment, to avoid 125 retransmission timeout (RTO) events. 127 2.1. Background 129 In traditional TCP loss recovery algorithms [RFC5681] [RFC6675], a 130 sender starts fast recovery when the number of DUPACKs received 131 reaches a threshold (DupThresh) that defaults to 3 (this approach is 132 referred to as DUPACK-counting in the rest of the document). The 133 sender also halves the congestion window during the recovery. The 134 rationale behind the partial window reduction is that congestion does 135 not seem severe since ACK clocking is still maintained. The time 136 elapsed in fast recovery can be just one round-trip, e.g. if the 137 sender uses SACK-based recovery [RFC6675] and the number of lost 138 segments is small. 140 If fast recovery is not triggered, or triggers but fails to repair 141 all the losses, then the sender resorts to RTO recovery. The RTO 142 timer interval is conservatively the smoothed RTT (SRTT) plus four 143 times the RTT variation, and is lower bounded to 1 second [RFC6298]. 145 Upon RTO timer expiration, the sender retransmits the first 146 unacknowledged segment and resets the congestion window to the LOSS 147 WINDOW value (by default 1 full-size segment [RFC5681]). The 148 rationale behind the congestion window reset is that an entire flight 149 of data was lost, and the ACK clock was lost, so this deserves a 150 cautious response. The sender then retransmits the rest of the data 151 following the slow start algorithm [RFC5681]. The time elapsed in 152 RTO recovery is one RTO interval plus the number of round-trips 153 needed to repair all the losses. 155 2.2. Motivation 157 Fast Recovery is the preferred form of loss recovery because it can 158 potentially recover all losses in the time scale of a single round 159 trip, with only a fractional congestion window reduction. RTO 160 recovery and congestion window reset should ideally be the last 161 resort, only used when the entire flight is lost. However, in 162 addition to losing an entire flight of data, the following situations 163 can unnecessarily resort to RTO recovery with traditional TCP loss 164 recovery algorithms [RFC5681] [RFC6675]: 166 1. Packet drops for short flows or at the end of an application data 167 flight. When the sender is limited by the application (e.g. 168 structured request/response traffic), segments lost at the end of 169 the application data transfer often can only be recovered by RTO. 
170 Consider an example of losing only the last segment in a flight 171 of 100 segments. Lacking any DUPACKs, the sender's RTO expires; the sender then 172 reduces the congestion window to 1, and raises the congestion 173 window to just 2 after the loss repair is acknowledged. In 174 contrast, any single segment loss occurring between the first and 175 the 97th segment would result in fast recovery, which would only 176 cut the window in half. 178 2. Lost retransmissions. Heavy congestion or traffic policers can 179 cause retransmissions to be lost. Lost retransmissions cause a 180 resort to RTO recovery, since DUPACK-counting does not detect the 181 loss of the retransmissions. Then the slow start after RTO 182 recovery could cause burst losses again that severely degrade 183 performance [POLICER16]. 185 3. Packet reordering. Link-layer protocols (e.g., 802.11 block 186 ACK), link bonding, or routers' internal load-balancing (e.g., 187 ECMP) can deliver TCP segments out of order. The degree of such 188 reordering is usually within the order of the path round trip 189 time. If the reordering degree is beyond DupThresh, DUPACK- 190 counting can cause a spurious fast recovery and unnecessary 191 congestion window reduction. To mitigate the issue, [RFC4653] 192 adjusts DupThresh to half of the inflight size to tolerate the 193 higher degree of reordering. However, if more than half of the 194 inflight is lost, then the sender has to resort to RTO recovery. 196 3. RACK-TLP high-level design 198 RACK-TLP allows senders to recover losses more effectively in all 199 three scenarios described in the previous section. There are two 200 design principles behind RACK-TLP. The first principle is to detect 201 losses via ACK events as much as possible, to repair losses at round- 202 trip time-scales. The second principle is to gently probe the 203 network to solicit additional ACK feedback, to avoid RTO expiration 204 and subsequent congestion window reset. At a high level, the two 205 principles are implemented in RACK and TLP, respectively. 207 3.1. RACK: time-based loss inferences from ACKs 209 The rationale behind RACK is that if a segment is delivered out of 210 order, then the segments sent chronologically before that were either 211 lost or reordered. This concept is not fundamentally different from 212 [RFC5681] [RFC6675] [FACK]. RACK's key innovation is using per- 213 segment transmission timestamps and widely-deployed SACK [RFC2018] 214 options to conduct time-based inferences, instead of inferring losses 215 by counting ACKs or SACKed sequences. Time-based inferences are more 216 robust than DUPACK-counting approaches because they have no 217 dependence on flight size, and thus are effective for application- 218 limited traffic. 220 Conceptually, RACK starts a virtual timer for every data segment sent 221 (including retransmissions). Each timer expires dynamically based on 222 the latest RTT measurements plus an additional delay budget to 223 accommodate potential packet reordering (called the reordering 224 window). When a segment's timer expires, RACK marks the 225 corresponding segment lost for retransmission. 227 In reality, as an algorithm, RACK does not arm a timer for every 228 segment sent because it's not necessary. Instead, the sender records 229 the most recent transmission time of every data segment sent, 230 including retransmissions.
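   This per-segment bookkeeping can be illustrated with a short,
   non-normative Python sketch: given a segment's most recent
   transmission time, the latest RTT estimate, and the reordering
   window, it decides whether the segment's virtual timer has expired.
   The function name and time units are illustrative assumptions, not
   part of this specification; the precise rules are given in
   Section 6.

      # Illustrative only: a segment's virtual timer expires once the
      # latest RTT estimate plus the reordering window has elapsed since
      # its most recent (re)transmission.  Times are in microseconds.
      def rack_segment_expired(xmit_ts: int, now: int,
                               rtt: int, reo_wnd: int) -> bool:
          deadline = xmit_ts + rtt + reo_wnd
          return now >= deadline

      # Example: a segment (re)transmitted at t=0 with a latest RTT of
      # 50 ms and a reordering window of 12.5 ms is marked lost if it is
      # still undelivered at t=70 ms.
      assert rack_segment_expired(0, 70_000, rtt=50_000, reo_wnd=12_500)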
For each ACK received, the sender 231 calculates the latest RTT measurement (if eligible) and adjusts the 232 expiration time of every segment sent but not yet delivered. If a 233 segment has expired, RACK marks it lost. 235 Since the time-based logic of RACK applies equally to retransmissions 236 and original transmissions, it can detect lost retransmissions as 237 well. If a segment has been retransmitted but its most recent 238 (re)transmission timestamp has expired, then after a reordering 239 window it's marked lost. 241 3.2. TLP: sending one segment to probe losses quickly with RACK 243 RACK infers losses from ACK feedback; however, in some cases ACKs are 244 sparse, particularly when the inflight is small or when the losses 245 are high. In some challenging cases the last few segments in a 246 flight are lost. With [RFC5681] or [RFC6675] the sender's RTO would 247 expire and reset the congestion window, when in reality most of the 248 flight has been delivered. 250 Consider an example where a sender with a large congestion window 251 transmits 100 new data segments after an application write, and only 252 the last three segments are lost. Without RACK-TLP, the RTO expires, 253 the sender retransmits the first unacknowledged segment, and the 254 congestion window slow-starts from 1. After all the retransmits are 255 acknowledged the congestion window has been increased to 4. The 256 total delivery time for this application transfer is three RTTs plus 257 one RTO, a steep cost given that only a tiny fraction of the flight 258 was lost. If instead the losses had occurred three segments sooner 259 in the flight, then fast recovery would have recovered all losses 260 within one round-trip and would have avoided resetting the congestion 261 window. 263 Fast Recovery would be preferable in such scenarios; TLP is designed 264 to trigger the feedback RACK needed to enable that. After the last 265 (100th) segment was originally sent, TLP sends the next available 266 (new) segment or retransmits the last (highest-sequenced) segment in 267 two round-trips to probe the network, hence the name "Tail Loss 268 Probe". The successful delivery of the probe would solicit an ACK. 269 RACK uses this ACK to detect that the 98th and 99th segments were 270 lost, trigger fast recovery, and retransmit both successfully. The 271 total recovery time is four RTTs, and the congestion window is only 272 partially reduced instead of being fully reset. If the probe was 273 also lost then the sender would invoke RTO recovery resetting the 274 congestion window. 276 3.3. RACK-TLP: reordering resilience with a time threshold 278 3.3.1. Reordering design rationale 280 Upon receiving an ACK indicating an out-of-order data delivery, a 281 sender cannot tell immediately whether that out-of-order delivery was 282 a result of reordering or loss. It can only distinguish between the 283 two in hindsight if the missing sequence ranges are filled in later 284 without retransmission. Thus a loss detection algorithm needs to 285 budget some wait time -- a reordering window -- to try to 286 disambiguate packet reordering from packet loss. 288 The reordering window in the DUPACK-counting approach is implicitly 289 defined as the elapsed time to receive acknowledgements for 290 DupThresh-worth of out-of-order deliveries. This approach is 291 effective if the network reordering degree (in sequence distance) is 292 smaller than DupThresh and at least DupThresh segments after the loss 293 are acknowledged. 
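   As a concrete, non-normative illustration of this condition, the
   Python sketch below assumes a single loss and assumes that every
   segment sent after it is delivered and acknowledged; the function
   name and parameters are illustrative only.

      # Illustrative only: with DUPACK-counting, a single lost segment
      # can trigger fast recovery only if at least DupThresh segments
      # sent after it are delivered and generate DUPACKs.
      def dupack_fast_recovery_possible(flight_size: int,
                                        lost_position: int,
                                        dupthresh: int = 3) -> bool:
          segments_after_loss = flight_size - lost_position
          return segments_after_loss >= dupthresh

      # The example from Section 2.2: in a 100-segment flight, losing
      # the 97th segment still allows fast recovery, while losing the
      # 100th (last) segment forces an RTO.
      assert dupack_fast_recovery_possible(100, 97)
      assert not dupack_fast_recovery_possible(100, 100)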
For cases where the reordering degree is larger 294 than the default DupThresh of 3 packets, one alternative is to 295 dynamically adapt DupThresh based on the FlightSize (e.g., the sender 296 adjusts the DUPTHRESH to half of the FlightSize). However, this does 297 not work well with the following two types of reordering: 299 1. Application-limited flights where the last non-full-sized segment 300 is delivered first and then the remaining full-sized segments in 301 the flight are delivered in order. This reordering pattern can 302 occur when segments traverse parallel forwarding paths. In such 303 scenarios the degree of reordering in packet distance is one 304 segment less than the flight size. 306 2. A flight of segments that are delivered partially out of order. 307 One cause for this pattern is wireless link-layer retransmissions 308 with an inadequate reordering buffer at the receiver. In such 309 scenarios, the wireless sender sends the data packets in order 310 initially, but some are lost and then recovered by link-layer 311 retransmissions; the wireless receiver delivers the TCP data 312 packets in the order they are received, due to the inadequate 313 reordering buffer. The random wireless transmission errors in 314 such scenarios cause the reordering degree, expressed in packet 315 distance, to have highly variable values up to the flight size. 317 In the above two cases the degree of reordering in packet distance is 318 highly variable. This makes DUPACK-counting approach ineffective 319 including dynamic adaptation variants like [RFC4653]. Instead the 320 degree of reordering in time difference in such cases is usually 321 within a single round-trip time. This is because the packets either 322 traverse slightly disjoint paths with similar propagation delays or 323 are repaired quickly by the local access technology. Hence, using a 324 time threshold instead of packet threshold strikes a middle ground, 325 allowing a bounded degree of reordering resilience while still 326 allowing fast recovery. This is the rationale behind the RACK-TLP 327 reordering resilience design. 329 Specifically, RACK-TLP introduces a new dynamic reordering window 330 parameter in time units, and the sender considers a data segment S 331 lost if both conditions are met: 333 1. Another data segment sent later than S has been delivered 334 2. S has not been delivered after the estimated round-trip time plus 335 the reordering window 337 Note that condition (1) implies at least one round-trip of time has 338 elapsed since S has been sent. 340 3.3.2. Reordering window adaptation 342 The RACK reordering window adapts to the measured duration of 343 reordering events, within reasonable and specific bounds to 344 disincentivize excessive reordering. More specifically, the sender 345 sets the reordering window as follows: 347 1. The reordering window SHOULD be set to zero if no reordering has 348 been observed on the connection so far, and either (a) three 349 segments have been delivered out of order since the last recovery 350 or (b) the sender is already in fast or RTO recovery. Otherwise, 351 the reordering window SHOULD start from a small fraction of the 352 round trip time, or zero if no round trip time estimate is 353 available. 355 2. The RACK reordering window SHOULD adaptively increase (using the 356 algorithm in "Step 4: Update RACK reordering window", below) if 357 the sender receives a Duplicate Selective Acknowledgement (DSACK) 358 option [RFC2883]. 
Receiving a DSACK suggests the sender made a 359 spurious retransmission, which may have been due to the 360 reordering window being too small. 362 3. The RACK reordering window MUST be bounded and this bound SHOULD 363 be SRTT. 365 Rules 2 and 3 are required to adapt to reordering caused by dynamics 366 such as the prolonged link-layer loss recovery episodes described 367 earlier. Each increase in the reordering window requires a new round 368 trip where the sender receives a DSACK; thus, depending on the extent 369 of reordering, it may take multiple round trips to fully adapt. 371 For short flows, the low initial reordering window helps recover 372 losses quickly, at the risk of spurious retransmissions. The 373 rationale is that spurious retransmissions for short flows are not 374 expected to produce excessive additional network traffic. For long 375 flows the design tolerates reordering within a round trip. This 376 handles reordering in small time scales (reordering within the round- 377 trip time of the shortest path). 379 However, the fact that the initial reordering window is low, and the 380 reordering window's adaptive growth is bounded, means that there will 381 continue to be a cost to reordering that disincentivizes excessive 382 reordering. 384 3.4. An Example of RACK-TLP in Action: fast recovery 386 The following example in figure 1 illustrates the RACK-TLP algorithm 387 in action: 389 Event TCP DATA SENDER TCP DATA RECEIVER 390 _____ ____________________________________________________________ 391 1. Send P0, P1, P2, P3 --> 392 [P1, P2, P3 dropped by network] 394 2. <-- Receive P0, ACK P0 396 3a. 2RTTs after (2), TLP timer fires 397 3b. TLP: retransmits P3 --> 399 4. <-- Receive P3, SACK P3 401 5a. Receive SACK for P3 402 5b. RACK: marks P1, P2 lost 403 5c. Retransmit P1, P2 --> 404 [P1 retransmission dropped by network] 406 6. <-- Receive P2, SACK P2 & P3 408 7a. RACK: marks P1 retransmission lost 409 7b. Retransmit P1 --> 411 8. <-- Receive P1, ACK P3 413 Figure 1. RACK-TLP protocol example 415 Figure 1, above, illustrates a sender sending four segments (P0, P1, 416 P2, P3) and losing the last three segments. After two round-trips, 417 TLP sends a loss probe, retransmitting the last segment, P3, to 418 solicit SACK feedback and restore the ACK clock (event 3). The 419 delivery of P3 enables RACK to infer (event 5b) that P1 and P2 were 420 likely lost, because they were sent before P3. The sender then 421 retransmits P1 and P2. Unfortunately, the retransmission of P1 is 422 lost again. However, the delivery of the retransmission of P2 allows 423 RACK to infer that the retransmission of P1 was likely lost (event 424 7a), and hence P1 should be retransmitted (event 7b). 426 3.5. An Example of RACK-TLP in Action: RTO 428 In addition to enhancing fast recovery, RACK improves the accuracy of 429 RTO recovery by reducing spurious retransmissions. 431 Without RACK, upon RTO timer expiration the sender marks all the 432 unacknowledged segments lost. This approach can lead to spurious 433 retransmissions. For example, consider a simple case where one 434 segment was sent with an RTO of 1 second, and then the application 435 writes more data, causing a second and third segment to be sent right 436 before the RTO of the first segment expires. Suppose only the first 437 segment is lost. Without RACK, upon RTO expiration the sender marks 438 all three segments as lost and retransmits the first segment.
When 439 the sender receives the ACK that selectively acknowledges the second 440 segment, the sender spuriously retransmits the third segment. 442 With RACK, upon RTO timer expiration the only segment automatically 443 marked lost is the first segment (since it was sent an RTO ago); for 444 all the other segments RACK only marks the segment lost if at least 445 one round trip has elapsed since the segment was transmitted. 446 Consider the previous example scenario, this time with RACK. With 447 RACK, when the RTO expires the sender only marks the first segment as 448 lost, and retransmits that segment. The other two very recently sent 449 segments are not marked lost, because they were sent less than one 450 round trip ago and there were no ACKs providing evidence that they 451 were lost. When the sender receives the ACK that selectively 452 acknowledges the second segment, the sender would not retransmit the 453 third segment but rather would send any new segments (if allowed by 454 congestion window and receive window). 456 In the above example, if the sender were to send a large burst of 457 segments instead of two segments right before RTO, without RACK the 458 sender may spuriously retransmit almost the entire flight. Note that 459 the Eifel protocol [RFC3522] cannot prevent this issue because it can 460 only detect spurious RTO episodes. In this example the RTO itself 461 was not spurious. 463 3.6. Design Summary 465 To summarize, RACK-TLP aims to adapt to small time-varying degrees of 466 reordering, quickly recover most losses within one to two round 467 trips, and avoid costly RTO recoveries. In the presence of 468 reordering, the adaptation algorithm can impose sometimes-needless 469 delays when it waits to disambiguate loss from reordering, but the 470 penalty for waiting is bounded to one round trip and such delays are 471 confined to flows long enough to have observed reordering. 473 4. Requirements 475 The reader is expected to be familiar with the definitions given in 476 the TCP congestion control [RFC5681] and selective acknowledgment 477 [RFC2018] and loss recovery [RFC6675] RFCs. RACK-TLP has the 478 following requirements: 480 1. The connection MUST use selective acknowledgment (SACK) options 481 [RFC2018], and the sender MUST keep SACK scoreboard information 482 on a per-connection basis ("SACK scoreboard" has the same meaning 483 here as in [RFC6675] section 3). 485 2. For each data segment sent, the sender MUST store its most recent 486 transmission time with a timestamp granularity that is 487 finer than 1/4 of the minimum RTT of the connection. At the time 488 of writing, microsecond resolution is suitable for intra- 489 datacenter traffic and millisecond granularity or finer is 490 suitable for the Internet. Note that RACK-TLP can be implemented 491 with TSO (TCP Segmentation Offload) support by having multiple 492 segments in a TSO aggregate share the same timestamp. 494 3. RACK DSACK-based reordering window adaptation is RECOMMENDED but 495 is not required. 497 4. TLP requires RACK. 499 5. Definitions 501 The reader is expected to be familiar with the variables SND.UNA, 502 SND.NXT, SEG.ACK, and SEG.SEQ in [RFC793]; SMSS and FlightSize in 503 [RFC5681]; DupThresh in [RFC6675]; and RTO and SRTT in [RFC6298]. A 504 RACK-TLP implementation needs to store new per-segment and per- 505 connection state, described below. 507 5.1.
Per-segment variables 509 These variables indicate the status of the most recent transmission 510 of a data segment: 512 "Segment.lost" is true if the most recent (re)transmission of the 513 segment has been marked lost and needs to be retransmitted. False 514 otherwise. 516 "Segment.retransmitted" is true if the segment has ever been 517 retransmitted. False otherwise. 519 "Segment.xmit_ts" is the time of the last transmission of a data 520 segment, including retransmissions, if any, with a clock granularity 521 specified in the Requirements section. A maximum value INFINITE_TS 522 indicates an invalid timestamp that represents that the Segment is 523 not currently in flight. 525 "Segment.end_seq" is the next sequence number after the last sequence 526 number of the data segment. 528 5.2. Per-connection variables 530 "RACK.segment". Among all the segments that have been either 531 selectively or cumulatively acknowledged, RACK.segment is the one 532 that was sent most recently (including retransmissions). 534 "RACK.xmit_ts" is the latest transmission timestamp of RACK.segment. 536 "RACK.end_seq" is the Segment.end_seq of RACK.segment. 538 "RACK.ack_ts" is the time when the full sequence range of 539 RACK.segment was selectively or cumulatively acknowledged. 541 "RACK.segs_sacked" returns the total number of segments selectively 542 acknowledged in the SACK scoreboard. 544 "RACK.fack" is the highest selectively or cumulatively acknowledged 545 sequence (i.e. forward acknowledgement). 547 "RACK.min_RTT" is the estimated minimum round-trip time (RTT) of the 548 connection. 550 "RACK.rtt" is the RTT of the most recently delivered segment on the 551 connection (either cumulatively acknowledged or selectively 552 acknowledged) that was not marked invalid as a possible spurious 553 retransmission. 555 "RACK.reordering_seen" indicates whether the sender has detected data 556 segment reordering event(s). 558 "RACK.reo_wnd" is a reordering window computed in the unit of time 559 used for recording segment transmission times. It is used to defer 560 the moment at which RACK marks a segment lost. 562 "RACK.dsack_round" indicates if a DSACK option has been received in 563 the latest round trip. 565 "RACK.reo_wnd_mult" is the multiplier applied to adjust RACK.reo_wnd. 567 "RACK.reo_wnd_persist" is the number of loss recoveries before 568 resetting RACK.reo_wnd. 570 "RACK.rtt_seq" is the SND.NXT when RACK.rtt is updated. 572 "TLP.is_retrans": a boolean indicating whether there is an 573 unacknowledged TLP retransmission. 575 "TLP.end_seq": the value of SND.NXT at the time of sending a TLP 576 retransmission. 578 "TLP.max_ack_delay": sender's maximum delayed ACK timer budget. 580 Per-connection timers 582 "RACK reordering timer": a timer that allows RACK to wait for 583 reordering to resolve, to try to disambiguate reordering from loss, 584 when some out-of-order segments are marked as SACKed. 586 "TLP PTO": a timer event indicating that an ACK is overdue and the 587 sender should transmit a TLP segment, to solicit SACK or ACK 588 feedback. 590 These timers augment the existing timers maintained by a sender, 591 including the RTO timer [RFC6298]. A RACK-TLP sender arms one of 592 these three timers -- RACK reordering timer, TLP PTO timer, or RTO 593 timer -- when it has unacknowledged segments in flight.
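   The variables and timers above can be grouped into simple records.
   The non-normative Python sketch below mirrors the names defined in
   this section; the types, the default values, and the RackTlpTimer
   enumeration are illustrative assumptions rather than requirements.

      from dataclasses import dataclass
      from enum import Enum, auto
      from typing import Optional

      INFINITE_TS = float("inf")     # invalid timestamp: segment not in flight

      @dataclass
      class Segment:                 # per-segment variables (Section 5.1)
          end_seq: int
          xmit_ts: float = INFINITE_TS
          lost: bool = False
          retransmitted: bool = False

      @dataclass
      class RackState:               # per-connection RACK variables (Section 5.2)
          xmit_ts: float = 0.0
          end_seq: int = 0
          ack_ts: float = 0.0
          segs_sacked: int = 0
          fack: int = 0
          min_RTT: float = float("inf")
          rtt: float = 0.0
          rtt_seq: int = 0
          reordering_seen: bool = False
          reo_wnd: float = 0.0
          reo_wnd_mult: int = 1      # initialized to 1 (Section 6.2, Step 4)
          reo_wnd_persist: int = 16  # assumed starting value
          dsack_round: Optional[int] = None

      @dataclass
      class TlpState:                # per-connection TLP variables (Section 5.2)
          is_retrans: bool = False
          end_seq: Optional[int] = None
          max_ack_delay: float = 0.0

      class RackTlpTimer(Enum):      # the mutually exclusive per-connection timers
          REORDERING = auto()
          PTO = auto()
          RTO = auto()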
The 594 implementation can simplify managing all three timers by multiplexing 595 a single timer among them with an additional variable to indicate the 596 event to invoke upon the next timer expiration. 598 6. RACK Algorithm Details 600 6.1. Upon transmitting a data segment 602 Upon transmitting a new segment or retransmitting an old segment, 603 record the time in Segment.xmit_ts and set Segment.lost to FALSE. 604 Upon retransmitting a segment, set Segment.retransmitted to TRUE. 606 RACK_transmit_new_data(Segment): 607 Segment.xmit_ts = Now() 608 Segment.lost = FALSE 610 RACK_retransmit_data(Segment): 611 Segment.retransmitted = TRUE 612 Segment.xmit_ts = Now() 613 Segment.lost = FALSE 615 6.2. Upon receiving an ACK 617 Step 1: Update RACK.min_RTT. 619 Use the RTT measurements obtained via [RFC6298] or [RFC7323] to 620 update the estimated minimum RTT in RACK.min_RTT. The sender SHOULD 621 track a windowed min-filtered estimate of recent RTT measurements 622 that can adapt when migrating to significantly longer paths, rather 623 than a simple global minimum of all RTT measurements. 625 Step 2: Update state for most recently sent segment that has been 626 delivered 628 In this step, RACK updates the states that track the most recently 629 sent segment that has been delivered: RACK.segment; RACK maintains 630 its latest transmission timestamp in RACK.xmit_ts and its highest 631 sequence number in RACK.end_seq. These two variables are used, in 632 later steps, to estimate if some segments not yet delivered were 633 likely lost. Given the information provided in an ACK, each segment 634 cumulatively ACKed or SACKed is marked as delivered in the 635 scoreboard. Since an ACK can also acknowledge retransmitted data 636 segments, and retransmissions can be spurious, the sender needs to 637 take care to avoid spurious inferences. For example, if the sender 638 were to use timing information from a spurious retransmission, the 639 RACK.rtt could be vastly underestimated. 641 To avoid spurious inferences, ignore a segment as invalid if any of 642 its sequence range has been retransmitted before and either of two 643 conditions is true: 645 1. The Timestamp Echo Reply field (TSecr) of the ACK's timestamp 646 option [RFC7323], if available, indicates the ACK was not 647 acknowledging the last retransmission of the segment. 649 2. The segment was last retransmitted less than RACK.min_rtt ago. 651 The second check is a heuristic when the TCP Timestamp option is not 652 available, or when the round trip time is less than the TCP Timestamp 653 clock granularity. 655 Among all the segments newly ACKed or SACKed by this ACK that pass 656 the checks above, update the RACK.rtt to be the RTT sample calculated 657 using this ACK. Furthermore, record the most recent Segment.xmit_ts 658 in RACK.xmit_ts if it is ahead of RACK.xmit_ts. If Segment.xmit_ts 659 equals RACK.xmit_ts (e.g. due to clock granularity limits) then 660 compare Segment.end_seq and RACK.end_seq to break the tie. 
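   The two validity checks above can be expressed as a small predicate.
   The following non-normative Python sketch is a simplified
   illustration; the parameter names are assumptions, and it presumes
   that the timestamp echo value and the transmission times share the
   same clock and units.

      from typing import Optional

      # Illustrative only: decide whether a newly (S)ACKed segment may be
      # used to update RACK.rtt, RACK.xmit_ts, and RACK.end_seq.
      def rack_rtt_sample_is_valid(retransmitted: bool,
                                   xmit_ts: float,
                                   ack_tsecr: Optional[float],
                                   rtt: float,
                                   min_rtt: float) -> bool:
          if not retransmitted:
              return True    # never-retransmitted segments are always usable
          if ack_tsecr is not None and ack_tsecr < xmit_ts:
              return False   # ACK echoes a timestamp older than the last retransmission
          if rtt < min_rtt:
              return False   # heuristic: likely the ACK of the original transmission
          return True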
662 Step 2 may be summarized in pseudocode as: 664 RACK_sent_after(t1, seq1, t2, seq2): 665 If t1 > t2: 666 Return true 667 Else if t1 == t2 AND seq1 > seq2: 668 Return true 669 Else: 670 Return false 672 RACK_update(): 673 For each Segment newly acknowledged cumulatively or selectively 674 in ascending order of Segment.xmit_ts: 675 rtt = Now() - Segment.xmit_ts 676 If Segment.retransmitted is TRUE: 677 If ACK.ts_option.echo_reply < Segment.xmit_ts: 678 Continue 679 If rtt < RACK.min_rtt: 680 Continue 682 RACK.rtt = rtt 683 If RACK_sent_after(Segment.xmit_ts, Segment.end_seq, 684 RACK.xmit_ts, RACK.end_seq): 685 RACK.xmit_ts = Segment.xmit_ts 686 RACK.end_seq = Segment.end_seq 688 Step 3: Detect data segment reordering 690 To detect reordering, the sender looks for original data segments 691 being delivered out of order. To detect such cases, the sender 692 tracks the highest sequence selectively or cumulatively acknowledged 693 in the RACK.fack variable. The name "fack" stands for the most 694 "Forward ACK" (this term is adopted from [FACK]). If a never- 695 retransmitted segment that's below RACK.fack is (selectively or 696 cumulatively) acknowledged, it has been delivered out of order. The 697 sender sets RACK.reordering_seen to TRUE if such a segment is 698 identified. 700 RACK_detect_reordering(): 701 For each Segment newly acknowledged cumulatively or selectively 702 in ascending order of Segment.end_seq: 703 If Segment.end_seq > RACK.fack: 704 RACK.fack = Segment.end_seq 705 Else if Segment.end_seq < RACK.fack AND 706 Segment.retransmitted is FALSE: 707 RACK.reordering_seen = TRUE 709 Step 4: Update RACK reordering window 710 The RACK reordering window, RACK.reo_wnd, serves as an adaptive 711 allowance for settling time before marking a segment lost. This step 712 documents a detailed algorithm that follows the principles outlined 713 in the "Reordering window adaptation" section. 715 If no reordering has been observed, based on the previous step, then 716 one way the sender can enter Fast Recovery is when the number of 717 SACKed segments matches or exceeds DupThresh (similar to [RFC6675]). 718 Furthermore, when no reordering has been observed the RACK.reo_wnd is 719 set to 0 both upon entering and during Fast Recovery or RTO recovery. 721 Otherwise, if some reordering has been observed, then RACK does not 722 trigger Fast Recovery based on DupThresh. 724 Whether or not reordering has been observed, RACK uses the reordering 725 window to assess whether any segments can be marked lost. As a 726 consequence, the sender also enters Fast Recovery when there are any 727 number of SACKed segments as long as the reorder window has passed 728 for some non-SACKed segments. 730 When the reordering window is not set to 0, it starts with a 731 conservative RACK.reo_wnd of RACK.min_RTT/4. This value was chosen 732 because Linux TCP used the same factor in its implementation to delay 733 Early Retransmit [RFC5827] to reduce spurious loss detections in the 734 presence of reordering, and experience showed this worked reasonably 735 well [DMCG11]. 737 However, the reordering detection in the previous step, Step 3, has a 738 self-reinforcing drawback when the reordering window is too small to 739 cope with the actual reordering. When that happens, RACK could 740 spuriously mark reordered segments lost, causing them to be 741 retransmitted.
In turn, the retransmissions can prevent the 742 necessary conditions for Step 3 to detect reordering, since this 743 mechanism requires ACKs or SACKs for only segments that have never 744 been retransmitted. In some cases such scenarios can persist, 745 causing RACK to continue to spuriously mark segments lost without 746 realizing the reordering window is too small. 748 To avoid the issue above, RACK dynamically adapts to higher degrees 749 of reordering using DSACK options from the receiver. Receiving an 750 ACK with a DSACK option indicates a possible spurious retransmission, 751 suggesting that RACK.reo_wnd may be too small. The RACK.reo_wnd 752 increases linearly for every round trip in which the sender receives 753 some DSACK option, so that after N distinct round trips in which a 754 DSACK is received, the RACK.reo_wnd becomes (N+1) * min_RTT / 4, with 755 an upper-bound of SRTT. 757 If the reordering is temporary then a large adapted reordering window 758 would unnecessarily delay loss recovery later. Therefore, RACK 759 persists using the inflated RACK.reo_wnd for up to 16 loss 760 recoveries, after which it resets RACK.reo_wnd to its starting value, 761 min_RTT / 4. The downside of resetting the reordering window is the 762 risk of triggering spurious fast recovery episodes if the reordering 763 remains high. The rationale for this approach is to bound such 764 spurious recoveries to approximately once every 16 recoveries (less 765 than 7%). 767 To track the linear scaling factor for the adaptive reordering 768 window, RACK uses the variable RACK.reo_wnd_mult, which is 769 initialized to 1 and adapts with observed reordering. 771 The following pseudocode implements the above algorithm for updating 772 the RACK reordering window: 774 RACK_update_reo_wnd(): 776 /* DSACK-based reordering window adaptation */ 777 If RACK.dsack_round is not None AND 778 SND.UNA >= RACK.dsack_round: 779 RACK.dsack_round = None 780 /* Grow the reordering window per round that sees DSACK. 781 Reset the window after 16 DSACK-free recoveries */ 782 If RACK.dsack_round is None AND 783 any DSACK option is present on latest received ACK: 784 RACK.dsack_round = SND.NXT 785 RACK.reo_wnd_mult += 1 786 RACK.reo_wnd_persist = 16 787 Else if exiting Fast or RTO recovery: 788 RACK.reo_wnd_persist -= 1 789 If RACK.reo_wnd_persist <= 0: 790 RACK.reo_wnd_mult = 1 792 If RACK.reordering_seen is FALSE: 793 If in Fast or RTO recovery: 794 Return 0 795 Else if RACK.segs_sacked >= DupThresh: 796 Return 0 797 Return min(RACK.min_RTT / 4 * RACK.reo_wnd_mult, SRTT) 799 Step 5: Detect losses. 801 For each segment that has not been SACKed, RACK considers that 802 segment lost if another segment that was sent later has been 803 delivered, and the reordering window has passed. RACK considers the 804 reordering window to have passed if the RACK.segment was sent 805 sufficiently after the segment in question, or a sufficient time has 806 elapsed since the RACK.segment was S/ACKed, or some combination of 807 the two. More precisely, RACK marks a segment lost if: 809 RACK.xmit_ts >= Segment.xmit_ts 810 AND 811 RACK.xmit_ts - Segment.xmit_ts + (now - RACK.ack_ts) >= RACK.reo_wnd 813 Solving this second condition for "now", the moment at which a 814 segment is marked lost, yields: 816 now >= Segment.xmit_ts + RACK.reo_wnd + (RACK.ack_ts - RACK.xmit_ts) 818 Then (RACK.ack_ts - RACK.xmit_ts) is the round trip time of the most 819 recently (re)transmitted segment that's been delivered. 
When 820 segments are delivered in order, the most recently (re)transmitted 821 segment that's been delivered is also the most recently delivered, 822 hence RACK.rtt == RACK.ack_ts - RACK.xmit_ts. But if segments were 823 reordered, then the segment delivered most recently was sent before 824 the most recently (re)transmitted segment. Hence RACK.rtt > 825 (RACK.ack_ts - RACK.xmit_ts). 827 Since RACK.RTT >= (RACK.ack_ts - RACK.xmit_ts), the previous equation 828 reduces to saying that the sender can declare a segment lost when: 830 now >= Segment.xmit_ts + RACK.reo_wnd + RACK.rtt 832 In turn, that is equivalent to stating that a RACK sender should 833 declare a segment lost when: 835 Segment.xmit_ts + RACK.rtt + RACK.reo_wnd - now <= 0 837 Note that if the value on the left hand side is positive, it 838 represents the remaining wait time before the segment is deemed lost. 839 But this risks a timeout (RTO) if no more ACKs come back (e.g., due 840 to losses or application-limited transmissions) to trigger the 841 marking. For timely loss detection, the sender is RECOMMENDED to 842 install a reordering timer. This timer expires at the earliest 843 moment when RACK would conclude that all the unacknowledged segments 844 within the reordering window were lost. 846 The following pseudocode implements the algorithm above. When an ACK 847 is received or the RACK reordering timer expires, call 848 RACK_detect_loss_and_arm_timer(). The algorithm breaks timestamp 849 ties by using the TCP sequence space, since high-speed networks often 850 have multiple segments with identical timestamps. 852 RACK_detect_loss(): 853 timeout = 0 854 RACK.reo_wnd = RACK_update_reo_wnd() 855 For each segment, Segment, not acknowledged yet: 856 If RACK_sent_after(RACK.xmit_ts, RACK.end_seq, 857 Segment.xmit_ts, Segment.end_seq): 858 remaining = Segment.xmit_ts + RACK.rtt + 859 RACK.reo_wnd - Now() 860 If remaining <= 0: 861 Segment.lost = TRUE 862 Segment.xmit_ts = INFINITE_TS 863 Else: 864 timeout = max(remaining, timeout) 865 Return timeout 867 RACK_detect_loss_and_arm_timer(): 868 timeout = RACK_detect_loss() 869 If timeout != 0 870 Arm the RACK timer to call 871 RACK_detect_loss_and_arm_timer() after timeout 873 As an optimization, an implementation can choose to check only 874 segments that have been sent before RACK.xmit_ts. This can be more 875 efficient than scanning the entire SACK scoreboard, especially when 876 there are many segments in flight. The implementation can use a 877 separate doubly-linked list ordered by Segment.xmit_ts and inserts a 878 segment at the tail of the list when it is (re)transmitted, and 879 removes a segment from the list when it is delivered or marked lost. 880 In Linux TCP this optimization improved CPU usage by orders of 881 magnitude during some fast recovery episodes on high-speed WAN 882 networks. 884 6.3. Upon RTO expiration 886 Upon RTO timer expiration, RACK marks the first outstanding segment 887 as lost (since it was sent an RTO ago); for all the other segments 888 RACK only marks the segment lost if the time elapsed since the 889 segment was transmitted is at least the sum of the recent RTT and the 890 reordering window. 892 RACK_mark_losses_on_RTO(): 893 For each segment, Segment, not acknowledged yet: 894 If SEG.SEQ == SND.UNA OR 895 Segment.xmit_ts + RACK.rtt + RACK.reo_wnd - Now() <= 0: 896 Segment.lost = TRUE 898 7. TLP Algorithm Details 900 7.1. 
Initializing state 902 Reset TLP.is_retrans and TLP.end_seq when initiating a connection, 903 fast recovery, or RTO recovery. 905 TLP_init(): 906 TLP.end_seq = None 907 TLP.is_retrans = false 909 7.2. Scheduling a loss probe 911 The sender schedules a loss probe timeout (PTO) to transmit a segment 912 during the normal transmission process. The sender SHOULD start or 913 restart a loss probe PTO timer after transmitting new data (that was 914 not itself a loss probe) or upon receiving an ACK that cumulatively 915 acknowledges new data, unless it is already in fast recovery, RTO 916 recovery, or the sender has segments delivered out-of-order (i.e. 917 RACK.segs_sacked is not zero). These conditions are excluded because 918 they are addressed by similar mechanisms, like Limited Transmit 919 [RFC3042], the RACK reordering timer, and F-RTO [RFC5682]. 921 The sender calculates the PTO interval by taking into account a 922 number of factors. 924 First, the default PTO interval is 2*SRTT. By that time, it is 925 prudent to declare that an ACK is overdue, since under normal 926 circumstances, i.e. no losses, an ACK typically arrives in one SRTT. 927 Choosing PTO to be exactly an SRTT would risk causing spurious 928 probes, given that network and end-host delay variance can cause an 929 ACK to be delayed beyond SRTT. Hence the PTO is conservatively 930 chosen to be the next integral multiple of SRTT. 932 Second, when there is no SRTT estimate available, the PTO SHOULD be 1 933 second. This conservative value corresponds to the RTO value when no 934 SRTT is available, per [RFC6298]. 936 Third, when FlightSize is one segment, the sender MAY inflate PTO by 937 TLP.max_ack_delay to accommodate a potential delayed acknowledgment 938 and reduce the risk of spurious retransmissions. The actual value of 939 TLP.max_ack_delay is implementation-specific. 941 Finally, if the time at which an RTO would fire (here denoted 942 "TCP_RTO_expiration()") is sooner than the computed time for the PTO, 943 then the sender schedules a TLP to be sent at that RTO time. 945 Summarizing these considerations in pseudocode form, a sender SHOULD 946 use the following logic to select the duration of a PTO: 948 TLP_calc_PTO(): 949 If SRTT is available: 950 PTO = 2 * SRTT 951 If FlightSize is one segment: 952 PTO += TLP.max_ack_delay 953 Else: 954 PTO = 1 sec 956 If Now() + PTO > TCP_RTO_expiration(): 957 PTO = TCP_RTO_expiration() - Now() 959 7.3. Sending a loss probe upon PTO expiration 961 When the PTO timer expires, the sender SHOULD transmit a previously 962 unsent data segment, if the receive window allows, and increment the 963 FlightSize accordingly. Note that FlightSize could be one packet 964 greater than the congestion window temporarily until the next ACK 965 arrives. 967 If such a segment is not available, then the sender SHOULD retransmit 968 the highest-sequence segment sent so far and set TLP.is_retrans to 969 true. This segment is chosen to deal with the retransmission 970 ambiguity problem in TCP. Suppose a sender sends N segments, and 971 then retransmits the last segment (segment N) as a loss probe, and 972 then the sender receives a SACK for segment N. As long as the sender 973 waits for the RACK reordering window to expire, it doesn't matter if 974 that SACK was for the original transmission of segment N or the TLP 975 retransmission; in either case the arrival of the SACK for segment N 976 provides evidence that the N-1 segments preceding segment N were 977 likely lost. 
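   The inference described above can be demonstrated with a toy,
   non-normative Python model. It deliberately ignores the reordering
   window and RTT bookkeeping, and the function name and parameters are
   illustrative only.

      # Toy model: a flight of segments 1..N is sent, and a TLP probe
      # retransmits segment N.  The SACK that the probe solicits leaves
      # every earlier, still-unacknowledged segment as a loss candidate
      # for RACK, exactly as described above.
      def segments_implicated_by_probe_ack(n: int, sacked: set) -> list:
          return [seq for seq in range(1, n) if seq not in sacked]

      # N = 5 and only the probe (segment 5) is SACKed: segments 1..4
      # are implicated.  With N = 1 no earlier segments exist.
      assert segments_implicated_by_probe_ack(5, {5}) == [1, 2, 3, 4]
      assert segments_implicated_by_probe_ack(1, {1}) == []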
979 In the case where there is only one original outstanding segment of 980 data (N=1), the same logic (trivially) applies: an ACK for a single 981 outstanding segment tells the sender the N-1=0 segments preceding 982 that segment were lost. Furthermore, whether there are N>1 or N=1 983 outstanding segments, there is a question about whether the original 984 last segment or its TLP retransmission was lost; the sender 985 estimates whether there was such a loss using TLP recovery detection 986 (see below). 988 The sender MUST follow the RACK transmission procedures in the "Upon 989 transmitting a data segment" section (see above) upon sending either 990 a retransmission or new data loss probe. This is critical for 991 detecting losses using the ACK for the loss probe. Furthermore, 992 prior to sending a loss probe, the sender MUST check that there is no 993 other previous loss probe still in flight. This ensures that at any 994 given time the sender has at most one additional packet in flight 995 beyond the congestion window limit. This invariant is maintained 996 using the state variable TLP.end_seq, which indicates the latest 997 unacknowledged TLP loss probe's ending sequence. It is reset when 998 the loss probe has been acknowledged or is deemed lost or irrelevant. 999 After attempting to send a loss probe, regardless of whether a loss 1000 probe was sent, the sender MUST re-arm the RTO timer, not the PTO 1001 timer, if FlightSize is not zero. This ensures RTO recovery remains 1002 the last resort if TLP fails. The following pseudocode summarizes 1003 the operations. 1005 TLP_send_probe(): 1007 If TLP.end_seq is None: 1008 TLP.is_retrans = false 1009 Segment = send buffer segment starting at SND.NXT 1010 If Segment exists and fits the peer receive window limit: 1011 /* Transmit the lowest-sequence unsent Segment */ 1012 Transmit Segment 1013 RACK_transmit_new_data(Segment) 1014 TLP.end_seq = SND.NXT 1015 Increase FlightSize by Segment length 1016 Else: 1017 /* Retransmit the highest-sequence Segment sent */ 1018 Segment = send buffer segment ending at SND.NXT 1019 Transmit Segment 1020 RACK_retransmit_data(Segment) TLP.is_retrans = true 1021 TLP.end_seq = SND.NXT 1023 7.4. Detecting losses using the ACK of the loss probe 1025 When there is packet loss in a flight ending with a loss probe, the 1026 feedback solicited by a loss probe will reveal one of two scenarios, 1027 depending on the pattern of losses. 1029 7.4.1. General case: detecting packet losses using RACK 1031 If the loss probe and the ACK that acknowledges the probe are 1032 delivered successfully, RACK-TLP uses this ACK -- just as it would 1033 with any other ACK -- to detect if any segments sent prior to the 1034 probe were dropped. RACK would typically infer that any 1035 unacknowledged data segments sent before the loss probe were lost, 1036 since they were sent sufficiently far in the past (at least one PTO 1037 has elapsed, plus one round-trip for the loss probe to be ACKed). 1038 More specifically, RACK_detect_loss() (step 5) would mark those 1039 earlier segments as lost. Then the sender would trigger a fast 1040 recovery to recover those losses. 1042 7.4.2. Special case: detecting a single loss repaired by the loss probe 1044 If the TLP retransmission repairs all the lost in-flight sequence 1045 ranges (i.e. only the last segment in the flight was lost), the ACK 1046 for the loss probe appears to be a regular cumulative ACK, which 1047 would not normally trigger the congestion control response to this 1048 packet loss event.
The following TLP recovery detection mechanism 1049 examines ACKs to detect this special case to make congestion control 1050 respond properly [RFC5681]. 1052 After a TLP retransmission, the sender checks for this special case 1053 of a single loss that is recovered by the loss probe itself. To 1054 accomplish this, the sender checks for a duplicate ACK or DSACK 1055 indicating that both the original segment and TLP retransmission 1056 arrived at the receiver, meaning there was no loss. If the TLP 1057 sender does not receive such an indication, then it MUST assume that 1058 either the original data segment, the TLP retransmission, or a 1059 corresponding ACK were lost, for congestion control purposes. 1061 If the TLP retransmission is spurious, a receiver that uses DSACK 1062 would return an ACK that covers TLP.end_seq with a DSACK option (Case 1063 1). If the receiver does not support DSACK, it would return a DUPACK 1064 without any SACK option (Case 2). If the sender receives an ACK 1065 matching either case, then the sender estimates that the receiver 1066 received both the original data segment and the TLP probe 1067 retransmission, and so the sender considers the TLP episode to be 1068 done, and records that fact by setting TLP.end_seq to None. 1070 Upon receiving an ACK that covers some sequence number after 1071 TLP.end_seq, the sender should have received any ACKs for the 1072 original segment and TLP probe retransmission segment. At that time, 1073 if the TLP.end_seq is still set, and thus indicates that the TLP 1074 probe retransmission remains unacknowledged, then the sender should 1075 presume that at least one of its data segments was lost. The sender 1076 then SHOULD invoke a congestion control response equivalent to a fast 1077 recovery. 1079 More precisely, on each ACK the sender executes the following: 1081 TLP_process_ack(ACK): 1082 If TLP.end_seq is not None AND ACK's ack. number >= TLP.end_seq: 1083 If not TLP.is_retrans: 1084 TLP.end_seq = None /* TLP of new data delivered */ 1085 Else if ACK has a DSACK option matching TLP.end_seq: 1086 TLP.end_seq = None /* Case 1, above */ 1087 Else If ACK's ack. number > TLP.end_seq: 1088 TLP.end_seq = None /* Repaired the single loss */ 1089 (Invoke congestion control to react to 1090 the loss event the probe has repaired) 1091 Else If ACK is a DUPACK without any SACK option: 1092 TLP.end_seq = None /* Case 2, above */ 1094 8. Managing RACK-TLP timers 1096 The RACK reordering, the TLP PTO timer, the RTO and Zero Window Probe 1097 (ZWP) timer [RFC793] are mutually exclusive and used in different 1098 scenarios. When arming a RACK reordering timer or TLP PTO timer, the 1099 sender SHOULD cancel any other pending timer(s). An implementation 1100 is to have one timer with an additional state variable indicating the 1101 type of the timer. 1103 9. Discussion 1105 9.1. Advantages and disadvantages 1107 The biggest advantage of RACK-TLP is that every data segment, whether 1108 it is an original data transmission or a retransmission, can be used 1109 to detect losses of the segments sent chronologically prior to it. 1110 This enables RACK-TLP to use fast recovery in cases with application- 1111 limited flights of data, lost retransmissions, or data segment 1112 reordering events. Consider the following examples: 1114 1. Packet drops at the end of an application data flight: Consider a 1115 sender that transmits an application-limited flight of three data 1116 segments (P1, P2, P3), and P1 and P3 are lost. 
Suppose the 1117 transmission of each segment is at least RACK.reo_wnd after the 1118 transmission of the previous segment. RACK will mark P1 as lost 1119 when the SACK of P2 is received, and this will trigger the 1120 retransmission of P1 as R1. When R1 is cumulatively 1121 acknowledged, RACK will mark P3 as lost and the sender will 1122 retransmit P3 as R3. This example illustrates how RACK is able 1123 to repair certain drops at the tail of a transaction without an 1124 RTO recovery. Notice that neither the conventional duplicate ACK 1125 threshold [RFC5681], nor [RFC6675], nor the Forward 1126 Acknowledgment [FACK] algorithm can detect such losses, because 1127 of the required segment or sequence count. 1129 2. Lost retransmission: Consider a flight of three data segments 1130 (P1, P2, P3) that are sent; P1 and P2 are dropped. Suppose the 1131 transmission of each segment is at least RACK.reo_wnd after the 1132 transmission of the previous segment. When P3 is SACKed, RACK 1133 will mark P1 and P2 lost and they will be retransmitted as R1 and 1134 R2. Suppose R1 is lost again but R2 is SACKed; RACK will mark R1 1135 lost and trigger retransmission again. Again, neither the 1136 conventional three duplicate ACK threshold approach, nor 1137 [RFC6675], nor the Forward Acknowledgment [FACK] algorithm can 1138 detect such losses. And such a lost retransmission can happen 1139 when TCP is being rate-limited, particularly by token bucket 1140 policers with large bucket depth and low rate limit; in such 1141 cases retransmissions are often lost repeatedly because standard 1142 congestion control requires multiple round trips to reduce the 1143 rate below the policed rate. 1145 3. Packet reordering: Consider a simple reordering event where a 1146 flight of segments are sent as (P1, P2, P3). P1 and P2 carry a 1147 full payload of MSS octets, but P3 has only a 1-octet payload. 1148 Suppose the sender has detected reordering previously and thus 1149 RACK.reo_wnd is min_RTT/4. Now P3 is reordered and delivered 1150 first, before P1 and P2. As long as P1 and P2 are delivered 1151 within min_RTT/4, RACK will not consider P1 and P2 lost. But if 1152 P1 and P2 are delivered outside the reordering window, then RACK 1153 will still spuriously mark P1 and P2 lost. 1155 The examples above show that RACK-TLP is particularly useful when the 1156 sender is limited by the application, which can happen with 1157 interactive or request/response traffic. Similarly, RACK still works 1158 when the sender is limited by the receive window, which can happen 1159 with applications that use the receive window to throttle the sender. 1161 RACK-TLP works more efficiently with TCP Segmentation Offload (TSO) 1162 compared to DUPACK-counting. RACK always marks the entire TSO 1163 aggregate lost because the segments in the same TSO aggregate have 1164 the same transmission timestamp. By contrast, the algorithms based 1165 on sequence counting (e.g., [RFC6675] [RFC5681]) may mark only a 1166 subset of segments in the TSO aggregate lost, forcing the stack to 1167 perform expensive fragmentation of the TSO aggregate, or to 1168 selectively tag individual segments lost in the scoreboard. 1170 The main drawback of RACK-TLP is the additional states required 1171 compared to DUPACK-counting. RACK requires the sender to record the 1172 transmission time of each segment sent at a clock granularity that is 1173 finer than 1/4 of the minimum RTT of the connection. 
9.2. Relationships with other loss recovery algorithms

The primary motivation of RACK-TLP is to provide a general alternative to some of the standard loss recovery algorithms [RFC5681] [RFC6675] [RFC5827] [RFC4653]. [RFC5827] and [RFC4653] dynamically adjust the duplicate ACK threshold based on the current or previous flight sizes. RACK-TLP takes a different approach by using a time-based reordering window. RACK-TLP can be seen as an extended Early Retransmit [RFC5827] without a FlightSize limit but with an additional reordering window. [FACK] considers an original segment to be lost when its sequence range is sufficiently far below the highest SACKed sequence. In some sense RACK-TLP can be seen as a generalized form of FACK that operates in time space instead of sequence space, enabling it to better handle reordering, application-limited traffic, and lost retransmissions.

RACK-TLP is compatible with the standard RTO [RFC6298], RTO-restart [RFC7765], F-RTO [RFC5682], and Eifel [RFC3522] algorithms. This is because RACK-TLP only detects loss by using ACK events. It neither changes the RTO timer calculation nor detects spurious RTOs. RACK-TLP slightly changes the retransmission behavior of [RFC6298] by preceding the RTO with TLP, reducing potential spurious retransmissions after an RTO.

9.3. Interaction with congestion control

RACK-TLP intentionally decouples loss detection from congestion control. RACK-TLP only detects losses; it does not modify the congestion control algorithm [RFC5681] [RFC6937]. A segment marked lost by RACK-TLP MUST NOT be retransmitted until congestion control deems this appropriate.

The only exception -- the only way in which RACK-TLP modulates the congestion control algorithm -- is that one outstanding loss probe can be sent even if the congestion window is fully used. However, this temporary over-commit is accounted for and credited in the in-flight data tracked for congestion control, so that congestion control will erase the over-commit upon the next ACK.

If packet losses happen after the reordering window has been increased by DSACK, RACK-TLP may take longer to detect losses than the pure DUPACK-counting approach. In this case TCP may continue to increase the congestion window upon receiving ACKs during this time, making the sender more aggressive.

The following simple example compares how RACK-TLP and non-RACK-TLP loss detection interact with congestion control: suppose a sender has a congestion window (cwnd) of 20 segments on a SACK-enabled connection. It sends 10 data segments and all of them are lost.

Without RACK-TLP, the sender would time out, reset cwnd to 1, and retransmit the first segment. It would take four round trips (1 + 2 + 4 + 3 = 10) to retransmit all 10 lost segments using slow start. The recovery latency would be RTO + 4*RTT, with an ending cwnd of 4 segments due to congestion window validation.

With RACK-TLP, a sender would send the TLP after 2*RTT and get a DUPACK, enabling RACK to detect the losses and trigger fast recovery. If the sender implements Proportional Rate Reduction [RFC6937], it would slow start to retransmit the remaining 9 lost segments, since the number of segments in flight (0) is lower than the slow start threshold (10). The slow start would again take four round trips (1 + 2 + 4 + 3 = 10) to retransmit all the lost segments. The recovery latency would be 2*RTT + 4*RTT, with an ending cwnd set to the slow start threshold of 10 segments.

The difference in recovery latency (RTO + 4*RTT vs. 6*RTT) can be significant if the RTT is much smaller than the minimum RTO (1 second in [RFC6298]) or if the RTT is large. The former case can happen in local area networks, data-center networks, or content distribution networks with deep deployments. The latter case can happen in developing regions with highly congested and/or high-latency networks. A worked comparison with concrete numbers is sketched below.
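To make that difference concrete, the short C sketch below evaluates the two recovery latency expressions from the example above using illustrative numbers that are not taken from this document: an RTT of 20 ms (a plausible wide-area value) and the 1-second minimum RTO of [RFC6298].

   #include <stdio.h>

   int main(void)
   {
       /* Illustrative values only: an assumed 20 ms RTT and the
        * 1-second minimum RTO from [RFC6298]. */
       const double rtt_ms = 20.0;
       const double rto_ms = 1000.0;

       /* Without RACK-TLP: RTO expiry, then 4 slow-start round trips
        * (1 + 2 + 4 + 3 = 10 retransmitted segments). */
       double without_rack_tlp = rto_ms + 4.0 * rtt_ms;

       /* With RACK-TLP: the PTO fires after about 2*RTT, the probe's
        * ACK triggers fast recovery, then 4 round trips of slow start. */
       double with_rack_tlp = 2.0 * rtt_ms + 4.0 * rtt_ms;

       printf("without RACK-TLP: %.0f ms\n", without_rack_tlp); /* 1080 ms */
       printf("with    RACK-TLP: %.0f ms\n", with_rack_tlp);    /*  120 ms */
       return 0;
   }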
9.4. TLP recovery detection with delayed ACKs

Delayed or stretched ACKs complicate the detection of repairs done by TLP, since with such ACKs the sender takes longer to receive fewer ACKs than would normally be expected. To mitigate this complication, before sending a TLP loss probe retransmission, the sender should attempt to wait long enough that the receiver has sent any delayed ACKs that it is withholding. The sender algorithm described above features such a delay, in the form of TLP.max_ack_delay. Furthermore, if the receiver supports DSACK, then in the case of a delayed ACK the sender's TLP recovery detection mechanism (see above) can use the DSACK information to infer that the original and TLP retransmission both arrived at the receiver.

If there is ACK loss or a delayed ACK without a DSACK, then this algorithm is conservative, because the sender will reduce the congestion window when in fact there was no packet loss. In practice this is acceptable, and potentially even desirable: if there is reverse path congestion, then reducing the congestion window can be prudent.

9.5. RACK for other transport protocols

RACK can be implemented in other transport protocols (e.g., [QUICLR]). The [Sprout] loss detection algorithm was also independently designed to use a 10 ms reordering window to improve its loss detection.

10. Security Considerations

RACK-TLP algorithm behavior is based on information conveyed in SACK options, so it has security considerations similar to those described in the Security Considerations section of [RFC6675].

Additionally, RACK-TLP has a lower risk profile than [RFC6675] because it is not vulnerable to ACK-splitting attacks [SCWA99], in which, for an MSS-sized segment sent, the receiver or the attacker might send MSS ACKs that SACK or acknowledge one additional byte per ACK. This would not fool RACK. In such a scenario, RACK.xmit_ts would not advance, because all the sequence ranges within the segment were transmitted at the same time and thus carry the same transmission timestamp. In other words, SACKing only one byte of a segment or SACKing the segment in its entirety has the same effect on RACK, as the sketch below illustrates.
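A minimal C sketch of that reasoning, assuming a per-segment transmission timestamp as used by RACK: because every SACKed byte of a segment maps back to the same segment timestamp, splitting the acknowledgment into many one-byte SACKs advances RACK.xmit_ts no further than acknowledging the segment once. The simplified rack_update function and the structure names are hypothetical and omit details such as sequence tie-breaking.

   #include <stdint.h>
   #include <stdio.h>

   /* Hypothetical per-segment record: one transmission timestamp
    * covers the segment's entire sequence range. */
   struct segment {
       uint32_t start_seq;
       uint32_t end_seq;
       uint64_t xmit_ts_us;
   };

   /* RACK state advanced by newly (S)ACKed data. */
   struct rack_state {
       uint64_t xmit_ts_us;   /* most recent transmission time ACKed */
   };

   /* Any (S)ACK covering any part of the segment contributes the same
    * xmit_ts, so the update does not depend on how many octets each
    * SACK block covers. */
   static void rack_update(struct rack_state *r, const struct segment *s)
   {
       if (s->xmit_ts_us > r->xmit_ts_us)
           r->xmit_ts_us = s->xmit_ts_us;
   }

   int main(void)
   {
       struct segment seg = { 1000, 1000 + 1460, 500000 };  /* one MSS */
       struct rack_state whole = { 0 }, split = { 0 };

       /* One SACK covering the whole segment ... */
       rack_update(&whole, &seg);

       /* ... versus 1460 one-byte SACKs of the same segment. */
       for (uint32_t i = 0; i < 1460; i++)
           rack_update(&split, &seg);

       printf("RACK.xmit_ts whole=%llu split=%llu\n",
              (unsigned long long)whole.xmit_ts_us,
              (unsigned long long)split.xmit_ts_us);
       return 0;
   }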
11. IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an RFC.

12. Acknowledgments

The authors thank Matt Mathis for his insights on FACK and Michael Welzl for his per-packet timer idea that inspired this work. Eric Dumazet, Randy Stewart, Van Jacobson, Ian Swett, Rick Jones, Jana Iyengar, Hiren Panchasara, Praveen Balasubramanian, Yoshifumi Nishida, Bob Briscoe, Felix Weinrank, Michael Tuexen, Martin Duke, Ilpo Jarvinen, Theresa Enghardt, Mirja Kuehlewind, Gorry Fairhurst, and Yi Huang contributed to the draft or the implementations in Linux, FreeBSD, Windows, and QUIC.

13. References

13.1. Normative References

[RFC2018] Mathis, M. and J. Mahdavi, "TCP Selective Acknowledgment Options", RFC 2018, October 1996.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", RFC 2119, March 1997.

[RFC2883] Floyd, S., Mahdavi, J., Mathis, M., and M. Podolsky, "An Extension to the Selective Acknowledgement (SACK) Option for TCP", RFC 2883, July 2000.

[RFC3042] Allman, M., Balakrishnan, H., and S. Floyd, "Enhancing TCP's Loss Recovery Using Limited Transmit", RFC 3042, January 2001.

[RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion Control", RFC 5681, September 2009.

[RFC6298] Paxson, V., Allman, M., Chu, J., and M. Sargent, "Computing TCP's Retransmission Timer", RFC 6298, June 2011.

[RFC6675] Blanton, E., Allman, M., Wang, L., Jarvinen, I., Kojo, M., and Y. Nishida, "A Conservative Loss Recovery Algorithm Based on Selective Acknowledgment (SACK) for TCP", RFC 6675, August 2012.

[RFC7323] Borman, D., Braden, B., Jacobson, V., and R. Scheffenegger, "TCP Extensions for High Performance", RFC 7323, September 2014.

[RFC793] Postel, J., "Transmission Control Protocol", RFC 793, September 1981.

[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", RFC 8174, May 2017.

13.2. Informative References

[DMCG11] Dukkipati, N., Mathis, M., Cheng, Y., and M. Ghobadi, "Proportional Rate Reduction for TCP", ACM SIGCOMM Conference on Internet Measurement, 2011.

[FACK] Mathis, M. and J. Mahdavi, "Forward Acknowledgement: Refining TCP Congestion Control", ACM SIGCOMM Computer Communication Review, Volume 26, Issue 4, October 1996.

[POLICER16] Flach, T., Papageorge, P., Terzis, A., Pedrosa, L., Cheng, Y., Karim, T., Katz-Bassett, E., and R. Govindan, "An Analysis of Traffic Policing in the Web", ACM SIGCOMM, 2016.

[QUICLR] Iyengar, J. and I. Swett, "QUIC Loss Detection and Congestion Control", draft-ietf-quic-recovery (work in progress), October 2020.

[RFC3522] Ludwig, R. and M. Meyer, "The Eifel Detection Algorithm for TCP", RFC 3522, April 2003.

[RFC4653] Bhandarkar, S., Reddy, A., Allman, M., and E. Blanton, "Improving the Robustness of TCP to Non-Congestion Events", RFC 4653, August 2006.
[RFC5682] Sarolahti, P., Kojo, M., Yamamoto, K., and M. Hata, "Forward RTO-Recovery (F-RTO): An Algorithm for Detecting Spurious Retransmission Timeouts with TCP", RFC 5682, September 2009.

[RFC5827] Allman, M., Ayesta, U., Wang, L., Blanton, J., and P. Hurtig, "Early Retransmit for TCP and Stream Control Transmission Protocol (SCTP)", RFC 5827, April 2010.

[RFC6937] Mathis, M., Dukkipati, N., and Y. Cheng, "Proportional Rate Reduction for TCP", RFC 6937, May 2013.

[RFC7765] Hurtig, P., Brunstrom, A., Petlund, A., and M. Welzl, "TCP and SCTP RTO Restart", RFC 7765, February 2016.

[SCWA99] Savage, S., Cardwell, N., Wetherall, D., and T. Anderson, "TCP Congestion Control With a Misbehaving Receiver", ACM Computer Communication Review, 29(5), 1999.

[Sprout] Winstein, K., Sivaraman, A., and H. Balakrishnan, "Stochastic Forecasts Achieve High Throughput and Low Delay over Cellular Networks", USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2013.

Authors' Addresses

Yuchung Cheng
Google, Inc

Email: ycheng@google.com

Neal Cardwell
Google, Inc

Email: ncardwell@google.com

Nandita Dukkipati
Google, Inc

Email: nanditad@google.com

Priyaranjan Jha
Google, Inc

Email: priyarjha@google.com