Network Working Group                                V. Paxson, Editor
Internet Draft                                               M. Allman
                                                             S. Dawson
                                                             J. Griner
                                                            I. Heavens
                                                              K. Lahey
                                                              J. Semke
                                                               B. Volz
Expiration Date: February 1999                             August 1998

                   Known TCP Implementation Problems

1. Status of this Memo

   This document is an Internet Draft.  Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months, and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet Drafts
   as reference material or to cite them other than as ``work in
   progress''.
   To view the entire list of current Internet-Drafts, please check
   the "1id-abstracts.txt" listing contained in the Internet-Drafts
   Shadow Directories on ftp.is.co.za (Africa), ftp.nordu.net
   (Northern Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au
   (Pacific Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu (US
   West Coast).

   This memo provides information for the Internet community.  This
   memo does not specify an Internet standard of any kind.
   Distribution of this memo is unlimited.

2. Introduction

   This memo catalogs a number of known TCP implementation problems.
   The goal in doing so is to improve conditions in the existing
   Internet by enhancing the quality of current TCP/IP
   implementations.  It is hoped that both performance and
   correctness issues can be resolved by making implementors aware of
   the problems and their solutions.  In the long term, it is hoped
   that this will provide a reduction in unnecessary traffic on the
   network, the rate of connection failures due to protocol errors,
   and load on network servers due to time spent processing both
   unsuccessful connections and retransmitted data.  This will help
   to ensure the stability of the global Internet.

   Each problem is defined as follows:

   Name of Problem
      The name associated with the problem.  In this memo, the name
      is given as a subsection heading.

   Classification
      One or more problem categories for which the problem is
      classified.  Categories used so far: "congestion control",
      "performance", "reliability", "resource management".  Others
      anticipated: "security", "interoperability", "configuration".

   Description
      A definition of the problem, succinct but including necessary
      background material.

   Significance
      A brief summary of the sorts of environments for which the
      problem is significant.

   Implications
      Why the problem is viewed as a problem.

   Relevant RFCs
      Brief discussion of the RFCs with respect to which the problem
      is viewed as an implementation error.  These RFCs often qualify
      behavior using terms such as MUST, SHOULD, MAY, and others
      written capitalized.  See RFC 2119 for the exact interpretation
      of these terms.

   Trace file demonstrating the problem
      One or more ASCII trace files demonstrating the problem, if
      applicable.  These may in the future be replaced with URLs to
      on-line traces.

   Trace file demonstrating correct behavior
      One or more examples of how correct behavior appears in a
      trace, if applicable.  These may in the future be replaced with
      URLs to on-line traces.

   References
      References that further discuss the problem.

   How to detect
      How to test an implementation to see if it exhibits the
      problem.  This discussion may include difficulties and
      subtleties associated with causing the problem to manifest
      itself, and with interpreting traces to detect the presence of
      the problem (if applicable).  In the future, this may include
      URLs for diagnostic tools.

   How to fix
      For known causes of the problem, how to correct the
      implementation.

   Implementation specifics
      If it is viewed as beneficial to document particular
      implementations exhibiting the problem, and if the
      corresponding implementors approve, then this section gives the
      specifics of those implementations, along with a contact
      address for the implementors.
3. Known implementation problems

3.1.

   Name of Problem
      No initial slow start

   Classification
      Congestion control

   Description
      When a TCP begins transmitting data, it is required by RFC
      1122, 4.2.2.15, to engage in a "slow start" by initializing its
      congestion window, cwnd, to one packet (one segment of the
      maximum size).  (Note that an experimental change to TCP,
      documented in [Allman98], allows an initial value somewhat
      larger than one packet.)  It subsequently increases cwnd by one
      packet for each ACK it receives for new data.  The minimum of
      cwnd and the receiver's advertised window bounds the highest
      sequence number the TCP can transmit.  A TCP that fails to
      initialize and increment cwnd in this fashion exhibits "No
      initial slow start".

   Significance
      In congested environments, detrimental to the performance of
      other connections, and possibly to the connection itself.

   Implications
      A TCP failing to slow start when beginning a connection results
      in traffic bursts that can stress the network, leading to
      excessive queueing delays and packet loss.

      Implementations exhibiting this problem might do so because
      they suffer from the general problem of not including the
      required congestion window.  These implementations will also
      suffer from "No slow start after retransmission timeout".

      There are different shades of "No initial slow start".  From
      the perspective of stressing the network, the worst is a
      connection that simply always sends based on the receiver's
      advertised window, with no notion of a separate congestion
      window.  Another form is described in "Uninitialized CWND"
      below.

   Relevant RFCs
      RFC 1122 requires use of slow start.  RFC 2001 gives the
      specifics of slow start.

   Trace file demonstrating it
      Made using tcpdump/BPF recording at the connection responder.
      No losses reported.

      10:40:42.244503 B > A: S 1168512000:1168512000(0) win 32768
                             (DF) [tos 0x8]
      10:40:42.259908 A > B: S 3688169472:3688169472(0)
                             ack 1168512001 win 32768
      10:40:42.389992 B > A: . ack 1 win 33580 (DF) [tos 0x8]
      10:40:42.664975 A > B: P 1:513(512) ack 1 win 32768
      10:40:42.700185 A > B: . 513:1973(1460) ack 1 win 32768
      10:40:42.718017 A > B: . 1973:3433(1460) ack 1 win 32768
      10:40:42.762945 A > B: . 3433:4893(1460) ack 1 win 32768
      10:40:42.811273 A > B: . 4893:6353(1460) ack 1 win 32768
      10:40:42.829149 A > B: . 6353:7813(1460) ack 1 win 32768
      10:40:42.853687 B > A: . ack 1973 win 33580 (DF) [tos 0x8]
      10:40:42.864031 B > A: . ack 3433 win 33580 (DF) [tos 0x8]

      After the third packet, the connection is established.  A, the
      connection responder, begins transmitting to B, the connection
      initiator.  Host A quickly sends 6 packets comprising 7812
      bytes, even though the SYN exchange agreed upon an MSS of 1460
      bytes (implying an initial congestion window of 1 segment
      corresponds to 1460 bytes), and so A should have sent at most
      1460 bytes.

      The ACKs sent by B to A in the last two lines indicate that
      this trace is not a measurement error (slow start really
      occurring but the corresponding ACKs having been dropped by the
      packet filter).

      A second trace confirmed that the problem is repeatable.

   Trace file demonstrating correct behavior
      Made using tcpdump/BPF recording at the connection originator.
      No losses reported.
No 202 losses reported. 204 12:35:31.914050 C > D: S 1448571845:1448571845(0) win 4380 205 12:35:32.068819 D > C: S 1755712000:1755712000(0) ack 1448571846 win 4096 206 12:35:32.069341 C > D: . ack 1 win 4608 207 12:35:32.075213 C > D: P 1:513(512) ack 1 win 4608 208 12:35:32.286073 D > C: . ack 513 win 4096 209 12:35:32.287032 C > D: . 513:1025(512) ack 1 win 4608 210 12:35:32.287506 C > D: . 1025:1537(512) ack 1 win 4608 211 12:35:32.432712 D > C: . ack 1537 win 4096 212 12:35:32.433690 C > D: . 1537:2049(512) ack 1 win 4608 213 12:35:32.434481 C > D: . 2049:2561(512) ack 1 win 4608 214 12:35:32.435032 C > D: . 2561:3073(512) ack 1 win 4608 215 12:35:32.594526 D > C: . ack 3073 win 4096 216 12:35:32.595465 C > D: . 3073:3585(512) ack 1 win 4608 217 12:35:32.595947 C > D: . 3585:4097(512) ack 1 win 4608 218 12:35:32.596414 C > D: . 4097:4609(512) ack 1 win 4608 219 12:35:32.596888 C > D: . 4609:5121(512) ack 1 win 4608 220 12:35:32.733453 D > C: . ack 4097 win 4096 222 References 223 This problem is documented in [Paxson97]. 225 How to detect 226 For implementations always manifesting this problem, it shows up 227 immediately in a packet trace or a sequence plot, as illustrated 228 above. 230 How to fix 231 If the root problem is that the implementation lacks a notion of a 232 congestion window, then unfortunately this requires significant 233 work to fix. However, doing so is important, as such 234 implementations also exhibit "No slow start after retransmission 235 timeout". 237 ID Known TCP Implementation Problems August 1998 239 3.2. 241 Name of Problem 242 No slow start after retransmission timeout 244 Classification 245 Congestion control 247 Description 248 When a TCP experiences a retransmission timeout, it is required by 249 RFC 1122, 4.2.2.15, to engage in "slow start" by initializing its 250 congestion window, cwnd, to one packet (one segment of the maximum 251 size). It subsequently increases cwnd by one packet for each ACK 252 it receives for new data until it reaches the "congestion 253 avoidance" threshold, ssthresh, at which point the congestion 254 avoidance algorithm for updating the window takes over. A TCP that 255 fails to enter slow start upon a timeout exhibits "No slow start 256 after retransmission timeout". 258 Significance 259 In congested environments, severely detrimental to the performance 260 of other connections, and also the connection itself. 262 Implications 263 Entering slow start upon timeout forms one of the cornerstones of 264 Internet congestion stability, as outlined in [Jacobson88]. If 265 TCPs fail to do so, the network becomes at risk of suffering 266 "congestion collapse" [RFC896]. 268 Relevant RFCs 269 RFC 1122 requires use of slow start after loss. RFC 2001 gives the 270 specifics of how to implement slow start. RFC 896 describes 271 congestion collapse. 273 The retransmission timeout discussed here should not be confused 274 with the separate "fast recovery" retransmission mechanism 275 discussed in RFC 2001. 277 Trace file demonstrating it 278 Made using tcpdump/BPF recording at the sending TCP (A). No losses 279 reported. 281 10:40:59.090612 B > A: . ack 357125 win 33580 (DF) [tos 0x8] 282 10:40:59.222025 A > B: . 357125:358585(1460) ack 1 win 32768 283 10:40:59.868871 A > B: . 357125:358585(1460) ack 1 win 32768 285 ID Known TCP Implementation Problems August 1998 287 10:41:00.016641 B > A: . ack 364425 win 33580 (DF) [tos 0x8] 288 10:41:00.036709 A > B: . 364425:365885(1460) ack 1 win 32768 289 10:41:00.045231 A > B: . 
3.2.

   Name of Problem
      No slow start after retransmission timeout

   Classification
      Congestion control

   Description
      When a TCP experiences a retransmission timeout, it is required
      by RFC 1122, 4.2.2.15, to engage in "slow start" by
      initializing its congestion window, cwnd, to one packet (one
      segment of the maximum size).  It subsequently increases cwnd
      by one packet for each ACK it receives for new data until it
      reaches the "congestion avoidance" threshold, ssthresh, at
      which point the congestion avoidance algorithm for updating the
      window takes over.  A TCP that fails to enter slow start upon a
      timeout exhibits "No slow start after retransmission timeout".

   Significance
      In congested environments, severely detrimental to the
      performance of other connections, and also the connection
      itself.

   Implications
      Entering slow start upon timeout forms one of the cornerstones
      of Internet congestion stability, as outlined in [Jacobson88].
      If TCPs fail to do so, the network becomes at risk of suffering
      "congestion collapse" [RFC896].

   Relevant RFCs
      RFC 1122 requires use of slow start after loss.  RFC 2001 gives
      the specifics of how to implement slow start.  RFC 896
      describes congestion collapse.

      The retransmission timeout discussed here should not be
      confused with the separate "fast recovery" retransmission
      mechanism discussed in RFC 2001.

   Trace file demonstrating it
      Made using tcpdump/BPF recording at the sending TCP (A).  No
      losses reported.

      10:40:59.090612 B > A: . ack 357125 win 33580 (DF) [tos 0x8]
      10:40:59.222025 A > B: . 357125:358585(1460) ack 1 win 32768
      10:40:59.868871 A > B: . 357125:358585(1460) ack 1 win 32768
      10:41:00.016641 B > A: . ack 364425 win 33580 (DF) [tos 0x8]
      10:41:00.036709 A > B: . 364425:365885(1460) ack 1 win 32768
      10:41:00.045231 A > B: . 365885:367345(1460) ack 1 win 32768
      10:41:00.053785 A > B: . 367345:368805(1460) ack 1 win 32768
      10:41:00.062426 A > B: . 368805:370265(1460) ack 1 win 32768
      10:41:00.071074 A > B: . 370265:371725(1460) ack 1 win 32768
      10:41:00.079794 A > B: . 371725:373185(1460) ack 1 win 32768
      10:41:00.089304 A > B: . 373185:374645(1460) ack 1 win 32768
      10:41:00.097738 A > B: . 374645:376105(1460) ack 1 win 32768
      10:41:00.106409 A > B: . 376105:377565(1460) ack 1 win 32768
      10:41:00.115024 A > B: . 377565:379025(1460) ack 1 win 32768
      10:41:00.123576 A > B: . 379025:380485(1460) ack 1 win 32768
      10:41:00.132016 A > B: . 380485:381945(1460) ack 1 win 32768
      10:41:00.141635 A > B: . 381945:383405(1460) ack 1 win 32768
      10:41:00.150094 A > B: . 383405:384865(1460) ack 1 win 32768
      10:41:00.158552 A > B: . 384865:386325(1460) ack 1 win 32768
      10:41:00.167053 A > B: . 386325:387785(1460) ack 1 win 32768
      10:41:00.175518 A > B: . 387785:389245(1460) ack 1 win 32768
      10:41:00.210835 A > B: . 389245:390705(1460) ack 1 win 32768
      10:41:00.226108 A > B: . 390705:392165(1460) ack 1 win 32768
      10:41:00.241524 B > A: . ack 389245 win 8760 (DF) [tos 0x8]

      The first packet indicates the ack point is 357125.  130 msec
      after receiving the ACK, A transmits the packet after the ACK
      point, 357125:358585.  640 msec after this transmission, it
      retransmits 357125:358585, in an apparent retransmission
      timeout.  At this point, A's cwnd should be one MSS, or 1460
      bytes, as A enters slow start.  The trace is consistent with
      this possibility.

      B replies with an ACK of 364425, indicating that A has filled a
      sequence hole.  At this point, A's cwnd should be 1460*2 = 2920
      bytes, since in slow start receiving an ACK advances cwnd by
      MSS.  However, A then launches 19 consecutive packets, which is
      inconsistent with slow start.

      A second trace confirmed that the problem is repeatable.

   Trace file demonstrating correct behavior
      Made using tcpdump/BPF recording at the sending TCP (C).  No
      losses reported.

      12:35:48.442538 C > D: P 465409:465921(512) ack 1 win 4608
      12:35:48.544483 D > C: . ack 461825 win 4096
      12:35:48.703496 D > C: . ack 461825 win 4096
      12:35:49.044613 C > D: . 461825:462337(512) ack 1 win 4608
      12:35:49.192282 D > C: . ack 465921 win 2048
      12:35:49.192538 D > C: . ack 465921 win 4096
      12:35:49.193392 C > D: P 465921:466433(512) ack 1 win 4608
      12:35:49.194726 C > D: P 466433:466945(512) ack 1 win 4608
      12:35:49.350665 D > C: . ack 466945 win 4096
      12:35:49.351694 C > D: . 466945:467457(512) ack 1 win 4608
      12:35:49.352168 C > D: . 467457:467969(512) ack 1 win 4608
      12:35:49.352643 C > D: . 467969:468481(512) ack 1 win 4608
      12:35:49.506000 D > C: . ack 467969 win 3584

      After C transmits the first packet shown to D, it takes no
      action in response to D's ACKs for 461825, because the first
      packet already reached the advertised window limit of 4096
      bytes above 461825.  600 msec after transmitting the first
      packet, C retransmits 461825:462337, presumably due to a
      timeout.  Its congestion window is now MSS (512 bytes).

      D acks 465921, indicating that C's retransmission filled a
      sequence hole.  This ACK advances C's cwnd from 512 to 1024.
      Very shortly after, D acks 465921 again in order to update the
      offered window from 2048 to 4096.
      This ACK does not advance cwnd since it is not for new data.
      Very shortly after, C responds to the newly enlarged window by
      transmitting two packets.  D acks both, advancing cwnd from
      1024 to 1536.  C in turn transmits three packets.

   References
      This problem is documented in [Paxson97].

   How to detect
      Packet loss is common enough in the Internet that generally it
      is not difficult to find an Internet path that will force
      retransmission due to packet loss.

      If the effective window prior to loss is large enough, however,
      then the TCP may retransmit using the "fast recovery" mechanism
      described in RFC 2001.  In a packet trace, the signature of
      fast recovery is that the packet retransmission occurs in
      response to the receipt of three duplicate ACKs, and subsequent
      duplicate ACKs may lead to the transmission of new data, above
      both the ack point and the highest sequence transmitted so far.
      An absence of three duplicate ACKs prior to retransmission
      suffices to distinguish between timeout and fast recovery
      retransmissions.  In the face of only observing fast recovery
      retransmissions, generally it is not difficult to repeat the
      data transfer until observing a timeout retransmission.

      Once armed with a trace exhibiting a timeout retransmission,
      determining whether the TCP follows slow start is done by
      computing the correct progression of cwnd and comparing it to
      the amount of data transmitted by the TCP subsequent to the
      timeout retransmission.

   How to fix
      If the root problem is that the implementation lacks a notion
      of a congestion window, then unfortunately this requires
      significant work to fix.  However, doing so is critical, for
      reasons outlined above.
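      The timeout reaction itself is small.  The following C sketch,
      using the same hypothetical tcp_conn structure as in 3.1, is
      illustrative only; it follows RFC 2001's description of the
      required behavior.

      /* On retransmission timeout: save half the current window
       * (the minimum of cwnd and the receiver's advertised window,
       * but at least two segments) in ssthresh, then drop cwnd back
       * to one segment so that transmission resumes with slow start
       * (RFC 1122, 4.2.2.15; RFC 2001). */
      void tcp_rto_expired(struct tcp_conn *tp)
      {
          uint32_t win = tp->cwnd < tp->rcv_wnd ? tp->cwnd
                                                : tp->rcv_wnd;
          uint32_t half = win / 2;

          tp->ssthresh = half > 2 * tp->mss ? half : 2 * tp->mss;
          tp->cwnd = tp->mss;
          /* ...then retransmit the earliest unacknowledged segment
           * and restart the retransmission timer. */
      }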
3.3.

   Name of Problem
      Uninitialized CWND

   Classification
      Congestion control

   Description
      As described above for "No initial slow start", when a TCP
      connection begins cwnd is initialized to one segment (or
      perhaps a few segments, if experimenting with [Allman98]).  One
      particular form of "No initial slow start", worth separate
      mention as the bug is fairly widely deployed, is "Uninitialized
      CWND".  That is, while the TCP implements the proper slow start
      mechanism, it fails to initialize cwnd properly, so slow start
      in fact fails to occur.

      The particular bug occurs when, during the connection
      establishment handshake, the SYN ACK packet arrives without an
      MSS option.  The faulty implementation uses receipt of the MSS
      option to initialize cwnd to one segment; if the option fails
      to arrive, then cwnd is instead initialized to a very large
      value.

   Significance
      In congested environments, detrimental to the performance of
      other connections, and likely to the connection itself.  The
      burst can be so large (see below) that it has deleterious
      effects even in uncongested environments.

   Implications
      A TCP exhibiting this behavior is stressing the network with a
      large burst of packets, which can cause loss in the network.

   Relevant RFCs
      RFC 1122 requires use of slow start.  RFC 2001 gives the
      specifics of slow start.

   Trace file demonstrating it
      This trace was made using tcpdump/BPF running on host A.  Host
      A is the sender and host B is the receiver.  The advertised
      window and timestamp options have been omitted for clarity,
      except for the first segment sent by host A.  Note that A sends
      an MSS option in its initial SYN but B does not include one in
      its reply.

      16:56:02.226937 A > B: S 237585307:237585307(0) win 8192
      16:56:02.557135 B > A: S 1617216000:1617216000(0)
                             ack 237585308 win 16384
      16:56:02.557788 A > B: . ack 1 win 8192
      16:56:02.566014 A > B: . 1:537(536) ack 1
      16:56:02.566557 A > B: . 537:1073(536) ack 1
      16:56:02.567120 A > B: . 1073:1609(536) ack 1
      16:56:02.567662 A > B: P 1609:2049(440) ack 1
      16:56:02.568349 A > B: . 2049:2585(536) ack 1
      16:56:02.568909 A > B: . 2585:3121(536) ack 1

      [54 additional burst segments deleted for brevity]

      16:56:02.936638 A > B: . 32065:32601(536) ack 1
      16:56:03.018685 B > A: . ack 1

      After the three-way handshake, host A bursts 61 segments into
      the network, before duplicate ACKs on the first segment cause a
      retransmission to occur.  Since host A did not wait for the ACK
      on the first segment before sending additional segments, it is
      exhibiting "Uninitialized CWND".

   Trace file demonstrating correct behavior
      See the example for "No initial slow start".

   References
      This problem is documented in [Paxson97].

   How to detect
      This problem can be detected by examining a packet trace
      recorded at either the sender or the receiver.  However, the
      bug can be difficult to induce because it requires finding a
      remote TCP peer that does not send an MSS option in its SYN
      ACK.

   How to fix
      This problem can be fixed by ensuring that cwnd is initialized
      upon receipt of a SYN ACK, even if the SYN ACK does not contain
      an MSS option.
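      A sketch of such a fix, in the same illustrative style as the
      earlier examples (the 536-byte default is the send MSS that RFC
      1122 specifies for use when no MSS option is received):

      #define TCP_DEFAULT_MSS 536   /* RFC 1122 default send MSS */

      /* Called once the SYN ACK completes the handshake.  mss_opt
       * is the peer's MSS option value, or 0 if the option was
       * absent.  The point of the fix is that cwnd is initialized
       * here unconditionally, rather than only in the code path
       * that parses the MSS option. */
      void tcp_conn_established(struct tcp_conn *tp, uint32_t mss_opt)
      {
          tp->mss = mss_opt ? mss_opt : TCP_DEFAULT_MSS;
          tp->cwnd = tp->mss;    /* exactly one segment to start */
      }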
3.4.

   Name of Problem
      Inconsistent retransmission

   Classification
      Reliability

   Description
      If, for a given sequence number, a sending TCP retransmits
      different data than previously sent for that sequence number,
      then a strong possibility arises that the receiving TCP will
      reconstruct a different byte stream than that sent by the
      sending application, depending on which instance of the
      sequence number it accepts.  Such a sending TCP exhibits
      "Inconsistent retransmission".

   Significance
      Critical for all environments.

   Implications
      Reliable delivery of data is a fundamental property of TCP.

   Relevant RFCs
      RFC 793, section 1.5, discusses the central role of reliability
      in TCP operation.

   Trace file demonstrating it
      Made using tcpdump/BPF recording at the receiving TCP (B).  No
      losses reported.

      12:35:53.145503 A > B: FP 90048435:90048461(26)
                             ack 393464682 win 4096
                       4500 0042 9644 0000
                       3006 e4c2 86b1 0401 83f3 010a b2a4 0015
                       055e 07b3 1773 cb6a 5019 1000 68a9 0000
      data starts here>504f 5254 2031 3334 2c31 3737*2c34 2c31
                       2c31 3738 2c31 3635 0d0a
      12:35:53.146479 B > A: R 393464682:393464682(0) win 8192
      12:35:53.851714 A > B: FP 90048429:90048463(34)
                             ack 393464682 win 4096
                       4500 004a 965b 0000
                       3006 e4a3 86b1 0401 83f3 010a b2a4 0015
                       055e 07ad 1773 cb6a 5019 1000 8bd3 0000
      data starts here>5041 5356 0d0a 504f 5254 2031 3334 2c31
                       3737*2c31 3035 2c31 3431 2c34 2c31 3539
                       0d0a

      The sequence numbers shown in this trace are absolute and not
      adjusted to reflect the ISN.  The 4-digit hex values show a
      dump of the packet's IP and TCP headers, as well as payload.  A
      first sends to B data for 90048435:90048461.  The corresponding
      data begins with hex words 504f, 5254, etc.

      B responds with a RST.  Since the recording location was local
      to B, it is unknown whether A received the RST.

      A then sends 90048429:90048463, which includes six sequence
      positions below the earlier transmission, all 26 positions of
      the earlier transmission, and two additional sequence
      positions.

      The retransmission disagrees starting just after sequence
      90048447, annotated above with a leading '*'.  These two bytes
      were originally transmitted as hex 2c34 but retransmitted as
      hex 2c31.  Subsequent positions disagree as well.

      This behavior has been observed in other traces involving
      different hosts.  It is unknown how to repeat it.

      In this instance, no corruption would occur, since B has
      already indicated it will not accept further packets from A.

      A second example illustrates a slightly different instance of
      the problem.  The tracing again was made with tcpdump/BPF at
      the receiving TCP (D).

      22:23:58.645829 C > D: P 185:212(27) ack 565 win 4096
                       4500 0043 90a3 0000
                       3306 0734 cbf1 9eef 83f3 010a 0525 0015
                       a3a2 faba 578c 70a4 5018 1000 9a53 0000
      data starts here>504f 5254 2032 3033 2c32 3431 2c31 3538
                       2c32 3339 2c35 2c34 330d 0a
      22:23:58.646805 D > C: . ack 184 win 8192
                       4500 0028 beeb 0000
                       3e06 ce06 83f3 010a cbf1 9eef 0015 0525
                       578c 70a4 a3a2 fab9 5010 2000 342f 0000
      22:31:36.532244 C > D: FP 186:213(27) ack 565 win 4096
                       4500 0043 9435 0000
                       3306 03a2 cbf1 9eef 83f3 010a 0525 0015
                       a3a2 fabb 578c 70a4 5019 1000 9a51 0000
      data starts here>504f 5254 2032 3033 2c32 3431 2c31 3538
                       2c32 3339 2c35 2c34 330d 0a

      In this trace, sequence numbers are relative.  C sends 185:212,
      but D only sends an ACK for 184 (so sequence number 184 is
      missing).  C then sends 186:213.  The packet payload is
      identical to the previous payload, but the base sequence number
      is one higher, resulting in an inconsistent retransmission.

      Neither trace exhibits checksum errors.

   Trace file demonstrating correct behavior
      (Omitted, as presumably correct behavior is obvious.)

   References
      None known.

   How to detect
      This problem unfortunately can be very difficult to detect,
      since available experience indicates it is quite rare that it
      is manifested.  No "trigger" has been identified that can be
      used to reproduce the problem.

   How to fix
      In the absence of a known "trigger", we cannot always assess
      how to fix the problem.

      In one implementation (not the one illustrated above), the
      problem manifested itself when (1) the sender received a zero
      window and stalled; (2) eventually an ACK arrived that offered
      a window larger than that in effect at the time of the stall;
      (3) the sender transmitted out of the buffer of data it held at
      the time of the stall, but (4) failed to limit this transfer to
      the buffer length, instead using the newly advertised (and
      larger) offered window.  Consequently, in addition to the valid
      buffer contents, it sent whatever garbage values followed the
      end of the buffer.  If it then retransmitted the corresponding
      sequence numbers, at that point it sent the correct data,
      resulting in an inconsistent retransmission.
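      For that particular cause, the fix is to bound each
      transmission by the amount of valid data actually buffered, not
      just by the offered window.  Illustratively, in C (names
      hypothetical):

      #include <stddef.h>

      /* When a window update arrives after a zero-window stall, the
       * sender may transmit at most the smaller of the newly
       * offered window and the data it actually holds.  Omitting
       * the second bound is what allowed the faulty implementation
       * to send garbage bytes lying past the end of its buffer. */
      size_t bytes_to_send(size_t buffered_len, size_t offered_window)
      {
          return buffered_len < offered_window ? buffered_len
                                               : offered_window;
      }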
      Note that this instance of the problem reflects a more general
      problem, that of initially transmitting incorrect data.

3.5.

   Name of Problem
      Failure to retain above-sequence data

   Classification
      Congestion control, performance

   Description
      When a TCP receives an "above sequence" segment, meaning one
      with a sequence number exceeding RCV.NXT but below
      RCV.NXT+RCV.WND, it SHOULD queue the segment for later delivery
      (RFC 1122, 4.2.2.20).  A TCP that fails to do so is said to
      exhibit "Failure to retain above-sequence data".

      It may sometimes be appropriate for a TCP to discard above-
      sequence data to reclaim memory.  If it does so only rarely,
      then we would not consider it to exhibit this problem.
      Instead, the particular concern is with TCPs that always
      discard above-sequence data.

   Significance
      In environments prone to packet loss, detrimental to the
      performance of both other connections and the connection
      itself.

   Implications
      In times of congestion, a failure to retain above-sequence data
      will lead to numerous otherwise-unnecessary retransmissions,
      aggravating the congestion and potentially reducing performance
      by a large factor.

   Relevant RFCs
      RFC 1122 revises RFC 793 by upgrading the latter's MAY to a
      SHOULD on this issue.

   Trace file demonstrating it
      Made using tcpdump/BPF recording at the receiving TCP.  No
      losses reported.

      B is the TCP sender, A the receiver.  A exhibits failure to
      retain above-sequence data:

      10:38:10.164860 B > A: . 221078:221614(536) ack 1 win 33232 [tos 0x8]
      10:38:10.170809 B > A: . 221614:222150(536) ack 1 win 33232 [tos 0x8]
      10:38:10.177183 B > A: . 222150:222686(536) ack 1 win 33232 [tos 0x8]
      10:38:10.225039 A > B: . ack 222686 win 25800

      Here B has sent up to (relative) sequence 222686 in-sequence,
      and A accordingly acknowledges.

      10:38:10.268131 B > A: . 223222:223758(536) ack 1 win 33232 [tos 0x8]
      10:38:10.337995 B > A: . 223758:224294(536) ack 1 win 33232 [tos 0x8]
      10:38:10.344065 B > A: . 224294:224830(536) ack 1 win 33232 [tos 0x8]
      10:38:10.350169 B > A: . 224830:225366(536) ack 1 win 33232 [tos 0x8]
      10:38:10.356362 B > A: . 225366:225902(536) ack 1 win 33232 [tos 0x8]
      10:38:10.362445 B > A: . 225902:226438(536) ack 1 win 33232 [tos 0x8]
      10:38:10.368579 B > A: . 226438:226974(536) ack 1 win 33232 [tos 0x8]
      10:38:10.374732 B > A: . 226974:227510(536) ack 1 win 33232 [tos 0x8]
      10:38:10.380825 B > A: . 227510:228046(536) ack 1 win 33232 [tos 0x8]
      10:38:10.387027 B > A: . 228046:228582(536) ack 1 win 33232 [tos 0x8]
      10:38:10.393053 B > A: . 228582:229118(536) ack 1 win 33232 [tos 0x8]
      10:38:10.399193 B > A: . 229118:229654(536) ack 1 win 33232 [tos 0x8]
      10:38:10.405356 B > A: . 229654:230190(536) ack 1 win 33232 [tos 0x8]

      A now receives 13 additional packets from B.  These are above-
      sequence because 222686:223222 was dropped.  The packets do
      however fit within the offered window of 25800.  A does not
      generate any duplicate ACKs for them.

      The trace contributor (V. Paxson) verified that these 13
      packets had valid IP and TCP checksums.

      10:38:11.917728 B > A: . 222686:223222(536) ack 1 win 33232 [tos 0x8]
      10:38:11.930925 A > B: . ack 223222 win 32232

      B times out for 222686:223222 and retransmits it.
      Upon receiving it, A only acknowledges 223222.  Had it retained
      the valid above-sequence packets, it would instead have ack'd
      230190.

      10:38:12.048438 B > A: . 223222:223758(536) ack 1 win 33232 [tos 0x8]
      10:38:12.054397 B > A: . 223758:224294(536) ack 1 win 33232 [tos 0x8]
      10:38:12.068029 A > B: . ack 224294 win 31696

      B retransmits two more packets, and A only acknowledges them.
      This pattern continues as B retransmits the entire set of
      previously-received packets.

      A second trace confirmed that the problem is repeatable.

   Trace file demonstrating correct behavior
      Made using tcpdump/BPF recording at the receiving TCP (C).  No
      losses reported.

      09:11:25.790417 D > C: . 33793:34305(512) ack 1 win 61440
      09:11:25.791393 D > C: . 34305:34817(512) ack 1 win 61440
      09:11:25.792369 D > C: . 34817:35329(512) ack 1 win 61440
      09:11:25.792369 D > C: . 35329:35841(512) ack 1 win 61440
      09:11:25.793345 D > C: . 36353:36865(512) ack 1 win 61440
      09:11:25.794321 C > D: . ack 35841 win 59904

      A sequence hole occurs because 35841:36353 has been dropped.

      09:11:25.794321 D > C: . 36865:37377(512) ack 1 win 61440
      09:11:25.794321 C > D: . ack 35841 win 59904
      09:11:25.795297 D > C: . 37377:37889(512) ack 1 win 61440
      09:11:25.795297 C > D: . ack 35841 win 59904
      09:11:25.796273 C > D: . ack 35841 win 61440
      09:11:25.798225 D > C: . 37889:38401(512) ack 1 win 61440
      09:11:25.799201 C > D: . ack 35841 win 61440
      09:11:25.807009 D > C: . 38401:38913(512) ack 1 win 61440
      09:11:25.807009 C > D: . ack 35841 win 61440
      (many additional lines omitted)
      09:11:25.884113 D > C: . 52737:53249(512) ack 1 win 61440
      09:11:25.884113 C > D: . ack 35841 win 61440

      Each additional, above-sequence packet C receives from D
      elicits a duplicate ACK for 35841.

      09:11:25.887041 D > C: . 35841:36353(512) ack 1 win 61440
      09:11:25.887041 C > D: . ack 53249 win 44032

      D retransmits 35841:36353 and C acknowledges receipt of data
      all the way up to 53249.

   References
      This problem is documented in [Paxson97].

   How to detect
      Packet loss is common enough in the Internet that generally it
      is not difficult to find an Internet path that will result in
      some above-sequence packets arriving.  A TCP that exhibits
      "Failure to retain ..." may not generate duplicate ACKs for
      these packets.  However, some TCPs that do retain above-
      sequence data also do not generate duplicate ACKs, so failure
      to do so does not definitively identify the problem.  Instead,
      the key observation is whether upon retransmission of the
      dropped packet, data that was previously above-sequence is
      acknowledged.

      Two considerations in detecting this problem using a packet
      trace are that it is easiest to do so with a trace made at the
      TCP receiver, in order to unambiguously determine which packets
      arrived successfully, and that such packets may still be
      correctly discarded if they arrive with checksum errors.  The
      latter can be tested by capturing the entire packet contents
      and performing the IP and TCP checksum algorithms to verify
      their integrity; or by confirming that the packets arrive with
      the same checksum and contents as that with which they were
      sent, with a presumption that the sending TCP correctly
      calculates checksums for the packets it transmits.
      It is considerably easier to verify that an implementation does
      NOT exhibit this problem.  This can be done by recording a
      trace at the data sender, and observing that sometimes after a
      retransmission the receiver acknowledges a higher sequence
      number than just that which was retransmitted.

   How to fix
      If the root problem is that the implementation lacks buffering
      for above-sequence data, then unfortunately this requires
      significant work to fix.  However, doing so is important, for
      reasons outlined above.
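      For implementations that have memory available but simply drop
      above-sequence segments, the essence of the fix is a reassembly
      queue.  The following deliberately simple C sketch is
      illustrative only (names are hypothetical; overlap trimming,
      sequence-number wraparound, and error handling are omitted):
      queue out-of-order segments sorted by sequence number, and once
      the hole is filled, advance RCV.NXT across everything that has
      become contiguous so it can all be acknowledged at once.

      #include <stdint.h>
      #include <stdlib.h>
      #include <string.h>

      struct ooo_seg {
          uint32_t seq;
          uint32_t len;
          char *payload;
          struct ooo_seg *next;
      };

      /* Retain an above-sequence segment, keeping the list sorted
       * by sequence number. */
      void ooo_enqueue(struct ooo_seg **head, uint32_t seq,
                       uint32_t len, const char *payload)
      {
          struct ooo_seg *seg = malloc(sizeof(*seg));
          seg->seq = seq;
          seg->len = len;
          seg->payload = malloc(len);
          memcpy(seg->payload, payload, len);

          while (*head && (*head)->seq < seq)
              head = &(*head)->next;
          seg->next = *head;
          *head = seg;
      }

      /* After the sequence hole is filled, advance rcv_nxt across
       * all queued segments that are now contiguous (their data
       * would be delivered to the application here), and return the
       * new cumulative ACK point. */
      uint32_t ooo_drain(struct ooo_seg **head, uint32_t rcv_nxt)
      {
          while (*head && (*head)->seq <= rcv_nxt) {
              struct ooo_seg *seg = *head;
              if (seg->seq + seg->len > rcv_nxt)
                  rcv_nxt = seg->seq + seg->len;
              *head = seg->next;
              free(seg->payload);
              free(seg);
          }
          return rcv_nxt;
      }

      In the correct-behavior trace above, this kind of queueing is
      what allows C to jump from acknowledging 35841 to acknowledging
      53249 the moment the single missing segment arrives.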
3.6.

   Name of Problem
      Extra additive constant in congestion avoidance

   Classification
      Congestion control / performance

   Description
      RFC 1122 section 4.2.2.15 states that TCP MUST implement
      Jacobson's "congestion avoidance" algorithm [Jacobson88], which
      calls for increasing the congestion window, cwnd, by:

          MSS * MSS / cwnd

      for each ACK received for new data [RFC2001].  This has the
      effect of increasing cwnd by approximately one segment in each
      round trip time.

      Some TCP implementations add an additional fraction of a
      segment (typically MSS/8) to cwnd for each ACK received for new
      data [Stevens94, Wright95]:

          (MSS * MSS / cwnd) + MSS/8

      These implementations exhibit "Extra additive constant in
      congestion avoidance".

   Significance
      May be detrimental to performance even in completely
      uncongested environments (see Implications).

      In congested environments, may also be detrimental to the
      performance of other connections.

   Implications
      The extra additive term allows a TCP to more aggressively open
      its congestion window (quadratic rather than linear increase).
      For congested networks, this can increase the loss rate
      experienced by all connections sharing a bottleneck with the
      aggressive TCP.

      However, even for completely uncongested networks, the extra
      additive term can lead to diminished performance, as follows.
      In congestion avoidance, a TCP sender probes the network path
      to determine its available capacity, which often equates to the
      number of buffers available at a bottleneck link.  With linear
      congestion avoidance, the TCP only probes for sufficient
      capacity (buffer) to hold one extra packet per RTT.

      Thus, when it exceeds the available capacity, generally only
      one packet will be lost (since on the previous RTT it already
      found that the path could sustain a window with one less packet
      in flight).  If the congestion window is sufficiently large,
      then the TCP will recover from this single loss using fast
      retransmission and avoid an expensive (in terms of performance)
      retransmission timeout.

      However, when the additional additive term is used, then cwnd
      can increase by more than one packet per RTT, in which case the
      TCP probes more aggressively.  If in the previous RTT it had
      reached the available capacity of the path, then the excess due
      to the increase will again be lost, but now this will result in
      multiple losses from the flight instead of a single loss.  TCPs
      that do not utilize SACK [RFC2018] generally will not recover
      from multiple losses without incurring a retransmission timeout
      [Fall96,Hoe96], significantly diminishing performance.

   Relevant RFCs
      RFC 1122 requires use of the "congestion avoidance" algorithm.
      RFC 2001 outlines the fast retransmit/fast recovery algorithms.
      RFC 2018 discusses the SACK option.

   Trace file demonstrating it
      Recorded using tcpdump running on the same FDDI LAN as host A.
      Host A is the sender and host B is the receiver.  The
      connection establishment specified an MSS of 4,312 bytes and a
      window scale factor of 4.  We omit the establishment and the
      first 2.5 MB of data transfer, as the problem is best
      demonstrated when the window has grown to a large value.  At
      the beginning of the trace excerpt, the congestion window is 31
      packets.  The connection is never receiver-window limited, so
      we omit window advertisements from the trace for clarity.

      11:42:07.697951 B > A: . ack 2383006
      11:42:07.699388 A > B: . 2508054:2512366(4312)
      11:42:07.699962 A > B: . 2512366:2516678(4312)
      11:42:07.700012 B > A: . ack 2391630
      11:42:07.701081 A > B: . 2516678:2520990(4312)
      11:42:07.701656 A > B: . 2520990:2525302(4312)
      11:42:07.701739 B > A: . ack 2400254
      11:42:07.702685 A > B: . 2525302:2529614(4312)
      11:42:07.703257 A > B: . 2529614:2533926(4312)
      11:42:07.703295 B > A: . ack 2408878
      11:42:07.704414 A > B: . 2533926:2538238(4312)
      11:42:07.704989 A > B: . 2538238:2542550(4312)
      11:42:07.705040 B > A: . ack 2417502
      11:42:07.705935 A > B: . 2542550:2546862(4312)
      11:42:07.706506 A > B: . 2546862:2551174(4312)
      11:42:07.706544 B > A: . ack 2426126
      11:42:07.707480 A > B: . 2551174:2555486(4312)
      11:42:07.708051 A > B: . 2555486:2559798(4312)
      11:42:07.708088 B > A: . ack 2434750
      11:42:07.709030 A > B: . 2559798:2564110(4312)
      11:42:07.709604 A > B: . 2564110:2568422(4312)
      11:42:07.710175 A > B: . 2568422:2572734(4312) *

      11:42:07.710215 B > A: . ack 2443374
      11:42:07.710799 A > B: . 2572734:2577046(4312)
      11:42:07.711368 A > B: . 2577046:2581358(4312)
      11:42:07.711405 B > A: . ack 2451998
      11:42:07.712323 A > B: . 2581358:2585670(4312)
      11:42:07.712898 A > B: . 2585670:2589982(4312)
      11:42:07.712938 B > A: . ack 2460622
      11:42:07.713926 A > B: . 2589982:2594294(4312)
      11:42:07.714501 A > B: . 2594294:2598606(4312)
      11:42:07.714547 B > A: . ack 2469246
      11:42:07.715747 A > B: . 2598606:2602918(4312)
      11:42:07.716287 A > B: . 2602918:2607230(4312)
      11:42:07.716328 B > A: . ack 2477870
      11:42:07.717146 A > B: . 2607230:2611542(4312)
      11:42:07.717717 A > B: . 2611542:2615854(4312)
      11:42:07.717762 B > A: . ack 2486494
      11:42:07.718754 A > B: . 2615854:2620166(4312)
      11:42:07.719331 A > B: . 2620166:2624478(4312)
      11:42:07.719906 A > B: . 2624478:2628790(4312) **

      11:42:07.719958 B > A: . ack 2495118
      11:42:07.720500 A > B: . 2628790:2633102(4312)
      11:42:07.721080 A > B: . 2633102:2637414(4312)
      11:42:07.721739 B > A: . ack 2503742
      11:42:07.722348 A > B: . 2637414:2641726(4312)
      11:42:07.722918 A > B: . 2641726:2646038(4312)
      11:42:07.769248 B > A: . ack 2512366

      The receiver's acknowledgment policy is one ACK per two packets
      received.  Thus, for each ACK arriving at host A, two new
      packets are sent, except when cwnd increases due to congestion
      avoidance, in which case three new packets are sent.

      With an ack-every-two-packets policy, cwnd should only increase
      one MSS per 2 RTT.  However, at the point marked "*" the window
      increases after 7 ACKs have arrived, and then again at "**"
      after 6 more ACKs.
      While we do not have space to show the effect, this trace
      suffered from repeated timeout retransmissions due to multiple
      packet losses during a single RTT.

   Trace file demonstrating correct behavior
      Made using the same host and tracing setup as above, except now
      A's TCP has been modified to remove the MSS/8 additive
      constant.  Tcpdump reported 77 packet drops; the excerpt below
      is fully self-consistent so it is unlikely that any of these
      occurred during the excerpt.

      We again begin when cwnd is 31 packets (this occurs
      significantly later in the trace, because the congestion
      avoidance is now less aggressive with opening the window).

      14:22:21.236757 B > A: . ack 5194679
      14:22:21.238192 A > B: . 5319727:5324039(4312)
      14:22:21.238770 A > B: . 5324039:5328351(4312)
      14:22:21.238821 B > A: . ack 5203303
      14:22:21.240158 A > B: . 5328351:5332663(4312)
      14:22:21.240738 A > B: . 5332663:5336975(4312)
      14:22:21.270422 B > A: . ack 5211927
      14:22:21.271883 A > B: . 5336975:5341287(4312)
      14:22:21.272458 A > B: . 5341287:5345599(4312)
      14:22:21.279099 B > A: . ack 5220551
      14:22:21.280539 A > B: . 5345599:5349911(4312)
      14:22:21.281118 A > B: . 5349911:5354223(4312)
      14:22:21.281183 B > A: . ack 5229175
      14:22:21.282348 A > B: . 5354223:5358535(4312)
      14:22:21.283029 A > B: . 5358535:5362847(4312)
      14:22:21.283089 B > A: . ack 5237799
      14:22:21.284213 A > B: . 5362847:5367159(4312)
      14:22:21.284779 A > B: . 5367159:5371471(4312)
      14:22:21.285976 B > A: . ack 5246423
      14:22:21.287465 A > B: . 5371471:5375783(4312)
      14:22:21.288036 A > B: . 5375783:5380095(4312)
      14:22:21.288073 B > A: . ack 5255047
      14:22:21.289155 A > B: . 5380095:5384407(4312)
      14:22:21.289725 A > B: . 5384407:5388719(4312)
      14:22:21.289762 B > A: . ack 5263671
      14:22:21.291090 A > B: . 5388719:5393031(4312)
      14:22:21.291662 A > B: . 5393031:5397343(4312)
      14:22:21.291701 B > A: . ack 5272295
      14:22:21.292870 A > B: . 5397343:5401655(4312)
      14:22:21.293441 A > B: . 5401655:5405967(4312)
      14:22:21.293481 B > A: . ack 5280919
      14:22:21.294476 A > B: . 5405967:5410279(4312)
      14:22:21.295053 A > B: . 5410279:5414591(4312)
      14:22:21.295106 B > A: . ack 5289543
      14:22:21.296306 A > B: . 5414591:5418903(4312)
      14:22:21.296878 A > B: . 5418903:5423215(4312)
      14:22:21.296917 B > A: . ack 5298167
      14:22:21.297716 A > B: . 5423215:5427527(4312)
      14:22:21.298285 A > B: . 5427527:5431839(4312)
      14:22:21.298324 B > A: . ack 5306791
      14:22:21.299413 A > B: . 5431839:5436151(4312)
      14:22:21.299986 A > B: . 5436151:5440463(4312)
      14:22:21.303696 B > A: . ack 5315415
      14:22:21.305177 A > B: . 5440463:5444775(4312)
      14:22:21.305755 A > B: . 5444775:5449087(4312)
      14:22:21.308032 B > A: . ack 5324039
      14:22:21.309525 A > B: . 5449087:5453399(4312)
      14:22:21.310101 A > B: . 5453399:5457711(4312)
      14:22:21.310144 B > A: . ack 5332663 ***

      14:22:21.311615 A > B: . 5457711:5462023(4312)
      14:22:21.312198 A > B: . 5462023:5466335(4312)
      14:22:21.341876 B > A: . ack 5341287
      14:22:21.343451 A > B: . 5466335:5470647(4312)
      14:22:21.343985 A > B: . 5470647:5474959(4312)
      14:22:21.350304 B > A: . ack 5349911
      14:22:21.351852 A > B: . 5474959:5479271(4312)
      14:22:21.352430 A > B: . 5479271:5483583(4312)
      14:22:21.352484 B > A: . ack 5358535
      14:22:21.353574 A > B: . 5483583:5487895(4312)
      14:22:21.354149 A > B: . 5487895:5492207(4312)
      14:22:21.354205 B > A: . ack 5367159
      14:22:21.355467 A > B: . 5492207:5496519(4312)
      14:22:21.356039 A > B: . 5496519:5500831(4312)
      14:22:21.357361 B > A: . ack 5375783
      14:22:21.358855 A > B: . 5500831:5505143(4312)
      14:22:21.359424 A > B: . 5505143:5509455(4312)
      14:22:21.359465 B > A: . ack 5384407
      14:22:21.360605 A > B: . 5509455:5513767(4312)
      14:22:21.361181 A > B: . 5513767:5518079(4312)
      14:22:21.361225 B > A: . ack 5393031
      14:22:21.362485 A > B: . 5518079:5522391(4312)
      14:22:21.363057 A > B: . 5522391:5526703(4312)
      14:22:21.363096 B > A: . ack 5401655
      14:22:21.364236 A > B: . 5526703:5531015(4312)
      14:22:21.364810 A > B: . 5531015:5535327(4312)
      14:22:21.364867 B > A: . ack 5410279
      14:22:21.365819 A > B: . 5535327:5539639(4312)
      14:22:21.366386 A > B: . 5539639:5543951(4312)
      14:22:21.366427 B > A: . ack 5418903
      14:22:21.367586 A > B: . 5543951:5548263(4312)
      14:22:21.368158 A > B: . 5548263:5552575(4312)
      14:22:21.368199 B > A: . ack 5427527
      14:22:21.369189 A > B: . 5552575:5556887(4312)
      14:22:21.369758 A > B: . 5556887:5561199(4312)
      14:22:21.369803 B > A: . ack 5436151
      14:22:21.370814 A > B: . 5561199:5565511(4312)
      14:22:21.371398 A > B: . 5565511:5569823(4312)
      14:22:21.375159 B > A: . ack 5444775
      14:22:21.376658 A > B: . 5569823:5574135(4312)
      14:22:21.377235 A > B: . 5574135:5578447(4312)
      14:22:21.379303 B > A: . ack 5453399
      14:22:21.380802 A > B: . 5578447:5582759(4312)
      14:22:21.381377 A > B: . 5582759:5587071(4312)
      14:22:21.381947 A > B: . 5587071:5591383(4312) ****

      "***" marks the end of the first round trip.  Note that cwnd
      did not increase (as evidenced by each ACK eliciting two new
      data packets).  Only at "****", which comes near the end of the
      second round trip, does cwnd increase by one packet.

      This trace did not suffer any timeout retransmissions.  It
      transferred the same amount of data as the first trace in about
      half as much time.  This difference is repeatable between hosts
      A and B.

   References
      [Stevens94] and [Wright95] discuss this problem.  The problem
      of Reno TCP failing to recover from multiple losses except via
      a retransmission timeout is discussed in [Fall96,Hoe96].

   How to detect
      If source code is available, that is generally the easiest way
      to detect this problem.  Search for each modification to the
      cwnd variable; (at least) one of these will be for congestion
      avoidance, and inspection of the related code should
      immediately identify the problem if present.
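      In source form, the difference is typically a single
      expression.  The two C variants below are illustrative only
      (integer arithmetic, as in a typical stack):

      /* Congestion avoidance increase applied per ACK of new data. */
      uint32_t ca_increase_correct(uint32_t cwnd, uint32_t mss)
      {
          return cwnd + (mss * mss) / cwnd;            /* RFC 2001 */
      }

      uint32_t ca_increase_buggy(uint32_t cwnd, uint32_t mss)
      {
          return cwnd + (mss * mss) / cwnd + mss / 8;  /* extra term */
      }

      The extra mss/8 is added on every ACK, so over a window's worth
      of ACKs it contributes roughly cwnd/(8*MSS) additional segments
      per round trip: the per-RTT increase grows with the window
      itself instead of remaining one segment, producing the more
      aggressive, quadratic opening described under Implications.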
      The problem can also be detected by closely examining packet
      traces taken near the sender.  During congestion avoidance,
      cwnd will increase by an additional segment upon the receipt of
      (typically) eight acknowledgements without a loss.  This
      increase is in addition to the one segment increase per round
      trip time (or two round trip times if the receiver is using
      delayed ACKs).

      Furthermore, graphs of the sequence number vs. time, taken from
      packet traces, are normally linear during congestion avoidance.
      When viewing packet traces of transfers from senders exhibiting
      this problem, the graphs appear quadratic instead of linear.

      Finally, the traces will show that, with sufficiently large
      windows, nearly every loss event results in a timeout.

   How to fix
      This problem may be corrected by removing the "+ MSS/8" term
      from the congestion avoidance code that increases cwnd each
      time an ACK of new data is received.

3.7.

   Name of Problem
      Initial RTO too low

   Classification
      Performance

   Description
      When a TCP first begins transmitting data, it lacks the RTT
      measurements necessary to have computed an adaptive
      retransmission timeout (RTO).  RFC 1122, 4.2.3.1, states that a
      TCP SHOULD initialize RTO to 3 seconds.  A TCP that uses a
      lower value exhibits "Initial RTO too low".

   Significance
      In environments with large RTTs (where "large" means any value
      larger than the initial RTO), TCPs will experience very poor
      performance.

   Implications
      Whenever RTO < RTT, very poor performance can result as packets
      are unnecessarily retransmitted (because RTO will expire before
      an ACK for the packet can arrive) and the connection enters
      slow start and congestion avoidance.  Generally, the algorithms
      for computing RTO avoid this problem by adding a positive term
      to the estimated RTT.  However, when a connection first begins
      it must use some estimate for RTO, and if it picks a value less
      than RTT, the above problems will arise.

      Furthermore, when the initial RTO < RTT, it can take a long
      time for the TCP to correct the problem by adapting the RTT
      estimate, because the use of Karn's algorithm (mandated by RFC
      1122, 4.2.3.1) will discard many of the candidate RTT
      measurements made after the first timeout, since they will be
      measurements of retransmitted segments.

   Relevant RFCs
      RFC 1122 states that TCPs SHOULD initialize RTO to 3 seconds
      and MUST implement Karn's algorithm.

   Trace file demonstrating it
      The following trace file was taken using tcpdump at host A, the
      data sender.  The advertised window and SYN options have been
      omitted for clarity.

      07:52:39.870301 A > B: S 2786333696:2786333696(0)
      07:52:40.548170 B > A: S 130240000:130240000(0)
                             ack 2786333697
      07:52:40.561287 A > B: P 1:513(512) ack 1
      07:52:40.753466 A > B: . 1:513(512) ack 1
      07:52:41.133687 A > B: . 1:513(512) ack 1
      07:52:41.458529 B > A: . ack 513
      07:52:41.458686 A > B: . 513:1025(512) ack 1
      07:52:41.458797 A > B: P 1025:1537(512) ack 1
      07:52:41.541633 B > A: . ack 513
      07:52:41.703732 A > B: . 513:1025(512) ack 1
      07:52:42.044875 B > A: . ack 513
      07:52:42.173728 A > B: . 513:1025(512) ack 1
      07:52:42.330861 B > A: . ack 1537
      07:52:42.331129 A > B: . 1537:2049(512) ack 1
      07:52:42.331262 A > B: P 2049:2561(512) ack 1
      07:52:42.623673 A > B: . 1537:2049(512) ack 1
      07:52:42.683203 B > A: . ack 1537
      07:52:43.044029 B > A: . ack 1537
      07:52:43.193812 A > B: . 1537:2049(512) ack 1

      Note from the SYN/SYN-ack exchange, the RTT is over 600 msec.
      However, from the elapsed time between the third and fourth
      lines (the first packet being sent and then retransmitted), it
      is apparent the RTO was initialized to under 200 msec.
      The next line shows that this value has doubled to 400 msec
      (correct exponential backoff of RTO), but that still does not
      suffice to avoid an unnecessary retransmission.

      Finally, an ACK from B arrives for the first segment.  Later
      two more duplicate ACKs for 513 arrive, indicating that both
      the original and the two retransmissions arrived at B.
      (Indeed, a concurrent trace at B showed that no packets were
      lost during the entire connection.)  This ACK opens the
      congestion window to two packets, which are sent back-to-back,
      but at 07:52:41.703732 RTO again expires after a little over
      200 msec, leading to an unnecessary retransmission, and the
      pattern repeats.  By the end of the trace excerpt above, 1536
      bytes have been successfully transmitted from A to B, over an
      interval of more than 2 seconds, reflecting terrible
      performance.

   Trace file demonstrating correct behavior
      The following trace file was taken using tcpdump at host C, the
      data sender.  The advertised window and SYN options have been
      omitted for clarity.

      17:30:32.090299 C > D: S 2031744000:2031744000(0)
      17:30:32.900325 D > C: S 262737964:262737964(0)
                             ack 2031744001
      17:30:32.900326 C > D: . ack 1
      17:30:32.910326 C > D: . 1:513(512) ack 1
      17:30:34.150355 D > C: . ack 513
      17:30:34.150356 C > D: . 513:1025(512) ack 1
      17:30:34.150357 C > D: . 1025:1537(512) ack 1
      17:30:35.170384 D > C: . ack 1025
      17:30:35.170385 C > D: . 1537:2049(512) ack 1
      17:30:35.170386 C > D: . 2049:2561(512) ack 1
      17:30:35.320385 D > C: . ack 1537
      17:30:35.320386 C > D: . 2561:3073(512) ack 1
      17:30:35.320387 C > D: . 3073:3585(512) ack 1
      17:30:35.730384 D > C: . ack 2049

      The initial SYN/SYN-ack exchange shows that RTT is more than
      800 msec, and for some subsequent packets it rises above 1
      second, but C's retransmit timer does not ever expire.

   References
      This problem is documented in [Paxson97].

   How to detect
      This problem is readily detected by inspecting a packet trace
      of the startup of a TCP connection made over a long-delay path.
      It can be diagnosed from either a sender-side or receiver-side
      trace.  Long-delay paths can often be found by locating remote
      sites on other continents.

   How to fix
      As this problem arises from a faulty initialization, one hopes
      fixing it requires only a one-line change to the TCP source
      code.
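      In the illustrative style used earlier (the rto and srtt fields
      and the clock granularity HZ are hypothetical), the fix amounts
      to:

      #define HZ 100                    /* hypothetical ticks/second */
      #define TCP_RTO_INITIAL (3 * HZ)  /* 3 sec, RFC 1122, 4.2.3.1 */

      /* Until the first RTT measurement arrives, RTO must be a
       * conservative constant; RFC 1122 says it SHOULD be 3
       * seconds.  Initializing it to a smaller value, e.g. 200
       * msec, is exactly the bug shown in the first trace above. */
      void tcp_init_rto(struct tcp_conn *tp)
      {
          tp->srtt = 0;                /* no RTT estimate yet */
          tp->rto = TCP_RTO_INITIAL;
      }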
1261 When an ACK arrives that covers new data, cwnd is to be reduced by 1262 the amount by which it was artificially increased. However, some 1263 TCP implementations fail to "deflate" the window, causing an 1264 inappropriate amount of data to be sent into the network after 1265 recovery. One cause of this problem is the "header prediction" 1266 code, which is used to handle incoming segments that require little 1267 work. In some implementations of TCP, the header prediction code 1268 does not check to make sure cwnd has not been artificially 1269 inflated, and therefore does not reduce the artificially increased 1270 cwnd when appropriate. 1272 Significance 1273 TCP senders that exhibit this problem will transmit a burst of data 1274 immediately after recovery, which can degrade performance, as well 1275 as network stability. Effectively, the sender does not reduce the 1276 size of cwnd as much as it should (to half its value when loss was 1277 detected), if at all. This can harm the performance of the TCP 1278 connection itself, as well as competing TCP flows. 1280 Implications 1281 A TCP sender exhibiting this problem does not reduce cwnd 1282 appropriately in times of congestion, and therefore may contribute 1283 to congestive collapse. 1285 Relevant RFCs 1286 RFC 2001 outlines the fast retransmit/fast recovery algorithms. 1287 [Brakmo95] outlines this implementation problem and offers a fix. 1289 Trace file demonstrating it 1290 The following trace file was taken using tcpdump at host A, the 1291 data sender. The advertised window (which never changed) has been 1292 omitted for clarity, except for the first packet sent by each host. 1294 08:22:56.825635 A.7505 > B.7505: . 29697:30209(512) ack 1 win 4608 1295 08:22:57.038794 B.7505 > A.7505: . ack 27649 win 4096 1296 08:22:57.039279 A.7505 > B.7505: . 30209:30721(512) ack 1 1297 08:22:57.321876 B.7505 > A.7505: . ack 28161 1298 08:22:57.322356 A.7505 > B.7505: . 30721:31233(512) ack 1 1299 08:22:57.347128 B.7505 > A.7505: . ack 28673 1301 ID Known TCP Implementation Problems August 1998 1303 08:22:57.347572 A.7505 > B.7505: . 31233:31745(512) ack 1 1304 08:22:57.347782 A.7505 > B.7505: . 31745:32257(512) ack 1 1305 08:22:57.936393 B.7505 > A.7505: . ack 29185 1306 08:22:57.936864 A.7505 > B.7505: . 32257:32769(512) ack 1 1307 08:22:57.950802 B.7505 > A.7505: . ack 29697 win 4096 1308 08:22:57.951246 A.7505 > B.7505: . 32769:33281(512) ack 1 1309 08:22:58.169422 B.7505 > A.7505: . ack 29697 1310 08:22:58.638222 B.7505 > A.7505: . ack 29697 1311 08:22:58.643312 B.7505 > A.7505: . ack 29697 1312 08:22:58.643669 A.7505 > B.7505: . 29697:30209(512) ack 1 1313 08:22:58.936436 B.7505 > A.7505: . ack 29697 1314 08:22:59.002614 B.7505 > A.7505: . ack 29697 1315 08:22:59.003026 A.7505 > B.7505: . 33281:33793(512) ack 1 1316 08:22:59.682902 B.7505 > A.7505: . ack 33281 1317 08:22:59.683391 A.7505 > B.7505: P 33793:34305(512) ack 1 1318 08:22:59.683748 A.7505 > B.7505: P 34305:34817(512) ack 1 1319 08:22:59.684043 A.7505 > B.7505: P 34817:35329(512) ack 1 1320 08:22:59.684266 A.7505 > B.7505: P 35329:35841(512) ack 1 1321 08:22:59.684567 A.7505 > B.7505: P 35841:36353(512) ack 1 1322 08:22:59.684810 A.7505 > B.7505: P 36353:36865(512) ack 1 1323 08:22:59.685094 A.7505 > B.7505: P 36865:37377(512) ack 1 1325 The first 12 lines of the trace show incoming ACKs clocking out a 1326 window of data segments. At this point in the transfer, cwnd is 7 1327 segments. 
The next 4 lines of the trace show 3 duplicate ACKs
1328 arriving from the receiver, followed by a retransmission from the
1329 sender. At this point, cwnd is halved (to 3 segments) and
1330 artificially incremented by the three duplicate ACKs that have
1331 arrived, making cwnd 6 segments. The next two lines show 2 more
1332 duplicate ACKs arriving, each of which increases cwnd by 1 segment.
1333 So, after these two duplicate ACKs arrive the cwnd is 8 segments
1334 and the sender has permission to send 1 new segment (since there
1335 are 7 segments outstanding). The next line in the trace shows this
1336 new segment being transmitted. The next packet shown in the trace
1337 is an ACK from host B that covers the first 7 outstanding segments
1338 (all but the segment sent during recovery). This should cause cwnd
1339 to be reduced to 3 segments and 2 segments to be transmitted (since
1340 there is already 1 outstanding segment in the network). However,
1341 as shown by the last 7 lines of the trace, cwnd is not reduced,
1342 causing a line-rate burst of 7 new segments.

1344 Trace file demonstrating correct behavior
1345 The trace would appear identical to the one above, only it would
1346 stop after:

1348 08:22:59.683748 A.7505 > B.7505: P 34305:34817(512) ack 1

1350 ID Known TCP Implementation Problems August 1998

1352 because at this point host A would correctly reduce cwnd after
1353 recovery, allowing only 2 segments to be transmitted, rather than
1354 producing a burst of 7 segments.

1356 References
1357 This problem is documented and the performance implications
1358 analyzed in [Brakmo95].

1360 How to detect
1361 Failure of window deflation after loss recovery can be found by
1362 examining sender-side packet traces recorded during periods of
1363 moderate loss (so cwnd can grow large enough to allow for fast
1364 recovery when loss occurs).

1366 How to fix
1367 When this bug is caused by incorrect header prediction, the fix is
1368 to add a predicate to the header prediction test that checks to see
1369 whether cwnd is inflated; if so, the header prediction test fails
1370 and the usual ACK processing occurs, which (in this case) takes
1371 care to deflate the window.

1373 3.9.

1375 Name of Problem
1376 Excessively short keepalive connection timeout

1378 Classification
1379 Reliability

1381 Description
1382 Keep-alive is a mechanism for checking whether an idle connection
1383 is still alive. According to RFC-1122, keepalive should only be
1384 invoked in server applications that might otherwise hang
1385 indefinitely and consume resources unnecessarily if a client
1386 crashes or aborts a connection during a network failure.

1388 RFC-1122 also specifies that if a keep-alive mechanism is
1389 implemented it MUST NOT interpret failure to respond to any
1390 specific probe as a dead connection. The RFC does not specify a
1391 particular mechanism for timing out a connection when no response
1392 is received for keepalive probes. However, if the mechanism does
1393 not allow ample time for recovery from network congestion or delay,

1395 ID Known TCP Implementation Problems August 1998

1397 connections may be timed out unnecessarily.

1399 Significance
1400 In congested networks, can lead to unwarranted termination of
1401 connections.

1403 Implications
1404 It is possible for the network connection between two peer machines
1405 to become congested or to exhibit packet loss at the time that a
1406 keep-alive probe is sent on a connection.
If the keep-alive
1407 mechanism does not allow sufficient time before dropping
1408 connections in the face of unacknowledged probes, connections may
1409 be dropped even when both peers of a connection are still alive.

1411 Relevant RFCs
1412 RFC 1122 specifies that the keep-alive mechanism may be provided.
1413 It does not specify a mechanism for determining dead connections
1414 when keepalive probes are not acknowledged.

1416 Trace file demonstrating it
1417 Made using the Orchestra tool at the peer of the machine using
1418 keep-alive. After connection establishment, incoming keep-alives
1419 were dropped by Orchestra to simulate a dead connection.

1421 22:11:12.040000 A > B: 22666019:0 win 8192 datasz 4 SYN
1422 22:11:12.060000 B > A: 2496001:22666020 win 4096 datasz 4 SYN ACK
1423 22:11:12.130000 A > B: 22666020:2496002 win 8760 datasz 0 ACK
1424 (more than two hours elapse)
1425 00:23:00.680000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
1426 00:23:01.770000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
1427 00:23:02.870000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
1428 00:23:03.970000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
1429 00:23:05.070000 A > B: 22666019:2496002 win 8760 datasz 1 ACK

1431 The initial three packets are the SYN exchange for connection
1432 setup. About two hours later, the keepalive timer fires because
1433 the connection has been idle. Keepalive probes are transmitted a
1434 total of 5 times, with a 1 second spacing between probes, after
1435 which the connection is dropped. This is problematic because a 5
1436 second network outage at the time of the first probe results in the
1437 connection being killed.

1439 Trace file demonstrating correct behavior

1441 ID Known TCP Implementation Problems August 1998

1443 Made using the Orchestra tool at the peer of the machine using
1444 keep-alive. After connection establishment, incoming keep-alives
1445 were dropped by Orchestra to simulate a dead connection.

1447 16:01:52.130000 A > B: 1804412929:0 win 4096 datasz 4 SYN
1448 16:01:52.360000 B > A: 16512001:1804412930 win 4096 datasz 4 SYN ACK
1449 16:01:52.410000 A > B: 1804412930:16512002 win 4096 datasz 0 ACK
1450 (two hours elapse)
1451 18:01:57.170000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
1452 18:03:12.220000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
1453 18:04:27.270000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
1454 18:05:42.320000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
1455 18:06:57.370000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
1456 18:08:12.420000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
1457 18:09:27.480000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
1458 18:10:43.290000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
1459 18:11:57.580000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
1460 18:13:12.630000 A > B: 1804412929:16512002 win 4096 datasz 0 RST ACK

1462 In this trace, when the keep-alive timer expires, 9 keepalive
1463 probes are sent at 75 second intervals. 75 seconds after the last
1464 probe is sent, a final RST segment is sent indicating that the
1465 connection has been closed. This implementation waits about 11
1466 minutes before timing out the connection, while the first
1467 implementation shown allows only 5 seconds.

1469 References
1470 This problem is documented in [Dawson97].
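The conservative probe schedule of the second trace above can be
made concrete with a short sketch. This is illustrative only, not
code from any implementation studied; the constants simply mirror
that trace (9 probes at 75-second intervals, plus one final
interval), and all names are hypothetical:

   /* Sketch of a conservative keepalive drop policy. All names
    * are hypothetical; constants mirror the second trace above. */
   #define KEEPALIVE_IDLE_SECS  (2 * 60 * 60) /* 2 hours, per RFC 1122 */
   #define KEEPALIVE_INTVL_SECS 75
   #define KEEPALIVE_MAX_PROBES 9

   struct ka_state {
       int probes_sent;  /* probes sent without any response */
   };

   extern void send_keepalive_probe(void);
   extern void reset_connection(void);  /* send RST, drop state */
   extern void arm_timer(int seconds);

   /* Called when the keepalive timer fires with no response seen. */
   static void keepalive_timer_expired(struct ka_state *ka)
   {
       if (ka->probes_sent >= KEEPALIVE_MAX_PROBES) {
           reset_connection();  /* over 11 minutes have now elapsed */
           return;
       }
       send_keepalive_probe();
       ka->probes_sent++;
       arm_timer(KEEPALIVE_INTVL_SECS);
   }

As the "How to fix" discussion below notes, an implementation could
equally reuse its algorithm for timing out on dropped data rather
than a fixed schedule such as this one.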
1472 How to detect
1473 For implementations manifesting this problem, it shows up on a
1474 packet trace after the keepalive timer fires if the peer machine
1475 receiving the keepalive does not respond. Usually the keepalive
1476 timer will fire at least two hours after keepalive is turned on,
1477 but it may be sooner if the timer value has been configured lower,
1478 or if the keepalive mechanism violates the specification (see the
1479 "Insufficient interval between keepalives" problem). In this
1480 example, suppressing the response of the peer to keepalive probes
1481 was accomplished using the Orchestra toolkit, which can be
1482 configured to drop packets. It could also have been done by
1483 creating a connection, turning on keepalive, and disconnecting the
1484 network connection at the receiver machine.

1486 How to fix
1487 This problem can be fixed by using a different method for timing

1489 ID Known TCP Implementation Problems August 1998

1491 out keepalives that allows a longer period of time to elapse before
1492 dropping the connection. For example, the algorithm for timing out
1493 on dropped data could be used. Another possibility is an algorithm
1494 such as the one shown in the trace above, which sends 9 probes at
1495 75 second intervals and then waits an additional 75 seconds for a
1496 response before closing the connection.

1498 3.10.

1500 Name of Problem
1501 Failure to back off retransmission timeout

1503 Classification
1504 Congestion control / reliability

1506 Description
1507 The retransmission timeout is used to determine when a packet has
1508 been dropped in the network. When this timeout has expired without
1509 the arrival of an ACK, the segment is retransmitted. Each time a
1510 segment is retransmitted, the timeout is adjusted according to an
1511 exponential backoff algorithm, doubling each time. If a TCP fails
1512 to receive an ACK after numerous attempts at retransmitting the
1513 same segment, it terminates the connection. A TCP that fails to
1514 double its retransmission timeout upon repeated timeouts is said to
1515 exhibit "Failure to back off retransmission timeout".

1517 Significance
1518 Backing off the retransmission timer is a cornerstone of network
1519 stability in the presence of congestion. Consequently, this bug
1520 can have severe adverse effects in congested networks. It also
1521 affects TCP reliability in congested networks, as discussed in the
1522 next section.

1524 Implications
1525 It is possible for the network connection between two TCP peers to
1526 become congested or to exhibit packet loss at the time that a
1527 retransmission is sent on a connection. If the retransmission
1528 mechanism does not allow sufficient time before dropping
1529 connections in the face of unacknowledged segments, connections may
1530 be dropped even when, by waiting longer, the connection could have
1531 continued.

1533 ID Known TCP Implementation Problems August 1998

1535 Relevant RFCs
1536 RFC 1122 specifies mandatory exponential backoff of the
1537 retransmission timeout, and the termination of connections after
1538 some period of time (at least 100 seconds).

1540 Trace file demonstrating it
1541 Made using tcpdump on an intermediate host:

1543 16:51:12.671727 A > B: S 510878852:510878852(0) win 16384
1544 16:51:12.672479 B > A: S 2392143687:2392143687(0) ack 510878853 win 16384
1545 16:51:12.672581 A > B: . ack 1 win 16384
1546 16:51:15.244171 A > B: P 1:3(2) ack 1 win 16384
1547 16:51:15.244933 B > A: . ack 3 win 17518 (DF)

1549

1551 16:51:19.381176 A > B: P 3:5(2) ack 1 win 16384
1552 16:51:20.162016 A > B: P 3:5(2) ack 1 win 16384
1553 16:51:21.161936 A > B: P 3:5(2) ack 1 win 16384
1554 16:51:22.161914 A > B: P 3:5(2) ack 1 win 16384
1555 16:51:23.161914 A > B: P 3:5(2) ack 1 win 16384
1556 16:51:24.161879 A > B: P 3:5(2) ack 1 win 16384
1557 16:51:25.161857 A > B: P 3:5(2) ack 1 win 16384
1558 16:51:26.161836 A > B: P 3:5(2) ack 1 win 16384
1559 16:51:27.161814 A > B: P 3:5(2) ack 1 win 16384
1560 16:51:28.161791 A > B: P 3:5(2) ack 1 win 16384
1561 16:51:29.161769 A > B: P 3:5(2) ack 1 win 16384
1562 16:51:30.161750 A > B: P 3:5(2) ack 1 win 16384
1563 16:51:31.161727 A > B: P 3:5(2) ack 1 win 16384

1565 16:51:32.161701 A > B: R 5:5(0) ack 1 win 16384

1567 The initial three packets are the SYN exchange for connection
1568 setup, then a single data packet, to verify that data can be
1569 transferred. Then the connection to the destination host was
1570 disconnected, and more data sent. Retransmissions occur every
1571 second for 12 seconds, and then the connection is terminated with a
1572 RST. This is problematic because a 12 second pause in connectivity
1573 could result in the termination of a connection.

1575 Trace file demonstrating correct behavior
1576 Again, a tcpdump taken from a third host:

1578 16:59:05.398301 A > B: S 2503324757:2503324757(0) win 16384
1579 16:59:05.399673 B > A: S 2492674648:2492674648(0) ack 2503324758 win 16384
1580 16:59:05.399866 A > B: . ack 1 win 17520

1582 ID Known TCP Implementation Problems August 1998

1584 16:59:06.538107 A > B: P 1:3(2) ack 1 win 17520
1585 16:59:06.540977 B > A: . ack 3 win 17518 (DF)

1587

1589 16:59:13.121542 A > B: P 3:5(2) ack 1 win 17520
1590 16:59:14.010928 A > B: P 3:5(2) ack 1 win 17520
1591 16:59:16.010979 A > B: P 3:5(2) ack 1 win 17520
1592 16:59:20.011229 A > B: P 3:5(2) ack 1 win 17520
1593 16:59:28.011896 A > B: P 3:5(2) ack 1 win 17520
1594 16:59:44.013200 A > B: P 3:5(2) ack 1 win 17520
1595 17:00:16.015766 A > B: P 3:5(2) ack 1 win 17520
1596 17:01:20.021308 A > B: P 3:5(2) ack 1 win 17520
1597 17:02:24.027752 A > B: P 3:5(2) ack 1 win 17520
1598 17:03:28.034569 A > B: P 3:5(2) ack 1 win 17520
1599 17:04:32.041567 A > B: P 3:5(2) ack 1 win 17520
1600 17:05:36.048264 A > B: P 3:5(2) ack 1 win 17520
1601 17:06:40.054900 A > B: P 3:5(2) ack 1 win 17520

1603 17:07:44.061306 A > B: R 5:5(0) ack 1 win 17520

1605 In this trace, when the retransmission timer expires, 12
1606 retransmissions are sent at exponentially-increasing intervals,
1607 until the interval value reaches 64 seconds, at which time the
1608 interval stops growing. 64 seconds after the last retransmission,
1609 a final RST segment is sent indicating that the connection has been
1610 closed. This implementation waits about 9 minutes before timing
1611 out the connection, while the first implementation shown allows
1612 only 12 seconds.

1614 References
1615 None known.

1617 How to detect
1618 A simple transfer can be easily interrupted by disconnecting the
1619 receiving host from the network. tcpdump or another appropriate
1620 tool should show the retransmissions being sent. Several trials in
1621 a low-RTT environment may be required to demonstrate the bug.
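The backoff pattern in the second trace can be summarized with a
small sketch. This is illustrative only; the 1-second floor, the
64-second cap, and the retry limit are assumptions chosen to mirror
that trace rather than values mandated by any RFC:

   /* Sketch of correct exponential backoff of the RTO. Constants
    * mirror the second trace above; all names are hypothetical. */
   #define RTO_MIN_SECS 1
   #define RTO_MAX_SECS 64
   #define MAX_RETRANS  12

   static int backed_off_rto(int base_rto_secs, int nretrans)
   {
       int rto = base_rto_secs;

       if (rto < RTO_MIN_SECS)
           rto = RTO_MIN_SECS;   /* apply the floor BEFORE backing off */
       while (nretrans-- > 0 && rto < RTO_MAX_SECS)
           rto *= 2;             /* double per retransmission */
       return (rto < RTO_MAX_SECS) ? rto : RTO_MAX_SECS;
   }

Note that the floor is applied before the doubling. As discussed
under "How to fix" below, at least one implementation instead
applied a 1-second minimum only after multiplying a (zero) RTO by
the backoff factor, which defeats the backoff entirely.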
1623 How to fix 1624 For one of the implementations studied, this problem seemed to be 1625 the result of an error introduced with the addition of the Brakmo- 1626 Peterson RTO algorithm [Brakmo95], which can return a value of zero 1627 where the older Jacobson algorithm would always have a minimum 1628 value of three. Brakmo and Peterson specified an additional step 1630 ID Known TCP Implementation Problems August 1998 1632 of min(rtt + 2, RTO) to avoid problems with this. Unfortunately, 1633 in the implementation this step was omitted when calculating the 1634 exponential backoff for the RTO. This results in an RTO of 0 1635 seconds being multiplied by the backoff, yielding again zero, and 1636 then being subjected to a later MAX operation that increases it to 1637 1 second, regardless of the backoff factor. 1639 A similar TCP persist failure has the same cause. 1641 3.11. 1643 Name of Problem 1644 Insufficient interval between keepalives 1646 Classification 1647 Reliability 1649 Description 1650 Keep-alive is a mechanism for checking whether an idle connection 1651 is still alive. According to RFC-1122, keep-alive may be included 1652 in an implementation. If it is included, the interval between 1653 keep-alive packets MUST be configurable, and MUST default to no 1654 less than two hours. 1656 Significance 1657 In congested networks, can lead to unwarranted termination of 1658 connections. 1660 Implications 1661 According to RFC-1122, keep-alive is not required of 1662 implementations because it could: (1) cause perfectly good 1663 connections to break during transient Internet failures; (2) 1664 consume unnecessary bandwidth ("if no one is using the connection, 1665 who cares if it is still good?"); and (3) cost money for an 1666 Internet path that charges for packets. Regarding this last point, 1667 we note that in addition the presence of dial-on-demand links in 1668 the route can greatly magnify the cost penalty of excess 1669 keepalives, potentially forcing a full-time connection on a link 1670 that would otherwise only be connected a few minutes a day. 1672 If keepalive is provided the RFC states that the required inter- 1673 keepalive distance MUST default to no less than two hours. If it 1674 does not, the probability of connections breaking increases, the 1676 ID Known TCP Implementation Problems August 1998 1678 bandwidth used due to keepalives increases, and cost increases over 1679 paths which charge per packet. 1681 Relevant RFCs 1682 RFC 1122 specifies that the keep-alive mechanism may be provided. 1683 It also specifies the two hour minimum for the default interval 1684 between keepalive probes. 1686 Trace file demonstrating it 1687 Made using the Orchestra tool at the peer of the machine using 1688 keep-alive. Machine A was configured to use default settings for 1689 the keepalive timer. 
1691 11:36:32.910000 A > B: 3288354305:0 win 28672 datasz 4 SYN 1692 11:36:32.930000 B > A: 896001:3288354306 win 4096 datasz 4 SYN ACK 1693 11:36:32.950000 A > B: 3288354306:896002 win 28672 datasz 0 ACK 1695 11:50:01.190000 A > B: 3288354305:896002 win 28672 datasz 0 ACK 1696 11:50:01.210000 B > A: 896002:3288354306 win 4096 datasz 0 ACK 1698 12:03:29.410000 A > B: 3288354305:896002 win 28672 datasz 0 ACK 1699 12:03:29.430000 B > A: 896002:3288354306 win 4096 datasz 0 ACK 1701 12:16:57.630000 A > B: 3288354305:896002 win 28672 datasz 0 ACK 1702 12:16:57.650000 B > A: 896002:3288354306 win 4096 datasz 0 ACK 1704 12:30:25.850000 A > B: 3288354305:896002 win 28672 datasz 0 ACK 1705 12:30:25.870000 B > A: 896002:3288354306 win 4096 datasz 0 ACK 1707 12:43:54.070000 A > B: 3288354305:896002 win 28672 datasz 0 ACK 1708 12:43:54.090000 B > A: 896002:3288354306 win 4096 datasz 0 ACK 1710 The initial three packets are the SYN exchange for connection 1711 setup. About 13 minutes later, the keepalive timer fires because 1712 the connection is idle. The keepalive is acknowledged, and the 1713 timer fires again in about 13 more minutes. This behavior 1714 continues indefinitely until the connection is closed, and is a 1715 violation of the specification. 1717 Trace file demonstrating correct behavior 1718 Made using the Orchestra tool at the peer of the machine using 1719 keep-alive. Machine A was configured to use default settings for 1720 the keepalive timer. 1722 17:37:20.500000 A > B: 34155521:0 win 4096 datasz 4 SYN 1724 ID Known TCP Implementation Problems August 1998 1726 17:37:20.520000 B > A: 6272001:34155522 win 4096 datasz 4 SYN ACK 1727 17:37:20.540000 A > B: 34155522:6272002 win 4096 datasz 0 ACK 1729 19:37:25.430000 A > B: 34155521:6272002 win 4096 datasz 0 ACK 1730 19:37:25.450000 B > A: 6272002:34155522 win 4096 datasz 0 ACK 1732 21:37:30.560000 A > B: 34155521:6272002 win 4096 datasz 0 ACK 1733 21:37:30.570000 B > A: 6272002:34155522 win 4096 datasz 0 ACK 1735 23:37:35.580000 A > B: 34155521:6272002 win 4096 datasz 0 ACK 1736 23:37:35.600000 B > A: 6272002:34155522 win 4096 datasz 0 ACK 1738 01:37:40.620000 A > B: 34155521:6272002 win 4096 datasz 0 ACK 1739 01:37:40.640000 B > A: 6272002:34155522 win 4096 datasz 0 ACK 1741 03:37:45.590000 A > B: 34155521:6272002 win 4096 datasz 0 ACK 1742 03:37:45.610000 B > A: 6272002:34155522 win 4096 datasz 0 ACK 1744 The initial three packets are the SYN exchange for connection 1745 setup. Just over two hours later, the keepalive timer fires 1746 because the connection is idle. The keepalive is acknowledged, and 1747 the timer fires again just over two hours later. This behavior 1748 continues indefinitely until the connection is closed. 1750 References 1751 This problem is documented in [Dawson97]. 1753 How to detect 1754 For implementations manifesting this problem, it shows up on a 1755 packet trace. If the connection is left idle, the keepalive probes 1756 will arrive closer together than the two hour minimum. 1758 3.12. 1760 Name of Problem 1761 Stretch ACK violation 1763 Classification 1764 Congestion Control/Performance 1766 Description 1767 To improve efficiency (both computer and network) a data receiver 1768 may refrain from sending an ACK for each incoming segment, 1770 ID Known TCP Implementation Problems August 1998 1772 according to [RFC1122]. However, an ACK should not be delayed an 1773 inordinate amount of time. Specifically, ACKs MUST be sent for 1774 every second full-sized segment that arrives. 
If a second full- 1775 sized segment does not arrive within a given timeout (of no more 1776 than 0.5 seconds), an ACK must be transmitted, according to 1777 [RFC1122]. A TCP receiver which does not generate an ACK for every 1778 second full-sized segment exhibits a "Stretch ACK Violation". 1780 Significance 1781 TCP receivers exhibiting this behavior will cause TCP senders to 1782 generate burstier traffic, which can degrade performance in 1783 congested environments. In addition, generating fewer ACKs 1784 increases the amount of time needed by the slow start algorithm to 1785 open the congestion window to an appropriate point, which 1786 diminishes performance in environments with large bandwidth-delay 1787 products. Finally, generating fewer ACKs may cause needless 1788 retransmission timeouts in lossy environments, as it increases the 1789 possibility that an entire window of ACKs is lost, forcing a 1790 retransmission timeout. 1792 Implications 1793 When not in loss recovery, every ACK received by a TCP sender 1794 triggers the transmission of new data segments. The burst size is 1795 determined by the number of previously unacknowledged segments each 1796 ACK covers. Therefore, a TCP receiver ACKing more than 2 segments 1797 at a time causes the sending TCP to generate a larger burst of 1798 traffic upon receipt of the ACK. This large burst of traffic can 1799 overwhelm an intervening gateway, leading to higher drop rates for 1800 both the connection and other connections passing through the 1801 congested gateway. 1803 In addition, the TCP slow start algorithm increases the congestion 1804 window by 1 segment for each ACK received. Therefore, increasing 1805 the ACK interval (thus decreasing the rate at which ACKs are 1806 transmitted) increases the amount of time it takes slow start to 1807 increase the congestion window to an appropriate operating point, 1808 and the connection consequently suffers from reduced performance. 1809 This is especially true for connections using large windows. 1811 Relevant RFCs 1812 RFC 1122 outlines delayed ACKs as a recommended mechanism. 1814 Trace file demonstrating it 1815 Trace file taken using tcpdump at host B, the data receiver (and 1817 ID Known TCP Implementation Problems August 1998 1819 ACK originator). The advertised window (which never changed) and 1820 timestamp options have been omitted for clarity, except for the 1821 first packet sent by A: 1823 12:09:24.820187 A.1174 > B.3999: . 2049:3497(1448) ack 1 1824 win 33580 [tos 0x8] 1825 12:09:24.824147 A.1174 > B.3999: . 3497:4945(1448) ack 1 1826 12:09:24.832034 A.1174 > B.3999: . 4945:6393(1448) ack 1 1827 12:09:24.832222 B.3999 > A.1174: . ack 6393 1828 12:09:24.934837 A.1174 > B.3999: . 6393:7841(1448) ack 1 1829 12:09:24.942721 A.1174 > B.3999: . 7841:9289(1448) ack 1 1830 12:09:24.950605 A.1174 > B.3999: . 9289:10737(1448) ack 1 1831 12:09:24.950797 B.3999 > A.1174: . ack 10737 1832 12:09:24.958488 A.1174 > B.3999: . 10737:12185(1448) ack 1 1833 12:09:25.052330 A.1174 > B.3999: . 12185:13633(1448) ack 1 1834 12:09:25.060216 A.1174 > B.3999: . 13633:15081(1448) ack 1 1835 12:09:25.060405 B.3999 > A.1174: . ack 15081 1837 This portion of the trace clearly shows that the receiver (host B) 1838 sends an ACK for every third full sized packet received. Further 1839 investigation of this implementation found that the cause of the 1840 increased ACK interval was the TCP options being used. The 1841 implementation sent an ACK after it was holding 2*MSS worth of 1842 unacknowledged data. 
In the above case, the MSS is 1460 bytes so
1843 the receiver transmits an ACK after it is holding at least 2920
1844 bytes of unacknowledged data. However, the length of the TCP
1845 options being used [RFC1323] took 12 bytes away from the data
1846 portion of each packet. This produced packets containing 1448
1847 bytes of data. But the additional bytes used by the options in the
1848 header were not taken into account when determining when to trigger
1849 an ACK. Therefore, it took 3 data segments before the data
1850 receiver was holding enough unacknowledged data (>= 2*MSS, or 2920
1851 bytes in the above example) to transmit an ACK.

1853 Trace file demonstrating correct behavior

1855 Trace file taken using tcpdump at host B, the data receiver (and
1856 ACK originator), again with window and timestamp information
1857 omitted except for the first packet:

1859 12:06:53.627320 A.1172 > B.3999: . 1449:2897(1448) ack 1
1860 win 33580 [tos 0x8]
1861 12:06:53.634773 A.1172 > B.3999: . 2897:4345(1448) ack 1
1862 12:06:53.634961 B.3999 > A.1172: . ack 4345
1863 12:06:53.737326 A.1172 > B.3999: . 4345:5793(1448) ack 1
1864 12:06:53.744401 A.1172 > B.3999: . 5793:7241(1448) ack 1
1865 12:06:53.744592 B.3999 > A.1172: . ack 7241

1867 ID Known TCP Implementation Problems August 1998

1869 12:06:53.752287 A.1172 > B.3999: . 7241:8689(1448) ack 1
1870 12:06:53.847332 A.1172 > B.3999: . 8689:10137(1448) ack 1
1871 12:06:53.847525 B.3999 > A.1172: . ack 10137

1873 This trace shows the TCP receiver (host B) ack'ing every second
1874 full-sized packet, according to [RFC1122]. This is the same
1875 implementation shown above, with slight modifications that allow
1876 the receiver to take the length of the options into account when
1877 deciding when to transmit an ACK.

1879 References
1880 This problem is documented in [Allman97] and [Paxson97].

1882 How to detect
1883 Stretch ACK violations show up immediately in receiver-side packet
1884 traces of bulk transfers, as shown above. However, packet traces
1885 made on the sender side of the TCP connection may lead to
1886 ambiguities when diagnosing this problem due to the possibility of
1887 lost ACKs.

1889 3.13.

1891 Name of Problem
1892 Retransmission sends multiple packets

1894 Classification
1895 Congestion control

1897 Description
1898 When a TCP retransmits a segment due to a timeout expiration, or
1899 when beginning a fast retransmission sequence, it should only
1900 transmit a single segment. A TCP that transmits more than one
1901 segment exhibits "Retransmission Sends Multiple Packets".

1903 Instances of this problem have been known to occur due to
1904 miscomputations involving the use of TCP options. TCP options
1905 increase the TCP header beyond its usual size of 20 bytes. The
1906 total size of the header must be taken into account when
1907 retransmitting a packet. If a TCP sender does not account for the
1908 length of the TCP options when determining how much data to
1909 retransmit, it will send too much data to fit into a single
1910 packet. In this case, the correct retransmission will be followed
1911 by a short segment (tinygram) containing data that may not need to
1912 be retransmitted.

1913 A specific case is a TCP using the RFC 1323 timestamp option, which

1915 ID Known TCP Implementation Problems August 1998

1917 adds 12 bytes to the standard 20-byte TCP header. On
1918 retransmission of a packet, the 12-byte option is incorrectly
1919 interpreted as part of the data portion of the segment. A standard
1920 TCP header and a new 12-byte option are added to the data, which
1921 yields a transmission of 12 bytes more data than contained in the
1922 original segment. This overflow causes a smaller packet, with 12
1923 data bytes, to be transmitted.

1925 Significance
1926 This problem is somewhat serious for congested environments because
1927 the TCP implementation injects more packets into the network than
1928 is appropriate. However, since a tinygram is only sent in response
1929 to a fast retransmit or a timeout, it does not affect the sustained
1930 sending rate.

1932 Implications
1933 A TCP exhibiting this behavior is stressing the network with more
1934 traffic than appropriate, and stressing routers by increasing the
1935 number of packets they must process. The redundant tinygram will
1936 also elicit a duplicate ack from the receiver, resulting in yet
1937 another unnecessary transmission.

1939 Relevant RFCs
1940 RFC 1122 requires use of slow start after loss; RFC 2001 explicates
1941 slow start; RFC 1323 describes the timestamp option that has been
1942 observed to lead to some implementations exhibiting this problem.

1944 Trace file demonstrating it
1945 Made using tcpdump/BPF recording at a machine on the same subnet as
1946 Host A. Host A is the sender and Host B is the receiver. The
1947 advertised window and timestamp options have been omitted for
1948 clarity, except for the first segment sent by host A. In addition,
1949 portions of the trace file not pertaining to the packet in question
1950 have been removed (missing packets are denoted by ``[...]'' in the
1951 trace).

1953 11:55:22.701668 A > B: . 7361:7821(460) ack 1
1954 win 49324
1955 11:55:22.702109 A > B: . 7821:8281(460) ack 1

1957 [...]

1959 11:55:23.112405 B > A: . ack 7821
1960 11:55:23.113069 A > B: . 12421:12881(460) ack 1

1962 ID Known TCP Implementation Problems August 1998

1964 11:55:23.113511 A > B: . 12881:13341(460) ack 1
1965 11:55:23.333077 B > A: . ack 7821
1966 11:55:23.336860 B > A: . ack 7821
1967 11:55:23.340638 B > A: . ack 7821
1968 11:55:23.341290 A > B: . 7821:8281(460) ack 1
1969 11:55:23.341317 A > B: . 8281:8293(12) ack 1
1970 11:55:23.498242 B > A: . ack 7821
1971 11:55:23.506850 B > A: . ack 7821
1972 11:55:23.510630 B > A: . ack 7821

1974 [...]

1976 11:55:23.746649 B > A: . ack 10581

1978 The second line of the above trace shows the original transmission
1979 of a segment which is later dropped. After 3 duplicate ACKs, line
1980 9 of the trace shows the dropped packet (7821:8281), with a 460-
1981 byte payload, being retransmitted. Immediately following this
1982 retransmission, a packet with a 12-byte payload is unnecessarily
1983 sent.

1985 Trace file demonstrating correct behavior

1987 The trace file would be identical to the one above, with a single
1988 line:

1990 11:55:23.341317 A > B: . 8281:8293(12) ack 1

1992 omitted.

1994 References
1995 [Brakmo95]

1997 How to detect
1998 This problem can be detected by examining a packet trace of the TCP
1999 connections of a machine using TCP options, during which a packet
2000 is retransmitted.

2002 3.14.

2004 Name of Problem
2005 Failure to send FIN notification promptly

2007 ID Known TCP Implementation Problems August 1998

2009 Classification
2010 Performance

2012 Description
2013 When an application closes a connection, the corresponding TCP
2014 should send the FIN notification promptly to its peer (unless
2015 prevented by the congestion window). If a TCP implementation
2016 delays in sending the FIN notification, for example because it
2017 waits until unacknowledged data has been acknowledged, then it is
2018 said to exhibit "Failure to send FIN notification promptly".

2020 Also, while not strictly required, FIN segments should include the
2021 PSH flag to ensure expedited delivery of any pending data at the
2022 receiver.

2024 Significance
2025 The greatest impact occurs for short-lived connections, since for
2026 these the additional time required to close the connection
2027 introduces the greatest relative delay.

2029 The additional time can be significant in the common case of the
2030 sender waiting for an ACK that is delayed by the receiver.

2032 Implications
2033 Can diminish total throughput as seen at the application layer,
2034 because connection termination takes longer to complete.

2036 Relevant RFCs
2037 RFC 793 indicates that a receiver should treat an incoming FIN flag
2038 as implying the push function.

2040 Trace file demonstrating it
2041 Made using tcpdump (no losses reported).

2043 10:04:38.68 A > B: S 1031850376:1031850376(0) win 4096
2044 (DF)
2045 10:04:38.71 B > A: S 596916473:596916473(0) ack 1031850377
2046 win 8760 (DF)
2047 10:04:38.73 A > B: . ack 1 win 4096 (DF)
2048 10:04:41.98 A > B: P 1:4(3) ack 1 win 4096 (DF)
2049 10:04:42.15 B > A: . ack 4 win 8757 (DF)
2050 10:04:42.23 A > B: P 4:7(3) ack 1 win 4096 (DF)
2051 10:04:42.25 B > A: P 1:11(10) ack 7 win 8754 (DF)

2053 ID Known TCP Implementation Problems August 1998

2055 10:04:42.32 A > B: . ack 11 win 4096 (DF)
2056 10:04:42.33 B > A: P 11:51(40) ack 7 win 8754 (DF)
2057 10:04:42.51 A > B: . ack 51 win 4096 (DF)
2058 10:04:42.53 B > A: F 51:51(0) ack 7 win 8754 (DF)
2059 10:04:42.56 A > B: FP 7:7(0) ack 52 win 4096 (DF)
2060 10:04:42.58 B > A: . ack 8 win 8754 (DF)

2062 Machine B in the trace above does not send out a FIN notification
2063 promptly if there is any data outstanding. It instead waits for
2064 all unacknowledged data to be acknowledged before sending the FIN
2065 segment. The connection was closed at 10:04:42.33 after requesting
2066 40 bytes to be sent. However, the FIN notification isn't sent
2067 until 10:04:42.53, after the (delayed) acknowledgement of the 40
2068 bytes of data arrives at 10:04:42.51.

2070 Trace file demonstrating correct behavior
2071 Made using tcpdump (no losses reported).

2073 10:27:53.85 C > D: S 419744533:419744533(0) win 4096
2074 (DF)
2075 10:27:53.92 D > C: S 10082297:10082297(0) ack 419744534
2076 win 8760 (DF)
2077 10:27:53.95 C > D: . ack 1 win 4096 (DF)
2078 10:27:54.42 C > D: P 1:4(3) ack 1 win 4096 (DF)
2079 10:27:54.62 D > C: . ack 4 win 8757 (DF)
2080 10:27:54.76 C > D: P 4:7(3) ack 1 win 4096 (DF)
2081 10:27:54.89 D > C: P 1:11(10) ack 7 win 8754 (DF)
2082 10:27:54.90 D > C: FP 11:51(40) ack 7 win 8754 (DF)
2083 10:27:54.92 C > D: . ack 52 win 4096 (DF)
2084 10:27:55.01 C > D: FP 7:7(0) ack 52 win 4096 (DF)
2085 10:27:55.09 D > C: . ack 8 win 8754 (DF)

2087 Here, Machine D sends a FIN with 40 bytes of data even before the
2088 original 10 octets have been acknowledged. This is correct
2089 behavior, as it provides for the highest performance.

2091 References
2092 This problem is documented in [Dawson97].

2094 How to detect
2095 For implementations manifesting this problem, it shows up on a
2096 packet trace.

2098 ID Known TCP Implementation Problems August 1998

2100 3.15.
2102 Name of Problem 2103 Failure to send a RST after Half Duplex Close 2105 Classification 2106 Resource management 2108 Description 2109 RFC 1122 4.2.2.13 states that a TCP SHOULD send a RST if data is 2110 received after "half duplex close", i.e. if it cannot be delivered 2111 to the application. A TCP that fails to do so is said to exhibit 2112 "Failure to send a RST after Half Duplex Close". 2114 Significance 2115 Potentially serious for TCP endpoints that manage large numbers of 2116 connections, due to exhaustion of memory and/or process slots 2117 available for managing connection state. 2119 Implications 2120 Failure to send the RST can lead to permanently hung TCP 2121 connections. This problem has been demonstrated when HTTP clients 2122 abort connections, common when users move on to a new page before 2123 the current page has finished downloading. The HTTP client closes 2124 by transmitting a FIN while the server is transmitting images, 2125 text, etc. The server TCP receives the FIN, but its application 2126 does not close the connection until all data has been queued for 2127 transmission. Since the server will not transmit a FIN until all 2128 the preceding data has been transmitted, deadlock results if the 2129 client TCP does not consume the pending data or tear down the 2130 connection: the window decreases to zero, since the client cannot 2131 pass the data to the application, and the server sends probe 2132 segments. The client acknowledges the probe segments with a zero 2133 window. As mandated in RFC1122 4.2.2.17, the probe segments are 2134 transmitted forever. Server connection state remains in 2135 CLOSE_WAIT, and eventually server processes are exhausted. 2137 Note that there are two bugs. First, probe segments should be 2138 ignored if the window can never subsequently increase. Second, a 2139 RST should be sent when data is received after half duplex close. 2140 Fixing the first bug, but not the second, results in the probe 2141 segments eventually timing out the connection, but the server 2142 remains in CLOSE_WAIT for a significant and unnecessary period. 2144 ID Known TCP Implementation Problems August 1998 2146 Relevant RFCs 2147 RFC 1122 sections 4.2.2.13 and 4.2.2.17. 2149 Trace file demonstrating it 2150 Made using an unknown network analyzer. No drop information 2151 available. 
2153 client.1391 > server.8080: S 0:1(0) ack: 0 win: 2000 2154 server.8080 > client.1391: SA 8c01:8c02(0) ack: 1 win: 8000 2155 client.1391 > server.8080: PA 2156 client.1391 > server.8080: PA 1:1c2(1c1) ack: 8c02 win: 2000 2157 server.8080 > client.1391: [DF] PA 8c02:8cde(dc) ack: 1c2 win: 8000 2158 server.8080 > client.1391: [DF] A 8cde:9292(5b4) ack: 1c2 win: 8000 2159 server.8080 > client.1391: [DF] A 9292:9846(5b4) ack: 1c2 win: 8000 2160 server.8080 > client.1391: [DF] A 9846:9dfa(5b4) ack: 1c2 win: 8000 2161 client.1391 > server.8080: PA 2162 server.8080 > client.1391: [DF] A 9dfa:a3ae(5b4) ack: 1c2 win: 8000 2163 server.8080 > client.1391: [DF] A a3ae:a962(5b4) ack: 1c2 win: 8000 2164 server.8080 > client.1391: [DF] A a962:af16(5b4) ack: 1c2 win: 8000 2165 server.8080 > client.1391: [DF] A af16:b4ca(5b4) ack: 1c2 win: 8000 2166 client.1391 > server.8080: PA 2167 server.8080 > client.1391: [DF] A b4ca:ba7e(5b4) ack: 1c2 win: 8000 2168 server.8080 > client.1391: [DF] A b4ca:ba7e(5b4) ack: 1c2 win: 8000 2169 client.1391 > server.8080: PA 2170 server.8080 > client.1391: [DF] A ba7e:bdfa(37c) ack: 1c2 win: 8000 2171 client.1391 > server.8080: PA 2172 server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c2 win: 8000 2173 client.1391 > server.8080: PA 2175 [ HTTP client aborts and enters FIN_WAIT_1 ] 2177 client.1391 > server.8080: FPA 2179 [ server ACKs the FIN and enters CLOSE_WAIT ] 2181 server.8080 > client.1391: [DF] A 2183 [ client enters FIN_WAIT_2 ] 2185 server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c3 win: 8000 2187 [ server continues to try to send its data ] 2189 client.1391 > server.8080: PA < window = 0 > 2190 server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c3 win: 8000 2191 client.1391 > server.8080: PA < window = 0 > 2192 server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c3 win: 8000 2194 ID Known TCP Implementation Problems August 1998 2196 client.1391 > server.8080: PA < window = 0 > 2197 server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c3 win: 8000 2198 client.1391 > server.8080: PA < window = 0 > 2199 server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c3 win: 8000 2200 client.1391 > server.8080: PA < window = 0 > 2202 [ ... repeat ad exhaustium ... ] 2204 Trace file demonstrating correct behavior 2205 Made using an unknown network analyzer. No drop information 2206 available. 2208 client > server D=80 S=59500 Syn Seq=337 Len=0 Win=8760 2209 server > client D=59500 S=80 Syn Ack=338 Seq=80153 Len=0 Win=8760 2210 client > server D=80 S=59500 Ack=80154 Seq=338 Len=0 Win=8760 2212 [ ... normal data omitted ... 
]

2214 client > server D=80 S=59500 Ack=14559 Seq=596 Len=0 Win=8760
2215 server > client D=59500 S=80 Ack=596 Seq=114559 Len=1460 Win=8760

2217 [ client closes connection ]

2219 client > server D=80 S=59500 Fin Seq=596 Len=0 Win=8760
2220 server > client D=59500 S=80 Ack=597 Seq=116019 Len=1460 Win=8760

2222 [ client sends RST (RFC1122 4.2.2.13) ]

2224 client > server D=80 S=59500 Rst Seq=597 Len=0 Win=0
2225 server > client D=59500 S=80 Ack=597 Seq=117479 Len=1460 Win=8760
2226 client > server D=80 S=59500 Rst Seq=597 Len=0 Win=0
2227 server > client D=59500 S=80 Ack=597 Seq=118939 Len=1460 Win=8760
2228 client > server D=80 S=59500 Rst Seq=597 Len=0 Win=0
2229 server > client D=59500 S=80 Ack=597 Seq=120399 Len=892 Win=8760
2230 client > server D=80 S=59500 Rst Seq=597 Len=0 Win=0
2231 server > client D=59500 S=80 Ack=597 Seq=121291 Len=1460 Win=8760
2232 client > server D=80 S=59500 Rst Seq=597 Len=0 Win=0

2234 "client" sends a number of RSTs, one in response to each incoming
2235 packet from "server". One might wonder why "server" keeps sending
2236 data packets after it has received a RST from "client"; the
2237 explanation is that "server" had already transmitted all five of
2238 the data packets before receiving the first RST from "client", so
2239 it is too late to avoid transmitting them.

2241 ID Known TCP Implementation Problems August 1998

2243 How to detect
2244 The problem can be detected by inspecting packet traces of a large,
2245 interrupted bulk transfer.

2247 3.16.

2249 Name of Problem
2250 Failure to RST on close with data pending

2252 Classification
2253 Resource management

2255 Description
2256 When an application closes a connection in such a way that it can
2257 no longer read any received data, the TCP SHOULD, per section
2258 4.2.2.13 of RFC 1122, send a RST if there is any unread received
2259 data, or if any new data is received. A TCP that fails to do so
2260 exhibits "Failure to RST on close with data pending".

2262 Note that, for some TCPs, this situation can be caused by an
2263 application "crashing" while a peer is sending data.

2265 We have observed a number of TCPs that exhibit this problem. The
2266 problem is less serious if any subsequent data sent to the now-
2267 closed connection endpoint elicits a RST (see illustration below).

2269 Significance
2270 This problem is most significant for endpoints that engage in large
2271 numbers of connections, as their ability to do so will be curtailed
2272 as they leak away resources.

2274 Implications
2275 Failure to reset the connection can lead to permanently hung
2276 connections, in which the remote endpoint takes no further action
2277 to tear down the connection because it is waiting on the local TCP
2278 to first take some action. This is particularly the case if the
2279 local TCP also allows the advertised window to go to zero, and
2280 fails to tear down the connection when the remote TCP engages in
2281 "persist" probes (see example below).

2283 Relevant RFCs
2284 RFC 1122 section 4.2.2.13. Also, 4.2.2.17 for the zero-window

2286 ID Known TCP Implementation Problems August 1998

2288 probing discussion below.

2290 Trace file demonstrating it
2291 Made using tcpdump. No drop information available.

2293 13:11:46.04 A > B: S 458659166:458659166(0) win 4096
2294 (DF)
2295 13:11:46.04 B > A: S 792320000:792320000(0) ack 458659167
2296 win 4096
2297 13:11:46.04 A > B: . ack 1 win 4096 (DF)
2298 13:11:55.80 A > B: . 1:513(512) ack 1 win 4096 (DF)
2299 13:11:55.80 A > B: . 513:1025(512) ack 1 win 4096 (DF)
2300 13:11:55.83 B > A: . ack 1025 win 3072
2301 13:11:55.84 A > B: . 1025:1537(512) ack 1 win 4096 (DF)
2302 13:11:55.84 A > B: . 1537:2049(512) ack 1 win 4096 (DF)
2303 13:11:55.85 A > B: . 2049:2561(512) ack 1 win 4096 (DF)
2304 13:11:56.03 B > A: . ack 2561 win 1536
2305 13:11:56.05 A > B: . 2561:3073(512) ack 1 win 4096 (DF)
2306 13:11:56.06 A > B: . 3073:3585(512) ack 1 win 4096 (DF)
2307 13:11:56.06 A > B: . 3585:4097(512) ack 1 win 4096 (DF)
2308 13:11:56.23 B > A: . ack 4097 win 0
2309 13:11:58.16 A > B: . 4096:4097(1) ack 1 win 4096 (DF)
2310 13:11:58.16 B > A: . ack 4097 win 0
2311 13:12:00.16 A > B: . 4096:4097(1) ack 1 win 4096 (DF)
2312 13:12:00.16 B > A: . ack 4097 win 0
2313 13:12:02.16 A > B: . 4096:4097(1) ack 1 win 4096 (DF)
2314 13:12:02.16 B > A: . ack 4097 win 0
2315 13:12:05.37 A > B: . 4096:4097(1) ack 1 win 4096 (DF)
2316 13:12:05.37 B > A: . ack 4097 win 0
2317 13:12:06.36 B > A: F 1:1(0) ack 4097 win 0
2318 13:12:06.37 A > B: . ack 2 win 4096 (DF)
2319 13:12:11.78 A > B: . 4096:4097(1) ack 2 win 4096 (DF)
2320 13:12:11.78 B > A: . ack 4097 win 0
2321 13:12:24.59 A > B: . 4096:4097(1) ack 2 win 4096 (DF)
2322 13:12:24.60 B > A: . ack 4097 win 0
2323 13:12:50.22 A > B: . 4096:4097(1) ack 2 win 4096 (DF)
2324 13:12:50.22 B > A: . ack 4097 win 0

2326 Machine B in the trace above does not drop received data when the
2327 socket is "closed" by the application (in this case, the
2328 application process was terminated). This occurred at approximately
2329 13:12:06.36 and resulted in the FIN being sent in response to the
2330 close. However, because there is no longer an application to
2331 deliver the data to, the TCP should have instead sent a RST.

2333 Note: Machine A's zero-window probing is also broken. It is
2334 resending old data, rather than new data. Section 3.7 in RFC 793

2336 ID Known TCP Implementation Problems August 1998

2338 and Section 4.2.2.17 in RFC 1122 discuss zero-window probing.

2340 Trace file demonstrating better behavior
2341 Made using tcpdump. No drop information available.

2343 Better, but still not fully correct, behavior, per the discussion
2344 below. We show this behavior because it has been observed for a
2345 number of different TCP implementations.

2347 13:48:29.24 C > D: S 73445554:73445554(0) win 4096
2348 (DF)
2349 13:48:29.24 D > C: S 36050296:36050296(0) ack 73445555
2350 win 4096 (DF)
2351 13:48:29.25 C > D: . ack 1 win 4096 (DF)
2352 13:48:30.78 C > D: . 1:1461(1460) ack 1 win 4096 (DF)
2353 13:48:30.79 C > D: . 1461:2921(1460) ack 1 win 4096 (DF)
2354 13:48:30.80 D > C: . ack 2921 win 1176 (DF)
2355 13:48:32.75 C > D: . 2921:4097(1176) ack 1 win 4096 (DF)
2356 13:48:32.82 D > C: . ack 4097 win 0 (DF)
2357 13:48:34.76 C > D: . 4096:4097(1) ack 1 win 4096 (DF)
2358 13:48:34.84 D > C: . ack 4097 win 0 (DF)
2359 13:48:36.34 D > C: FP 1:1(0) ack 4097 win 4096 (DF)
2360 13:48:36.34 C > D: . 4097:5557(1460) ack 2 win 4096 (DF)
2361 13:48:36.34 D > C: R 36050298:36050298(0) win 24576
2362 13:48:36.34 C > D: . 5557:7017(1460) ack 2 win 4096 (DF)
2363 13:48:36.34 D > C: R 36050298:36050298(0) win 24576

2365 In this trace, the application process is terminated on Machine D
2366 at approximately 13:48:36.34. Its TCP sends the FIN with the
2367 window opened again (since it discarded the previously received
2368 data). Machine C promptly sends more data, causing Machine D to
2369 reset the connection since it cannot deliver the data to the
2370 application.
Ideally, Machine D SHOULD send a RST instead of
2371 dropping the data and re-opening the receive window.

2373 Note: Machine C's zero-window probing is broken in the same way as
2374 in the example above.

2376 Trace file demonstrating correct behavior
2377 Made using tcpdump. No losses reported.

2379 14:12:02.19 E > F: S 1143360000:1143360000(0) win 4096
2380 14:12:02.19 F > E: S 1002988443:1002988443(0) ack 1143360001
2381 win 4096 (DF)
2382 14:12:02.19 E > F: . ack 1 win 4096

2384 ID Known TCP Implementation Problems August 1998

2386 14:12:10.43 E > F: . 1:513(512) ack 1 win 4096
2387 14:12:10.61 F > E: . ack 513 win 3584 (DF)
2388 14:12:10.61 E > F: . 513:1025(512) ack 1 win 4096
2389 14:12:10.61 E > F: . 1025:1537(512) ack 1 win 4096
2390 14:12:10.81 F > E: . ack 1537 win 2560 (DF)
2391 14:12:10.81 E > F: . 1537:2049(512) ack 1 win 4096
2392 14:12:10.81 E > F: . 2049:2561(512) ack 1 win 4096
2393 14:12:10.81 E > F: . 2561:3073(512) ack 1 win 4096
2394 14:12:11.01 F > E: . ack 3073 win 1024 (DF)
2395 14:12:11.01 E > F: . 3073:3585(512) ack 1 win 4096
2396 14:12:11.01 E > F: . 3585:4097(512) ack 1 win 4096
2397 14:12:11.21 F > E: . ack 4097 win 0 (DF)
2398 14:12:15.88 E > F: . 4097:4098(1) ack 1 win 4096
2399 14:12:16.06 F > E: . ack 4097 win 0 (DF)
2400 14:12:20.88 E > F: . 4097:4098(1) ack 1 win 4096
2401 14:12:20.91 F > E: . ack 4097 win 0 (DF)
2402 14:12:21.94 F > E: R 1002988444:1002988444(0) win 4096

2404 When the application terminates at 14:12:21.94, F immediately sends
2405 a RST.

2407 Note: Machine E's zero-window probing is (finally) correct.

2409 How to detect
2410 The problem can often be detected by inspecting packet traces of a
2411 transfer in which the receiving application terminates abnormally.
2412 When doing so, there can be an ambiguity (if only looking at the
2413 trace) as to whether the receiving TCP did indeed have unread data
2414 that it could now no longer deliver. To provoke this behavior, it
2415 may help to suspend the receiving application so that it fails to
2416 consume any data, eventually exhausting the advertised window. At
2417 this point, since the advertised window is zero, we know that the
2418 receiving TCP has undelivered data buffered up. Terminating the
2419 application process then should suffice to test the correctness of
2420 the TCP's behavior.

2422 3.17.

2424 Name of Problem
2425 Options missing from TCP MSS calculation

2427 Classification
2428 Reliability / performance

2430 ID Known TCP Implementation Problems August 1998

2432 Description
2433 When a TCP determines how much data to send per packet, it
2434 calculates a segment size based on the MTU of the path. It must
2435 then subtract from that MTU the size of the IP and TCP headers in
2436 the packet. If IP options and TCP options are not taken into
2437 account correctly in this calculation, the resulting segment size
2438 may be too large. TCPs that miscompute the segment size in this
2439 way are said to exhibit "Options missing from TCP MSS calculation".

2441 Significance
2442 In some implementations, this causes the transmission of strangely
2443 fragmented packets. In some implementations with Path MTU (PMTU)
2444 discovery [RFC1191], this problem can actually result in a total
2445 failure to transmit any data at all, regardless of the environment
2446 (see below).

2448 Arguably, IP options appear only rarely in normal operations,
2449 especially since the wide deployment of firewalls.
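The calculation referred to above can be sketched directly from the
effective send MSS formula of RFC 1122, Section 4.2.2.6. The
function below is illustrative only; the names are hypothetical,
and the point is simply that both TCP options and IP options must
be subtracted:

   /* Eff.snd.MSS = min(SendMSS + 20, MMS_S) - TCPhdrsize - IPoptionsize
    * (RFC 1122, Section 4.2.2.6). Illustrative sketch only. */
   static unsigned eff_snd_mss(unsigned send_mss,     /* MSS option from peer */
                               unsigned mms_s,        /* max msg size from IP */
                               unsigned tcp_hdr_size, /* 20 + TCP options */
                               unsigned ip_opt_size)  /* IP options, if any */
   {
       unsigned limit = send_mss + 20;  /* peer MSS plus fixed TCP header */

       if (limit > mms_s)
           limit = mms_s;
       return limit - tcp_hdr_size - ip_opt_size;
   }

For example, with a SendMSS of 1460 and the 12-byte RFC 1323
timestamp option in every segment, this yields 1448 data bytes per
segment, the value seen in the traces of Sections 3.12 and 3.13;
omitting either subtraction produces the oversized packets described
under "Implications" below.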
2451 Implications
2452 In implementations using PMTU discovery, this problem can result in
2453 packets that are too large for the output interface, and that have
2454 the DF (don't fragment) bit set in the IP header. Thus, the IP
2455 layer on the local machine is not allowed to fragment the packet to
2456 send it out the interface. It instead informs the TCP layer of the
2457 correct MTU size of the interface; the TCP layer again miscomputes
2458 the MSS by failing to take into account the size of IP options; and
2459 the problem repeats, with no data flowing.

2461 Relevant RFCs
2462 RFC 1122 describes the calculation of the effective send MSS. RFC
2463 1191 describes Path MTU discovery.

2465 Trace file demonstrating it
2466 Trace file taken using tcpdump on host C. The first trace
2467 demonstrates the fragmentation that occurs without path MTU
2468 discovery:

2470 13:55:25.488728 A.65528 > C.discard:
2471 P 567833:569273(1440) ack 1 win 17520
2472
2473 (frag 20828:1472@0+)
2474 (ttl 62, optlen=8 LSRR{B#} NOP)

2476 ID Known TCP Implementation Problems August 1998

2478 13:55:25.488943 A > C:
2479 (frag 20828:8@1472)
2480 (ttl 62, optlen=8 LSRR{B#} NOP)

2482 13:55:25.489052 C.discard > A.65528:
2483 . ack 566385 win 60816
2484 (DF)
2485 (ttl 60, id 41266)

2487 Host A repeatedly sends 1440-octet data segments, but these are
2488 fragmented into two packets, one with 1432 octets of data, and
2489 another with 8 octets of data.

2491 The second trace demonstrates the failure to send any data
2492 segments, sometimes seen with hosts doing path MTU discovery:

2494 13:55:44.332219 A.65527 > C.discard:
2495 S 1018235390:1018235390(0) win 16384
2496 (DF)
2497 (ttl 62, id 20912, optlen=8 LSRR{B#} NOP)

2499 13:55:44.333015 C.discard > A.65527:
2500 S 1271629000:1271629000(0) ack 1018235391 win 60816
2501 (DF)
2502 (ttl 60, id 41427)

2504 13:55:44.333206 C.discard > A.65527:
2505 S 1271629000:1271629000(0) ack 1018235391 win 60816
2506 (DF)
2507 (ttl 60, id 41427)

2509 This is all of the activity seen on this connection. Eventually
2510 host C will time out attempting to establish the connection.

2512 How to detect
2513 The "netcat" utility is useful for generating source-routed
2514 packets:

2516 1% nc C discard
2517 (interactive typing)
2518 ^C
2519 2% nc C discard < /dev/zero
2520 ^C
2521 3% nc -g B C discard
2522 (interactive typing)
2523 ^C
2524 4% nc -g B C discard < /dev/zero

2526 ID Known TCP Implementation Problems August 1998

2528 ^C

2530 Lines 1 through 3 should generate appropriate packets, which can be
2531 verified using tcpdump. If the problem is present, line 4 should
2532 generate one of the two kinds of packet traces shown.

2534 How to fix
2535 The implementation should ensure that the effective send MSS
2536 calculation includes a term for the IP and TCP options, as mandated
2537 by RFC 1122.

2539 4. Security Considerations

2541 This version of this memo does not discuss any security-related
2542 implementation problems. Future versions most likely will, so
2543 security considerations will require revisiting.

2545 5. Acknowledgements

2547 Thanks to numerous correspondents on the tcp-impl mailing list for
2548 their input: Steve Alexander, Mark Allman, Larry Backman, Jerry Chu,
2549 Alan Cox, Kevin Fall, Richard Fox, Jim Gettys, Rick Jones, Allison
2550 Mankin, Neal McBurnett, Perry Metzger, der Mouse, Thomas Narten,
2551 Andras Olah, Steve Parker, Francesco Potorti`, Luigi Rizzo, Allyn
2552 Romanow, Jeff Semke, Al Smith, Jerry Toporek, Joe Touch, and Curtis
2553 Villamizar.
2555 Thanks also to Josh Cohen for the traces documenting the "Failure to
2556 send a RST after Half Duplex Close" problem.

2558 6. References

2560 [Allman97]
2561 M. Allman, "Fixing Two BSD TCP Bugs," Technical Report CR-204151,
2562 NASA Lewis Research Center, October 1997.
2563 http://gigahertz.lerc.nasa.gov/~mallman/papers/bug.ps

2565 [Allman98]
2566 M. Allman, S. Floyd, and C. Partridge, "Increasing TCP's Initial
2567 Window," Internet-Draft draft-floyd-incr-init-win-03.txt, May 1998.

2569 [RFC1122]

2571 ID Known TCP Implementation Problems August 1998

2573 R. Braden, Editor, "Requirements for Internet Hosts --
2574 Communication Layers," Oct. 1989.

2576 [RFC2119]
2577 S. Bradner, "Key words for use in RFCs to Indicate Requirement
2578 Levels," Mar. 1997.

2580 [Brakmo95]
2581 L. Brakmo and L. Peterson, "Performance Problems in BSD4.4 TCP,"
2582 ACM Computer Communication Review, 25(5):69-86, 1995.

2584 [Dawson97]
2585 S. Dawson, F. Jahanian, and T. Mitton, "Experiments on Six
2586 Commercial TCP Implementations Using a Software Fault Injection
2587 Tool," to appear in Software Practice & Experience, 1997. A
2588 technical report version of this paper can be obtained at
2589 ftp://rtcl.eecs.umich.edu/outgoing/sdawson/CSE-TR-298-96.ps.gz.

2591 [Fall96]
2592 K. Fall and S. Floyd, "Simulation-based Comparisons of Tahoe, Reno,
2593 and SACK TCP," ACM Computer Communication Review, 26(3):5-21, 1996.

2595 [Hoe96]
2596 J. Hoe, "Improving the Start-up Behavior of a Congestion Control
2597 Scheme for TCP," Proc. SIGCOMM '96.

2599 [Jacobson88]
2600 V. Jacobson, "Congestion Avoidance and Control," Proc. SIGCOMM '88.
2601 ftp://ftp.ee.lbl.gov/papers/congavoid.ps.Z

[RFC1323]
V. Jacobson, R. Braden, and D. Borman, "TCP Extensions for High
Performance," May 1992.

2603 [RFC2018]
2604 M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective
2605 Acknowledgement Options," Oct. 1996.

2607 [RFC1191]
2608 J. Mogul and S. Deering, "Path MTU discovery," Nov. 1990.

2610 [RFC896]
2611 J. Nagle, "Congestion Control in IP/TCP Internetworks," Jan. 1984.

2613 [Paxson97]
2614 V. Paxson, "Automated Packet Trace Analysis of TCP
2615 Implementations," Proc. SIGCOMM '97, available from
2616 ftp://ftp.ee.lbl.gov/papers/vp-tcpanaly-sigcomm97.ps.Z.

2618 [RFC793]
2619 J. Postel, Editor, "Transmission Control Protocol," Sep. 1981.

2621 ID Known TCP Implementation Problems August 1998

2623 [RFC2001]
2624 W. Stevens, "TCP Slow Start, Congestion Avoidance, Fast Retransmit,
2625 and Fast Recovery Algorithms," Jan. 1997.

2627 [Stevens94]
2628 W. Stevens, "TCP/IP Illustrated, Volume 1," Addison-Wesley
2629 Publishing Company, Reading, Massachusetts, 1994.

2631 [Wright95]
2632 G. Wright and W. Stevens, "TCP/IP Illustrated, Volume 2," Addison-
2633 Wesley Publishing Company, Reading, Massachusetts, 1995.

2635 7. Authors' Addresses

2637 Vern Paxson
2638 Network Research Group
2639 Lawrence Berkeley National Laboratory
2640 Berkeley, CA 94720
2641 USA
2642 Phone: +1 510/486-7504

2644 Mark Allman
2645 NASA Lewis Research Center/Sterling Software
2646 21000 Brookpark Road
2647 MS 54-2
2648 Cleveland, OH 44135
2649 USA
2650 Phone: +1 216/433-6586

2652 Scott Dawson
2653 Real-Time Computing Laboratory
2654 EECS Building
2655 University of Michigan
2656 Ann Arbor, MI 48109-2122
2657 USA
2658 Phone: +1 313/763-5363

2660 Jim Griner
2661 NASA Lewis Research Center
2662 21000 Brookpark Road
2663 MS 54-2
2664 Cleveland, OH 44135
2665 USA
2666 Phone: +1 216/433-5787

2668 Ian Heavens

2670 ID Known TCP Implementation Problems August 1998

2672 Spider Software Ltd.
2673 8 John's Place, Leith 2674 Edinburgh EH6 7EL 2675 UK 2676 Phone: +44 131/475-7015 2678 Kevin Lahey 2679 NASA Ames Research Center/MRJ 2680 MS 258-6 2681 Moffett Field, CA 94035 2682 USA 2683 Phone: +1 650/604-4334 2685 Jeff Semke 2686 Pittsburgh Supercomputing Center 2687 4400 Fifth Ave 2688 Pittsburgh, PA 15213 2689 USA 2690 Phone: +1 412/268-4960 2692 Bernie Volz 2693 Process Software Corporation 2694 959 Concord Street 2695 Framingham, MA 01701 2696 USA 2697 Phone: +1 508/879-6994