Internet Engineering Task Force                                 S. Floyd
INTERNET-DRAFT                                                 E. Kohler
draft-irtf-tmrg-tools-02.txt                                     Editors
Expires: December 2006                                       1 June 2006

      Tools for the Evaluation of Simulation and Testbed Scenarios

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.
Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire in December 2006.

Abstract

This document describes tools for the evaluation of simulation and testbed scenarios used in research on Internet congestion control mechanisms. We believe that research in congestion control mechanisms has been seriously hampered by the lack of good models underpinning analysis, simulation, and testbed experiments, and that tools for the evaluation of simulation and testbed scenarios can help in the construction of better scenarios, based on better underlying models. One use of the tools described in this document is in comparing key characteristics of test scenarios with known characteristics from the diverse and ever-changing real world. Tools characterizing the aggregate traffic on a link include the distribution of per-packet round-trip times, the distribution of packet sequence numbers, and the like. Tools characterizing end-to-end paths include drop rates as a function of packet size and of burst size, the synchronization ratio between two end-to-end TCP flows, and the like. For each characteristic, we describe what aspects of the scenario determine this characteristic, how the characteristic can affect the results of simulations and experiments for the evaluation of congestion control mechanisms, and what is known about this characteristic in the real world. We also explain why the use of such tools can add considerable power to our understanding and evaluation of simulation and testbed scenarios.

Table of Contents

1. Introduction
2. Conventions
3. Tools
   3.1. Characterizing Aggregate Traffic on a Link
   3.2. Characterizing an End-to-End Path
   3.3. Other Characteristics
4. Distribution of Per-Packet Round-Trip Times
5. Distribution of Packet Sequence Numbers
6. The Distribution of Packet Sizes
7. The Ratio Between Forward-Path and Reverse-Path Traffic
8. The Distribution of Per-Packet Peak Flow Rates
9. The Distribution of Transport Protocols
10. The Synchronization Ratio
11. Drop or Mark Rates as a Function of Packet Size
12. Drop Rates as a Function of Burst Size
13. Drop Rates as a Function of Sending Rate
14. Congestion Control Mechanisms for Traffic, along with Sender and Receiver Buffer Sizes
15. Characterization of Congested Links in Terms of Bandwidth and Typical Levels of Congestion
16. Characterization of Challenging Lower Layers
17. Characterization of Network Changes Affecting Congestion
18. Using the Tools Presented in this Document
19. Related Work
20. Conclusions
21. Security Considerations
22. IANA Considerations
23. Acknowledgements
Informative References
Editors' Addresses
Full Copyright Statement
Intellectual Property

TO BE DELETED BY THE RFC EDITOR UPON PUBLICATION:

Changes from draft-irtf-tmrg-tools-01.txt:

* Added section on "Drop Rates as a Function of Sending Rate."

* Added a number of new references.

1. Introduction

This document discusses tools for the evaluation of simulation and testbed scenarios used in research on Internet congestion control mechanisms. These tools include but are not limited to measurement tools; the tools discussed in this document are largely ways of characterizing aggregate traffic on a link, or characterizing the end-to-end path. One use of these tools is for understanding key characteristics of test scenarios; many characteristics, such as the distribution of per-packet round-trip times on the link, don't come from a single input parameter but are determined by a range of inputs. A second use of the tools is to compare key characteristics of test scenarios with what is known of the same characteristics of the past and current Internet, and with what can be conjectured about these characteristics of future networks. This document follows the general approach of "Internet Research Needs Better Models" [FK02].

As an example of the power of tools for characterizing scenarios, a great deal is known about the distribution of connection sizes on a link, or equivalently, the distribution of per-packet sequence numbers. It has been conjectured that a heavy-tailed distribution of connection sizes is an invariant feature of Internet traffic. A test scenario with mostly long-lived traffic, or with a mix of only long-lived and very short flows, does not have a realistic distribution of connection sizes, and can give unrealistic results in simulations or experiments evaluating congestion control mechanisms. For instance, the distribution of packet sequence numbers makes clear the fraction of traffic on a link that comes from medium-sized connections, e.g., those with packet sequence numbers from 100 to 1000. These medium-sized connections can slow-start up to a large congestion window, possibly coming to an abrupt stop soon afterwards, contributing significantly to the burstiness of the aggregate traffic and to the problems facing congestion control.
In the sections below we discuss a number of tools for describing and evaluating scenarios, show how these characteristics can affect the results of research on congestion control mechanisms, and summarize what is known about these characteristics in real-world networks.

2. Conventions

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119].

3. Tools

The tools or characteristics that we discuss are the following.

3.1. Characterizing Aggregate Traffic on a Link

o Distribution of per-packet round-trip times.

o Distribution of packet sequence numbers.

o Distribution of packet sizes.

o Ratio between forward-path and reverse-path traffic.

o Distribution of peak flow rates.

o Distribution of transport protocols.

3.2. Characterizing an End-to-End Path

o Synchronization ratio.

o Drop rates as a function of packet size.

o Drop rates as a function of burst size.

o Drop rates as a function of sending rate.

o Degree of packet drops.

o Range of queueing delay.

3.3. Other Characteristics

o Congestion control mechanisms for traffic, along with sender and receiver buffer sizes.

o Characterization of congested links in terms of bandwidth and typical levels of congestion (in terms of packet drop rates).

o Characterization of congested links in terms of buffer size.

o Characterization of challenging lower layers in terms of reordering, delay variation, packet corruption, and the like.

o Characterization of network changes affecting congestion, such as routing changes or link outages.

Below we discuss each characteristic in turn, giving the definition, the factors determining that characteristic, the effect on congestion control metrics, and what is known so far from measurement studies in the Internet.

4. Distribution of per-packet round-trip times

Definition: The distribution of per-packet round-trip times on a link is defined formally by assigning to each packet the most recent round-trip time measured for that end-to-end connection. In practice, coarse-grained information is generally sufficient; even though there is significant variability in round-trip times within a TCP connection [AKSJ03], it is sufficient to assign to each packet the first round-trip time measurement for that connection, or the current round-trip time estimate maintained by the TCP connection.

Determining factors: The distribution of per-packet round-trip times on a link is determined by end-to-end propagation delays, by queueing delays along end-to-end paths, and by the congestion control mechanisms used by the traffic. For example, for a scenario using TCP, TCP connections with smaller round-trip times will receive a proportionally larger fraction of the traffic than competing TCP connections with larger round-trip times, all else being equal, because TCP's dynamics favor flows with smaller round-trip times. This generally shifts the distribution of per-packet RTTs lower relative to the distribution of per-connection RTTs, since short-RTT connections contribute more packets.
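As an illustration of the definition above, the following minimal sketch (in Python; the reduced input format is our own, not taken from any of the tools cited below) tabulates the per-packet RTT distribution once a trace has been reduced to one (connection, RTT) record per packet, and shows how short-RTT connections, having more packets, pull the per-packet distribution below the per-connection one.

   # Sketch: per-packet RTT distribution from per-connection data.
   # Assumes the trace has already been reduced to one record per
   # packet, each carrying the first RTT measured for its connection
   # (a hypothetical input format; see [JD02] for passive per-
   # connection RTT estimation from a real trace).

   from bisect import bisect_right

   def per_packet_rtt_cdf(packets, thresholds_ms=(100, 200, 500)):
       """packets: iterable of (conn_id, rtt_ms) pairs, one per packet.
       Returns the fraction of packets with RTT at most each threshold."""
       rtts = sorted(rtt for _, rtt in packets)
       n = len(rtts)
       return {t: bisect_right(rtts, t) / n for t in thresholds_ms}

   # Example: two short-RTT connections sending more packets than one
   # long-RTT connection shift the per-packet distribution downward
   # relative to the per-connection distribution.
   trace = [("a", 40)] * 60 + [("b", 80)] * 30 + [("c", 300)] * 10
   print(per_packet_rtt_cdf(trace))  # {100: 0.9, 200: 0.9, 500: 1.0}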
Effect on congestion control metrics: The distribution of per-packet round-trip times on a link affects the burstiness of the aggregate traffic, and therefore can affect congestion control performance in a range of areas such as delay/throughput tradeoffs. The distribution of per-packet round-trip times can also affect metrics of fairness, degree of oscillations, and the like. For example, long-term oscillations of queueing delay are more likely to occur in scenarios with a narrow range of round-trip times [FK02].

Measurements: The distribution of per-packet round-trip times for TCP traffic on a link can be measured from a packet trace with the passive TCP round-trip time estimator from Jiang and Dovrolis [JD02]. [Add pointers to other estimators, such as ones mentioned in JD02. Add a pointer to Mark Allman's loss detection tool.] Their paper shows the distribution of per-packet round-trip times for TCP packets for a number of different links. For the links measured, the percentage of packets with round-trip times of at most 100 ms ranged from 30% to 80%, and the percentage of packets with round-trip times of at most 200 ms ranged from 55% to 90%, depending on the link.

In the NS simulator, the distribution of per-packet round-trip times for TCP packets on a link can be reported by the queue monitor, using TCP's estimated round-trip time added to packet headers. This is illustrated in the validation test "./test-all-simple stats3" in the directory tcl/test.

Scenarios: [FK02] shows a relatively simple scenario, with a dumbbell topology with four access links on each end, that gives a fairly realistic range of round-trip times. [Look for the other citations to add.]

5. Distribution of packet sequence numbers

Definition: The distribution of packet sequence numbers on a link is defined by giving each packet a sequence number, where the first packet in a connection has sequence number 1, the second packet has sequence number 2, and so on. The distribution of packet sequence numbers can be derived in a straightforward manner from the distribution of connection sizes, and vice versa; however, the distribution of connection sizes is better suited for traffic generators, while the distribution of packet sequence numbers is better suited for measuring and illustrating the packets actually seen on a link over a fixed interval of time. There has been a considerable body of research over the last ten years on the heavy-tailed distribution of connection sizes for traffic on the Internet. [CBC95] [Add citations.]

Determining factors: The distribution of connection sizes is largely determined by the traffic generators used in a scenario. For example, is there a single traffic generator characterized by a distribution of connection sizes? A mix of long-lived and web traffic, with the web traffic characterized by a distribution of connection sizes? Or something else?

Effect on congestion control metrics: The distribution of packet sequence numbers affects the burstiness of aggregate traffic on a link, thereby affecting all congestion control metrics for which this is a factor. As an example, [FK02] illustrates that the traffic mix can affect the queue dynamics on a congested link.

[Find more to cite, about the effect of the distribution of packet sequence numbers on congestion control metrics.]
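To make the conversion mentioned in the definition above concrete, here is a minimal sketch using an invented two-point connection-size distribution; the fraction of all packets carrying sequence number k is P(size >= k) / E[size].

   # Sketch: deriving the distribution of packet sequence numbers from
   # a distribution of connection sizes (in packets).  The connection-
   # size pmf below is a made-up example, not a measured distribution.

   def seq_number_fractions(size_pmf, max_seq):
       """size_pmf: dict mapping connection size (packets) -> probability.
       Returns the fraction of all packets having each sequence number k:
       P(packet has seq k) = P(size >= k) / E[size]."""
       mean = sum(n * p for n, p in size_pmf.items())
       return {k: sum(p for n, p in size_pmf.items() if n >= k) / mean
               for k in range(1, max_seq + 1)}

   # Example: mostly 2-packet connections plus a few 1000-packet ones.
   pmf = {2: 0.99, 1000: 0.01}
   f = seq_number_fractions(pmf, 1000)
   # Fraction of packets with sequence numbers above 100, i.e. carried
   # by the heavy tail of long connections:
   print(sum(f[k] for k in range(101, 1001)))
   # ~0.75: three quarters of the packets come from the 1% of
   # connections that are long.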
[Add a paragraph about the impact of medium-size flows.]

[Add a paragraph about the impact of flows starting and stopping.]

[Add a warning about scenarios that use only long-lived flows, or a mix of long-lived and very short flows.]

Measurements: [Cite some of the literature.]

Traffic generators: Some of the available traffic generators are listed on the web site "Traffic Generators for Internet Traffic" [TG]. This includes pointers to traffic generators for peer-to-peer traffic, traffic from online games, and traffic from Distributed Denial of Service (DDoS) attacks.

In the NS simulator, the distribution of packet sequence numbers for TCP packets on a link can be reported by the queue monitor at a router. This is illustrated in the validation test "./test-all-simple stats3" in the directory tcl/test.

6. The Distribution of Packet Sizes

Definition: The distribution of packet sizes is defined in a straightforward way, using packet sizes in bytes.

Determining factors: The distribution of packet sizes is determined by the traffic mix, by the path MTUs, and by the packet sizes used by the transport-level senders.

For a scenario characterized by a `forward path' (e.g., left to right on a particular link) and a `reverse path' (e.g., right to left on the same link), the distribution of packet sizes on a link is also determined by the mix of forward-path and reverse-path TCP traffic in that scenario. For such a scenario, the forward-path TCP traffic contributes data packets to the forward link and acknowledgment packets to the reverse link, while the reverse-path TCP traffic contributes small acknowledgment packets to the forward link. The ratio between TCP data and TCP ACK packets on a link can therefore be used as some indication of the ratio between forward-path and reverse-path TCP traffic.

Effect on congestion control metrics: The distribution of packet sizes on a link is an indicator of the ratio of forward-path to reverse-path TCP traffic in that network. The amount of reverse-path traffic determines the loss and queueing delay experienced by acknowledgement packets on the reverse path, significantly affecting the burstiness of the aggregate traffic on the forward path. [In what other ways does the distribution of packet sizes affect congestion control metrics?]

Measurements: There has been a wealth of measurements over time on the packet size distribution of traffic [A00], [HMTG01]. These measurements are generally consistent with a model in which roughly 10% of TCP connections use an MSS of roughly 500 bytes, and the other 90% of TCP connections use an MSS of 1460 bytes.

7. The Ratio Between Forward-path and Reverse-path Traffic

Definition: For a scenario characterized by a `forward path' (e.g., left to right on a particular link) and a `reverse path' (e.g., right to left on the same link), the ratio between forward-path and reverse-path traffic can be defined as the ratio between the forward-path traffic in bps and the reverse-path traffic in bps.

Determining factors: The ratio between forward-path and reverse-path traffic is determined largely by the traffic mix.
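A minimal sketch of how both the bps ratio defined above and the data/ACK indicator from Section 6 might be computed from a packet log (the log format and the numbers are invented for illustration):

   # Sketch: two indicators discussed above, computed from a toy
   # packet log.  Each record is (direction, transport, payload_bytes);
   # the log format is hypothetical.

   def traffic_ratios(log):
       """Returns (forward/reverse ratio in bytes, data/ACK packet ratio
       on the forward link, taking pure ACKs as zero-payload TCP packets)."""
       fwd = sum(b for d, t, b in log if d == "fwd")
       rev = sum(b for d, t, b in log if d == "rev")
       data = sum(1 for d, t, b in log if d == "fwd" and t == "tcp" and b > 0)
       acks = sum(1 for d, t, b in log if d == "fwd" and t == "tcp" and b == 0)
       return fwd / max(rev, 1), data / max(acks, 1)   # guard empty cases

   # Example: equal forward and reverse data traffic; with delayed ACKs,
   # the forward link carries one pure ACK for every two reverse data
   # packets.
   log = ([("fwd", "tcp", 1460)] * 90 + [("fwd", "tcp", 0)] * 45 +
          [("rev", "tcp", 0)] * 45 + [("rev", "tcp", 1460)] * 90)
   print(traffic_ratios(log))   # (1.0, 2.0)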
Effect on congestion control metrics: Zhang, Shenker, and Clark showed in 1991 that for TCP, the amount of reverse-path traffic affects the ACK compression and the packet drop rate for TCP acknowledgement packets, significantly affecting the burstiness of TCP traffic on the forward path [ZSC91]. The queueing delay on the reverse path also affects the performance of delay-based congestion control mechanisms, if the delay is computed from round-trip times. This has been shown by Grieco and Mascolo in [GM04] and by Prasad, Jain, and Dovrolis in [PJD04].

Measurements: There is a need for measurements of the range of ratios between forward-path and reverse-path traffic for congested links. In particular, for TCP traffic traversing congested link X, what is the likelihood that the acknowledgement traffic will encounter congestion (i.e., queueing delay, packet drops) somewhere on the reverse path as well?

As discussed in Section 6, the distribution of packet sizes on a link can be used as an indicator of the ratio of forward-path to reverse-path TCP traffic in that network.

8. The Distribution of Per-Packet Peak Flow Rates

Definition: The distribution of peak flow rates is defined by assigning to each packet the peak sending rate, in bytes per second, of that packet's connection, where the peak sending rate is defined over 0.1-second intervals. The distribution of peak flow rates gives some indication of the ratio of "alpha" and "beta" traffic on a link, where alpha traffic on a congested link is defined as traffic with that link as its main bottleneck, while beta traffic on the link has its primary bottleneck elsewhere along the path [RSB01].

Determining factors: The distribution of peak flow rates is determined by flows with bottlenecks elsewhere along their end-to-end paths, e.g., flows with low-bandwidth access links. The distribution of peak flow rates is also affected by applications with limited sending rates.

Effect on congestion control metrics: The distribution of peak flow rates affects the burstiness of aggregate traffic, with low-peak-rate traffic decreasing the aggregate burstiness and adding to the traffic's tractability.

Measurements: [RSB01]. The distribution of peak rates can be expected to change over time, as there are increasing numbers of high-bandwidth access links to the home, and of high-bandwidth Ethernet links at work and at other institutions.

Simulators: [For NS, add a pointer to the DelayBox, "http://dirt.cs.unc.edu/delaybox/", for more easily simulating low-bandwidth access links for flows.]

Testbeds: In testbeds, Dummynet [Dummynet] and NISTNet [NISTNet] provide convenient ways to emulate paths with different limited peak rates.

9. The Distribution of Transport Protocols

Definition: The distribution of transport protocols on a congested link is defined in a straightforward way, with each packet labeled with its transport protocol (e.g., TCP, UDP). The distribution is often given both in terms of packets and in terms of bytes.

For UDP packets, it might be more helpful to classify them by port number, or by assumed application (e.g., DNS, RIP, games, Windows Media, RealAudio, RealVideo) [MAWI]. Other traffic includes ICMP, IPSEC, and the like. In the future there could be traffic from SCTP, DCCP, or other transport protocols.
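A minimal sketch of tabulating this distribution in both packets and bytes from a toy log (the records are invented); it also shows why the two views can differ substantially:

   # Sketch: distribution of transport protocols, in packets and in
   # bytes, from a toy packet log of (protocol, size) pairs.

   from collections import Counter

   def protocol_mix(log):
       pkts, byts = Counter(), Counter()
       for proto, size in log:
           pkts[proto] += 1
           byts[proto] += size
       n, b = sum(pkts.values()), sum(byts.values())
       return ({p: c / n for p, c in pkts.items()},
               {p: c / b for p, c in byts.items()})

   log = [("tcp", 1460)] * 80 + [("tcp", 40)] * 40 + [("udp", 200)] * 30
   by_packets, by_bytes = protocol_mix(log)
   print(by_packets)  # UDP is 20% of packets ...
   print(by_bytes)    # ... but only about 5% of bytes.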
Effect on congestion control metrics: The distribution of transport protocols affects metrics relating to the effectiveness of AQM mechanisms on a link.

Measurements: In the past, TCP traffic has typically accounted for 90% to 95% of the bytes on a link [UW02], [UA01]. [Get updated citations for this.] Measurement studies show that TCP traffic from web servers almost always uses conformant TCP congestion control procedures [MAF05].

10. The Synchronization Ratio

Definition: The synchronization ratio is the degree of synchronization of loss events between two TCP flows on the same path; it is thus a characteristic of an end-to-end path. For one TCP flow of a pair, the synchronization ratio is the fraction of that flow's loss events for which the second flow also has a loss event within one round-trip time. Each connection in a flow pair has a separate synchronization ratio, and the overall synchronization ratio of the pair of flows is the higher of the two ratios. When measuring the synchronization ratio, it is preferable to start the two TCP flows at slightly different times, with large receive windows.

Determining factors: The synchronization ratio is determined largely by the traffic mix on the congested link, and by the AQM mechanism (or lack of an AQM mechanism).

Different types of TCP flows are also likely to have different synchronization measures. For example, two HighSpeed TCP flows might have higher synchronization measures than two Standard TCP flows on the same path, because of their more aggressive window increase rates. Raina, Towsley, and Wischik [RTW05] have discussed the relationships between synchronization and TCP's increase and decrease parameters.

Effect on congestion control metrics: The synchronization ratio affects convergence times for high-bandwidth TCPs. Convergence times are known to be poor for some high-bandwidth protocols in environments with high levels of synchronization. [Cite the papers by Leith and Shorten.] However, it is not clear whether these environments with high levels of synchronization are realistic.

Wischik and McKeown [WM05] have shown that the level of synchronization affects the buffer requirements at congested routers. Baccelli and Hong [BH02] have a model showing the effect of the synchronization ratio on aggregate throughput.

Measurements: Grenville Armitage and Qiang Fu have performed initial measurements of synchronization in the Internet, using Standard TCP flows, and have found very low levels of synchronization.

In a discussion of the relationship between stability and desynchronization, Raina, Towsley, and Wischik [RTW05] report that "synchronization has been reported again and again in simulations". In contrast, synchronization has not been reported again and again in the real-world Internet.

Appenzeller, Keslassy, and McKeown in [AKM04] report the following: "Flows are not synchronized in a backbone router carrying thousands of flows with varying RTTs. Small variations in RTT or processing time are sufficient to prevent synchronization [QZK01]; and the absence of synchronization has been demonstrated in real networks [F02,IMD01]."
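For concreteness, here is a minimal sketch of the synchronization ratio as defined in this section, computed from two flows' loss-event timestamps (the timestamps and the RTT value are invented):

   # Sketch: synchronization ratio for a pair of flows, from lists of
   # loss-event times (seconds) and an RTT estimate.

   def sync_ratio(losses_a, losses_b, rtt):
       """Fraction of one flow's loss events matched by a loss event of
       the other flow within one RTT; the pair's overall ratio is the
       higher of the two per-flow ratios, per the definition above."""
       def one_way(xs, ys):
           if not xs:
               return 0.0
           return sum(any(abs(x - y) <= rtt for y in ys) for x in xs) / len(xs)
       return max(one_way(losses_a, losses_b), one_way(losses_b, losses_a))

   a = [1.0, 3.2, 5.1, 8.4]
   b = [1.05, 5.2, 12.0]
   print(sync_ratio(a, b, rtt=0.2))
   # ~0.67: two of b's three loss events are matched within one RTT.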
[Appenzeller et al., Sizing Router Buffers, reports that synchronization is rare as the number of competing flows increases. Kevin Jeffay has some results on synchronization also.]

Needed: We need measurements of the synchronization ratio for flows that use high-bandwidth protocols over high-bandwidth paths, given typical levels of competing traffic and typical queueing mechanisms at routers (whatever those are), to see whether there are higher levels of synchronization with high-bandwidth protocols such as HighSpeed TCP, Fast TCP, and the like, which are more aggressive than Standard TCP. The assumption would be that in many environments, high-bandwidth protocols have higher levels of synchronization than flows using Standard TCP.

11. Drop or Mark Rates as a Function of Packet Size

Definition: Drop rates as a function of packet size are defined by the actual drop rates for packets of different sizes on an end-to-end path, or on a congested link, over a particular time interval. In some cases, e.g., Drop-Tail queues in units of packets, general statements can be made: e.g., that large and small packets will experience the same packet drop rates. In other cases, e.g., Drop-Tail queues in units of bytes, no such general statement can be made, and the drop rate as a function of packet size is determined in part by the traffic mix at the congested link at that point in time.

Determining factors: The drop rate as a function of packet size is determined in part by the queue architecture. E.g., is the Drop-Tail queue in units of packets, of bytes, of 60-byte buffers, or of a mix of buffer sizes? Is the AQM mechanism in packet mode, dropping each packet with the same probability, or in byte mode, with the probability of dropping or marking a packet proportional to the packet size in bytes?

The drop rate as a function of packet size is also affected by the presence of preferential scheduling for small packets, or by differential scheduling for packets from different flows (e.g., per-flow scheduling, or differential scheduling for UDP and TCP traffic).

In many environments, the drop rate as a function of packet size will be heavily affected by the traffic mix at a particular time. For example, is the traffic mix dominated by large packets, or by smaller ones? In some cases, the overall packet drop rate could also affect the relative drop rates for different packet sizes.

In wireless networks, the drop rate as a function of packet size is also determined by the packet corruption rate as a function of packet size. [Cite Deborah Pinck's papers on Satellite-Enhanced Personal Communications Experiments and on Experimental Results from Internetworking Data Applications Over Various Wireless Networks Using a Single Flexible Error Control Protocol.] [Cite the general literature.]

Effect on congestion control metrics: The drop rate as a function of packet size has a significant effect on the performance of congestion control for VoIP and other small-packet flows. [Citation: "TFRC for Voice: the VoIP Variant", draft-ietf-dccp-tfrc-voip-02.txt, and earlier papers.] The drop rate as a function of packet size also has an effect on TCP performance, as it affects the drop rates for TCP's SYN and ACK packets. [Citation: Jeffay and others.]
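A minimal sketch of how the per-size drop-rate comparison in the measurement approach described below might be tabulated (the send and drop counts are invented):

   # Sketch: drop rates by packet size, from per-size send and loss
   # counts, e.g. from successive TCP connections run with 200-, 512-,
   # and 1460-byte packets as described below.

   def drop_rates_by_size(counts):
       """counts: dict mapping packet size -> (packets_sent, packets_dropped)."""
       return {size: dropped / sent for size, (sent, dropped) in counts.items()}

   counts = {200: (10000, 95), 512: (10000, 102), 1460: (10000, 310)}
   for size, rate in sorted(drop_rates_by_size(counts).items()):
       print(f"{size:5d} bytes: {100 * rate:.1f}% dropped")
   # Roughly equal rates would suggest a queue in units of packets;
   # rates rising with packet size would suggest a queue in units of
   # bytes (or byte-mode AQM).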
Measurements: We need measurements of the drop rate as a function of packet size over a wide range of paths, or for a wide range of congested links. For tests of relative drop rates on end-to-end paths, one possibility would be to run successive TCP connections with 200-byte, 512-byte, and 1460-byte packets, and to compare the packet drop rates. The ideal test would include running TCP connections on the reverse path, to measure the drop rates for the small ACK packets on the forward path. It would also be useful to characterize the difference in drop rates for 200-byte TCP packets and 200-byte UDP packets, even though some of this difference could be due to the relative burstiness of the different connections.

Ping experiments could also be used to measure drop rates as a function of packet size, but it would be necessary to make sure that the ping sending rates were adjusted to be TCP-friendly.

[Cite the known literature on drop rates as a function of packet size.]

Our conjecture is that there is a wide range of behaviors for this characteristic in the real world. Routers include Drop-Tail queues in units of packets, in units of bytes, and with buffer sizes in between; these will have quite different drop rates as a function of packet size. Some routers run RED in byte mode (the default for RED in Linux) and some run RED in packet mode (Cisco, we believe). This also affects drop rates as a function of packet size.

Some routers on congested access links use per-flow scheduling. In this case, does the per-flow scheduling have the goal of fairness in *bytes* per second or in *packets* per second? What effect does the per-flow scheduling have on the drop rate as a function of packet size, for packets in different flows (e.g., a small-packet VoIP flow competing against a large-packet TCP flow), or for packets within the same flow (small ACK packets and large data packets on a two-way TCP connection)?

12. Drop Rates as a Function of Burst Size

Definition: Burst tolerance, or drop rates as a function of burst size, can be defined in terms of an end-to-end path, or in terms of aggregate traffic on a congested link.

The burst tolerance of an end-to-end path is defined in terms of connections with different degrees of burstiness within a round-trip time. When packets are sent in bursts of N packets, does the drop rate vary as a function of N? For example, if the TCP sender sends small bursts of K packets, for K less than the congestion window, how does the size of K affect the loss rate? Similarly, for a ping tool sending pings at a certain rate in packets per second, one could see how the clustering of the ping packets in bursts of size K affects the packet drop rate. As always with such ping experiments, it would be important to adjust the sending rate to maintain a longer-term sending rate that was TCP-friendly.

Determining factors: The burst tolerance is determined largely by the AQM mechanisms of the congested routers on a path, and by the traffic mix. For a Drop-Tail queue with only a small number of competing flows, the burst tolerance is likely to be low; for AQM mechanisms where the packet drop rate is a function of the average queue size rather than the instantaneous queue size, the burst tolerance should be quite high.
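A minimal sketch of tabulating drop rate as a function of burst size K, e.g. from the clustered-ping experiment described above (all counts are invented; a real experiment would also keep the long-term sending rate TCP-friendly):

   # Sketch: drop rate vs. burst size, from a log of per-burst records
   # (burst_size, packets_dropped).

   from collections import defaultdict

   def drop_rate_vs_burst(records):
       sent = defaultdict(int)
       dropped = defaultdict(int)
       for k, d in records:
           sent[k] += k
           dropped[k] += d
       return {k: dropped[k] / sent[k] for k in sent}

   records = ([(1, 0)] * 200 + [(1, 1)] * 4 +
              [(4, 0)] * 40 + [(4, 1)] * 10 +
              [(16, 2)] * 12)
   print(drop_rate_vs_burst(records))
   # Drop rates rising with K indicate low burst tolerance (e.g., a
   # small Drop-Tail queue); flat rates suggest an AQM mechanism
   # reacting to the average rather than instantaneous queue size.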
Effect on congestion control metrics: The burst tolerance of the path or congested link can affect fairness between competing flows with different round-trip times; for example, Standard TCP flows with longer round-trip times are likely to have a more bursty arrival pattern at the congested link than that of Standard TCP flows with shorter round-trip times. As a result, in environments with low burst tolerance (e.g., scenarios with Drop-Tail queues), longer-round-trip-time TCP connections can see higher packet drop rates than other TCP connections, and receive an even smaller fraction of the link bandwidth than they would otherwise [FJ92] (Section 3.2). We note that some TCP traffic is inherently bursty, e.g., Standard TCP without rate-based pacing, particularly in the presence of dropped ACK packets or of ACK compression. The burst tolerance of a router can also affect the delay-throughput tradeoffs and packet drop rates of the path or of the congested link.

Measurements: One could measure the burst tolerance of an end-to-end path by running successive TCP connections, forcing bursts of size at least K by dropping an appropriate fraction of the ACK packets from the TCP receiver. Alternatively, if one had control of the TCP sender, one could modify the TCP sender to send bursts of K packets whenever the congestion window was K or more segments.

[Look at Crovella's paper on loss pairs.]

[Look at: M. Allman and E. Blanton, "Notes on Burst Mitigation for Transport Protocols", ACM Computer Communication Review, vol. 35(2), 2005.]

Making inferences about the AQM mechanism of the congested router on an end-to-end path: One potential use of measurement tools for determining the burst tolerance of an end-to-end path would be to make inferences about the presence or absence of an AQM mechanism at the congested link or links. As a simple test, one could run a TCP connection until the connection comes out of slow-start. If the receive window of the TCP connection was sufficiently large that the connection exited slow-start because of packet drops or marks rather than because of the limitation of the receive window, one could record the congestion window at the end of slow-start and the number of packets dropped from this window. A high packet drop rate might be more typical of a Drop-Tail queue with small-scale statistical multiplexing on the congested link, while a single packet drop coming out of slow-start would suggest an AQM mechanism at the congested link.

The synchronization measure could also add information about the likely presence or absence of AQM on the congested link(s) of an end-to-end path, with paths with higher levels of synchronization being more likely to have Drop-Tail queues with small-scale statistical multiplexing on the congested link(s).

[Cite the relevant literature about tools for determining the AQM mechanism on an end-to-end path.]

13. Drop Rates as a Function of Sending Rate

Definition: Drop rates as a function of sending rate are defined in terms of the drop behavior seen by a flow on an end-to-end path. That is, does the sending rate of an individual flow affect that flow's own packet drop rate, or is the packet drop rate largely independent of the flow's sending rate?
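As one illustration (our own, not a method taken from this document) of how this question might be quantified, the sketch below correlates per-trial sending rates with measured drop rates; a strong positive correlation suggests that the flow's rate drives its own drop rate (invented data):

   # Sketch: does the drop rate track the flow's own sending rate?
   # Pearson correlation over a set of trials at different rates.

   def correlation(xs, ys):
       n = len(xs)
       mx, my = sum(xs) / n, sum(ys) / n
       cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
       vx = sum((x - mx) ** 2 for x in xs) ** 0.5
       vy = sum((y - my) ** 2 for y in ys) ** 0.5
       return cov / (vx * vy)

   rates_mbps = [1, 2, 4, 8, 16]
   drop_rates = [0.010, 0.011, 0.018, 0.035, 0.070]
   print(correlation(rates_mbps, drop_rates))
   # Close to 1: the drop rate tracks the sending rate, suggesting
   # small-scale statistical multiplexing at the bottleneck; a value
   # near 0 would suggest large-scale statistical multiplexing.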
Determining factors: The sending rate of a flow affects its own packet drop rate in an environment with small-scale statistical multiplexing on the congested link. The packet drop rate is largely independent of the sending rate in an environment with large-scale statistical multiplexing, with many competing small flows at the congested link. Thus, the behavior of drop rates as a function of sending rate is a rough measure of the level of statistical multiplexing on the congested links of an end-to-end path.

Effect on congestion control metrics: The level of statistical multiplexing at the congested link can affect the performance of congestion control mechanisms in transport protocols. For example, delay-based congestion control is often better suited to environments with small-scale statistical multiplexing at the congested link, where the transport protocol responds to the delay caused by its own sending rate.

Measurements: In a simulation or testbed, the level of statistical multiplexing on the congested link can be observed directly. In the Internet, the level of statistical multiplexing on the congested links of an end-to-end path can be inferred indirectly through per-flow measurements, by observing whether the packet drop rate varies as a function of the sending rate of the flow.

14. Congestion Control Mechanisms for Traffic, along with Sender and Receiver Buffer Sizes

Effect on congestion control metrics: Please don't evaluate AQM mechanisms by using Reno TCP, or evaluate new transport protocols by comparing them with the performance of Reno TCP! For an explanation, see [FK02] (Section 3.4).

Measurements: See [MAF05].

15. Characterization of Congested Links in Terms of Bandwidth and Typical Levels of Congestion

[Pointers to the current state of our knowledge.]

16. Characterization of Challenging Lower Layers

[This will just be a short set of pointers to the relevant literature, which is quite extensive.]

17. Characterization of Network Changes Affecting Congestion

[Pointers to the current state of our knowledge.]

18. Using the Tools Presented in this Document

[To be done.]

19. Related Work

[Cite "On the Effective Evaluation of TCP" by Allman and Falk.]

20. Conclusions

[To be done.]

21. Security Considerations

There are no security considerations in this document.

22. IANA Considerations

There are no IANA considerations in this document.

23. Acknowledgements

Thanks to Xiaoliang (David) Wei for feedback and contributions to this document.

Informative References

[RFC 2119] S. Bradner, Key Words for Use in RFCs to Indicate Requirement Levels, RFC 2119, March 1997.

[MAWI] MAWI Working Group Traffic Archive, URL "http://tracer.csl.sony.jp/mawi/".

[AKM04] G. Appenzeller, I. Keslassy, and N. McKeown, Sizing Router Buffers, SIGCOMM 2004.

[AKSJ03] J. Aikat, J. Kaur, F.D. Smith, and K. Jeffay, Variability in TCP Round-trip Times, ACM SIGCOMM Internet Measurement Conference, Miami, FL, October 2003, pp. 279-284.

[A00] M. Allman, A Web Server's View of the Transport Layer, Computer Communication Review, 30(5), October 2000.

[BH02] F. Baccelli and D. Hong, AIMD, Fairness and Fractal Scaling of TCP Traffic, Infocom 2002.

[CBC95] C. Cunha, A. Bestavros, and M.
Crovella, "Characteristics of WWW Client-based Traces", BU Technical Report BUCS-95-010, 1995.

[Dummynet] L. Rizzo, Dummynet, URL "http://info.iet.unipi.it/~luigi/ip_dummynet/".

[F02] C. J. Fraleigh, Provisioning Internet Backbone Networks to Support Latency Sensitive Applications, PhD thesis, Stanford University, Department of Electrical Engineering, June 2002.

[FJ92] S. Floyd and V. Jacobson, On Traffic Phase Effects in Packet-Switched Gateways, Internetworking: Research and Experience, V.3 N.3, September 1992, pp. 115-156.

[FK02] S. Floyd and E. Kohler, Internet Research Needs Better Models, Hotnets-I, October 2002.

[GM04] L. Grieco and S. Mascolo, Performance Evaluation and Comparison of Westwood+, New Reno, and Vegas TCP Congestion Control, CCR, April 2004.

[HMTG01] C. Hollot, V. Misra, D. Towsley, and W. Gong, On Designing Improved Controllers for AQM Routers Supporting TCP Flows, IEEE Infocom, 2001.

[IMD01] G. Iannaccone, M. May, and C. Diot, Aggregate Traffic Performance with Active Queue Management and Drop From Tail, SIGCOMM Comput. Commun. Rev., 31(3):4-13, 2001.

[JD02] H. Jiang and C. Dovrolis, Passive Estimation of TCP Round-trip Times, Computer Communication Review, 32(3), July 2002.

[MAF05] A. Medina, M. Allman, and S. Floyd, Measuring the Evolution of Transport Protocols in the Internet, Computer Communication Review, April 2005.

[NISTNet] NIST Net, URL "http://snad.ncsl.nist.gov/itg/nistnet/".

[PJD04] R. Prasad, M. Jain, and C. Dovrolis, On the Effectiveness of Delay-Based Congestion Avoidance, PFLDnet 2004, February 2004.

[QZK01] L. Qiu, Y. Zhang, and S. Keshav, Understanding the Performance of Many TCP Flows, Comput. Networks, 37(3-4):277-306, 2001.

[RSB01] R. Riedi, S. Sarvotham, and R. Baraniuk, Connection-level Analysis and Modeling of Network Traffic, SIGCOMM Internet Measurement Workshop, 2001.

[RTW05] G. Raina, D. Towsley, and D. Wischik, Control Theory for Buffer Sizing, CCR, July 2005.

[TG] Traffic Generators for Internet Traffic Web Page, URL "http://www.icir.org/models/trafficgenerators.html".

[UA01] University of Auckland, Auckland-VI Trace Data, June 2001. URL "http://wans.cs.waikato.ac.nz/wand/wits/auck/6/".

[UW02] UW-Madison, Network Performance Statistics, October 2002. URL "http://wwwstats.net.wisc.edu/".

[WM05] D. Wischik and N. McKeown, Buffer Sizes for Core Routers, CCR, July 2005.

[ZSC91] L. Zhang, S. Shenker, and D.D. Clark, Observations on the Dynamics of a Congestion Control Algorithm: The Effects of Two-Way Traffic, SIGCOMM 1991.

Editors' Addresses

Sally Floyd
ICSI Center for Internet Research
1947 Center Street, Suite 600
Berkeley, CA 94704
USA

Eddie Kohler
4531C Boelter Hall
UCLA Computer Science Department
Los Angeles, CA 90095
USA

Full Copyright Statement

Copyright (C) The Internet Society (2006). This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.
This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights. Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.

Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.

The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard. Please address the information to the IETF at ietf-ipr@ietf.org.