Internet Engineering Task Force                                 M. Welzl
Internet-Draft                                        University of Oslo
Intended status: Informational                                    D. Ros
Expires: April 12, 2011                                 Telecom Bretagne
                                                         October 9, 2010

         A Survey of Lower-than-Best-Effort Transport Protocols
                     draft-ietf-ledbat-survey-01.txt

Abstract

This document provides a survey of transport protocols which are designed to have a smaller bandwidth and/or delay impact on standard TCP than standard TCP itself when they share a bottleneck with it. Such protocols could be used for low-priority "background" traffic, as they provide what is sometimes called a "less than" (or "lower than") best-effort service.

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 12, 2011.

Copyright Notice

Copyright (c) 2010 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Delay-based transport protocols
     2.1.  Accuracy of delay-based congestion predictors
     2.2.  Delay-based congestion control = LBE?
   3.  Non-delay-based transport protocols
   4.  Application layer approaches
   5.  Orthogonal work
   6.  Acknowledgements
   7.  IANA Considerations
   8.  Security Considerations
   9.  Informative References
   Authors' Addresses

1.  Introduction

This document presents a brief survey of proposals to attain a Less than Best Effort (LBE) service without help from routers. We loosely define an LBE service as a service which results in a smaller bandwidth and/or delay impact on standard TCP than standard TCP itself when sharing a bottleneck with it. We refer to systems that provide this service as Less than Best Effort (LBE) systems. Generally, LBE behavior can be achieved by reacting to queue growth earlier than standard TCP would, or by changing the congestion avoidance behavior of TCP without utilizing any additional implicit feedback. Some mechanisms achieve LBE behavior at the application layer, e.g., by changing the receiver window of standard TCP, and there is also a substantial amount of work that is related to the LBE concept but does not present a solution that can be installed in end hosts or expected to work over the Internet.
According to this classification, solutions have been categorized in this document as delay-based transport protocols, non-delay-based transport protocols, application layer approaches, and orthogonal work.

2.  Delay-based transport protocols

It is wrong to simply equate "little impact on standard TCP" with "a small sending rate". Unless the sender's maximum window is limited for some reason, and in the absence of ECN support, standard TCP will normally increase its rate until a queue overflows, causing one or more packets to be dropped and the rate to be reduced. A protocol which stops increasing its rate before this event happens can, in principle, achieve better performance than standard TCP. In the absence of any other traffic, this is even true for TCP itself when its maximum send window is limited to the bandwidth*round-trip time (RTT) product.

TCP Vegas [Bra+94] is one of the first protocols known to have a smaller sending rate than standard TCP when both protocols share a bottleneck [Kur+00] -- yet it was designed to achieve more, not less, throughput than standard TCP. Indeed, when it is the only protocol on the bottleneck, the throughput of TCP Vegas is greater than that of standard TCP. Depending on the bottleneck queue length, TCP Vegas itself can be starved by standard TCP flows. This can be remedied to some degree by the RED Active Queue Management mechanism [RFC2309].

The congestion avoidance behavior is the protocol's most important feature, in terms of historical relevance as well as relevance in the context of this document (it has been shown that other elements of the protocol can sometimes play a greater role in its overall behavior [Hen+00]).
In congestion avoidance, once per RTT, TCP Vegas calculates the expected throughput as WindowSize / BaseRTT, where WindowSize is the current congestion window and BaseRTT is the minimum of all measured RTTs. The expected throughput is then compared with the actual (measured) throughput. If the actual throughput is smaller than the expected throughput minus a threshold beta, this is taken as a sign of congestion, causing the protocol to linearly decrease its rate. If the actual throughput is greater than the expected throughput minus a threshold alpha (with alpha < beta), this is taken as a sign that the network is underutilized, causing the protocol to linearly increase its rate.

TCP Vegas has been analyzed extensively. One of its most prominent properties is its fairness among multiple flows of the same kind: unlike standard TCP, it does not penalize flows with large propagation delays. While TCP Vegas was not the first protocol to use delay as a congestion indication, its predecessors (which can be found in [Bra+94]) are not discussed here because of the historical "landmark" role that TCP Vegas has taken in the literature.

Transport protocols that were designed to be non-intrusive include TCP-LP [Kuz+06] and TCP Nice [Ven+02]. Using a simple analytical model, the authors of [Kuz+06] illustrate the feasibility of this endeavor by showing that, due to the non-linear relationship between throughput and RTT, it is possible to remain transparent to standard TCP even when the flows under consideration have a larger RTT than the standard TCP flows.

TCP Nice [Ven+02] follows the same basic approach as TCP Vegas but improves upon it in some aspects. Because of its moderate linear-decrease congestion response, TCP Vegas can affect standard TCP despite its ability to detect congestion early.
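The per-RTT congestion-avoidance test of TCP Vegas described above can be sketched as follows. This is an illustrative simplification, not code from [Bra+94]: the variable names, the units, and the one-segment window adjustments are ours.

```python
def vegas_ca_step(cwnd, base_rtt, current_rtt, alpha, beta):
    """One TCP Vegas congestion-avoidance decision, taken once per RTT.

    Illustrative sketch: cwnd is in segments, RTTs in seconds, and the
    thresholds alpha < beta are in the same (throughput) units as
    expected and actual.
    """
    expected = cwnd / base_rtt     # throughput if no queueing occurred
    actual = cwnd / current_rtt    # actually measured throughput
    if actual < expected - beta:
        cwnd -= 1                  # congestion building up: linear decrease
    elif actual > expected - alpha:
        cwnd += 1                  # network underutilized: linear increase
    return cwnd                    # otherwise: leave the window unchanged
```

The congested branch decreases the window linearly rather than multiplicatively; this is the moderate decrease response referred to above.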
TCP Nice removes this issue by halving the congestion window (at most once per RTT, like standard TCP) instead of reducing it linearly. To avoid being too conservative, this is only done if a fixed, predefined fraction of delay-based incipient-congestion signals appears within one RTT. Otherwise, TCP Nice falls back to the congestion avoidance rules of TCP Vegas if no packet was lost, or of standard TCP if a packet was lost. One more feature of TCP Nice is its ability to support a congestion window of less than one packet, by clocking out single packets over more than one RTT. With ns-2 simulations and real-life experiments using a Linux implementation, the authors of [Ven+02] show that TCP Nice achieves its goal of efficiently utilizing spare capacity while being non-intrusive to standard TCP.

Unlike TCP Vegas and TCP Nice, TCP-LP [Kuz+06] uses the one-way delay (OWD), rather than the RTT, as an indicator of incipient congestion. This is done to avoid reacting to delay fluctuations that are caused by reverse cross-traffic. Using the TCP Timestamps option [RFC1323], the OWD is determined as the difference between the receiver's Timestamp value in the ACK and the original Timestamp value that the receiver copied into the ACK. While the result of this subtraction can only precisely represent the OWD if clocks are synchronized, its absolute value is of no concern to TCP-LP, and hence clock synchronization is unnecessary. Using a constant smoothing parameter, TCP-LP calculates an Exponentially Weighted Moving Average (EWMA) of the measured OWD and checks whether the result exceeds a threshold within the range of the minimum and maximum OWD seen during the connection's lifetime; if it does, this condition is interpreted as an "early congestion indication". The minimum and maximum OWD values are initialized during the slow-start phase.
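The OWD-based detection just described can be sketched as follows. This is a simplified illustration, not the TCP-LP implementation: the class name is ours, and the smoothing and threshold constants are placeholders for the constants that [Kuz+06] leaves as protocol parameters.

```python
class TcpLpOwdDetector:
    """Early-congestion detection in the style of TCP-LP (sketch)."""

    def __init__(self, smoothing=0.125, threshold=0.15):
        self.smoothing = smoothing   # EWMA weight of a new OWD sample
        self.threshold = threshold   # position within the [min, max] OWD range
        self.sowd = None             # smoothed one-way delay
        self.min_owd = float('inf')  # initialized during slow start
        self.max_owd = float('-inf')

    def on_ack(self, ts_echoed, ts_receiver):
        # OWD sample: receiver's Timestamp minus the echoed sender Timestamp.
        # Without clock synchronization the absolute value is meaningless,
        # but TCP-LP only compares samples against each other.
        owd = ts_receiver - ts_echoed
        self.min_owd = min(self.min_owd, owd)
        self.max_owd = max(self.max_owd, owd)
        if self.sowd is None:
            self.sowd = owd
        else:
            self.sowd = ((1 - self.smoothing) * self.sowd
                         + self.smoothing * owd)
        limit = self.min_owd + self.threshold * (self.max_owd - self.min_owd)
        # True means "early congestion indication"
        return self.sowd > limit
```

A rising smoothed OWD eventually crosses the threshold line between the minimum and maximum observed delays, triggering the early congestion indication that drives the reaction described next.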
Regarding its reaction to an early congestion indication, TCP-LP tries to strike a middle ground between the overly conservative choice of immediately setting the congestion window to one packet and the presumably too aggressive choice of halving the congestion window like standard TCP. It does so by halving the window at first in response to an early congestion indication, then initializing an "inference time-out timer", and maintaining the window size until this timer fires. If another early congestion indication appears during this "inference phase", the window is then set to 1; otherwise, the window is maintained and TCP-LP continues to increase it in the standard additive-increase fashion. This method ensures that it takes at least two RTTs for a TCP-LP flow to decrease its window to 1, and that, like standard TCP, TCP-LP reacts to congestion at most once per RTT.

With ns-2 simulations and real-life experiments using a Linux implementation, the authors of [Kuz+06] show that TCP-LP is largely non-intrusive to TCP traffic while at the same time enabling it to utilize a large portion of the excess network bandwidth, which is fairly shared among competing TCP-LP flows. They also show that using their protocol for bulk data transfers greatly reduces file transfer times of competing best-effort web traffic.

Sync-TCP [Wei+05] follows a similar approach to TCP-LP, adapting its reaction to congestion according to changes in the OWD. By comparing the estimated (average) forward queuing delay to the maximum observed delay, Sync-TCP adapts the AIMD parameters depending on the trend followed by the average delay over an observation window.
Even though the authors of [Wei+05] did not explicitly consider its use as an LBE protocol, Sync-TCP was designed to react early to incipient congestion, while grabbing available bandwidth more aggressively than a standard TCP in congestion-avoidance mode.

Delay-based congestion control is also at the basis of proposals aiming at adapting TCP's congestion avoidance to very high-speed networks. Some of these proposals [Tan+06][Sri+08][Liu+08] are hybrid loss- and delay-based mechanisms, whereas others [Dev+03][Wei+06][Cha+10] are variants of Vegas based solely on delays.

2.1.  Accuracy of delay-based congestion predictors

The accuracy of delay-based congestion predictors has been the subject of a good deal of research; see, e.g., [Bia+03], [Mar+03], [Pra+04], [Rew+06], [McC+08]. The main result of most of these studies is that delays (or, more precisely, round-trip times) are, in general, weakly correlated with congestion. Several factors may induce such a poor correlation:

o  Bottleneck buffer size: in principle, a delay-based mechanism could be made "more than TCP friendly" _if_ buffers are "large enough", so that RTT fluctuations and/or deviations from the minimum RTT can be detected by the end host with reasonable accuracy. Otherwise, it may be hard to distinguish real delay variations from measurement noise.

o  RTT measurement issues: RTT samples may suffer from poor resolution, due to timers that are too coarse-grained with respect to the scale of delay fluctuations. Also, under some circumstances (e.g., when the flow rate is much lower than the link bandwidth), a flow may obtain a very noisy estimate of the RTT due to undersampling. For TCP, other potential sources of measurement noise include TCP segmentation offloading (TSO) and the use of delayed ACKs [Hay10].
o  Level of statistical multiplexing and RTT sampling: it may be easy for an individual flow to "miss" loss/queue-overflow events, especially if the number of flows sharing a bottleneck buffer is significant. This is nicely illustrated, e.g., in Fig. 2 of [McC+08].

o  Impact of wireless links: several mechanisms that are typical of wireless links, like link-layer scheduling and error recovery, may induce strong delay fluctuations over short time scales [Gur+04].

Whether a delay-based protocol behaves in its intended manner (e.g., it is "more than TCP friendly", or it grabs available bandwidth in a very aggressive manner) may therefore depend on the accuracy issues listed above. Moreover, protocols like Vegas need to keep an estimate of the minimum ("base") delay; this makes such protocols highly sensitive to possible changes of the end-to-end route during the lifetime of the flow [Mo+99].

TODO: incorporate [Bha+07] and any references therein that may be missing.

2.2.  Delay-based congestion control = LBE?

Regarding the issue of false positives/false negatives with a delay-based congestion detector, most studies focus on the loss of throughput coming from the erroneous detection of queue build-up and of alleviation of congestion. Arguably, for an LBE transport protocol it is better to err on the "more-than-TCP-friendly" side, that is, to always yield to _perceived_ congestion, whether it is "real" or not; however, failure to detect congestion (due to one of the above accuracy problems) would result in behavior that is not LBE. For instance, consider the case in which the bottleneck buffer is small, so that the contribution of queueing delay at the bottleneck to the global end-to-end delay is small.
In such a case, a flow using a delay-based mechanism might end up consuming a good deal of bandwidth with respect to a competing standard TCP flow, unless it also incorporates a suitable reaction to loss.

Consider also the case in which the bottleneck link is already (very) congested. In such a scenario, delay variations may be quite small; hence, it may be very difficult to tell an empty queue from a heavily loaded queue in terms of delay fluctuation. Therefore, a newly arriving delay-based flow may start sending faster when there is already heavy congestion, eventually driving away loss-based flows [Sha+05].

3.  Non-delay-based transport protocols

4CP [Liu+07], which stands for "Competitive and Considerate Congestion Control", is a protocol which provides an LBE service by changing the window control rules of standard TCP. A "virtual window" is maintained which, during a so-called "bad congestion phase", is reduced below a predefined minimum value of the actual congestion window. The congestion window is only increased again once the virtual window exceeds this minimum; in this way, the virtual window controls the duration during which the sender transmits at a fixed minimum rate. The 4CP congestion avoidance algorithm allows a target average window to be set, and avoids starvation of "background" flows while bounding the impact on "foreground" flows. Its performance was evaluated in ns-2 simulations and in real-life experiments with a kernel-level implementation in Microsoft Windows Vista.

Some work has been done on applying weights to congestion control mechanisms, allowing a flow to be as aggressive as a number of parallel TCP flows. This is usually motivated by the fact that users may want to assign different priorities to different flows.
The first, and best-known, such protocol is MulTCP [Cro+98], which emulates N TCPs in a rather simple fashion. Improved versions of the parallel-TCP idea are presented in [Hac+04] and [Hac+08], and there is also a variant, Probe-Aided (PA-)MulTCP [Kuo+08], where only one feedback loop is applied to control a larger traffic aggregate. Another protocol, CP [Ott+04], applies the same concept to the TFRC protocol [RFC5348] in order to provide such fairness differentiation for multimedia flows.

The general assumption underlying all of the above work is that these protocols are "N-TCP-friendly", i.e., they are as TCP-friendly as N TCPs, where N is a positive (and possibly natural) number greater than or equal to 1. The MulTFRC [Dam+09] protocol, another extension of TFRC for multiple flows, is however able to support values of N between 0 and 1, making it applicable as a mechanism for an LBE service. Since it does not react to delay like the mechanisms above, but adjusts its rate like TFRC, it can probably be expected to be more aggressive than mechanisms such as TCP Nice or TCP-LP. This also means that MulTFRC is less likely to be prone to starvation, as its aggressiveness is tunable at a fine granularity, even when N is between 0 and 1.

4.  Application layer approaches

A simplistic, application-level approach to a background transport service may consist simply of scheduling automated transfers at times when the network is lightly loaded, as described, e.g., in [Dyk+02]. An issue with such a technique is that it may not be appropriate for applications like peer-to-peer file transfer, since the notion of an "off-peak hour" is not meaningful when end hosts may be located anywhere in the world.

TCP's built-in flow control can be used as a means to achieve a low-priority transport service.
For instance, the mechanism described in [Spr+00] controls the bandwidth by letting the receiver intelligently manipulate the receiver window of standard TCP. This is done because the authors assume a client-server setting where the receiver's access link is typically the bottleneck. The scheme incorporates a delay-based calculation of the expected queue length at the bottleneck, which is quite similar to the calculation in the delay-based protocols above, e.g., TCP Vegas. Using a Linux implementation in which TCP flows are classified according to their application's needs, it is shown that a significant improvement in packet latency can be attained over an unmodified system, while maintaining good link utilization.

A similar method is employed by Mehra et al. [Meh+03], where both the advertised receiver window and the delay in sending ACK messages are dynamically adapted to attain a given rate. As in [Spr+00], Mehra et al. assume that the bottleneck is located at the receiver's access link. However, they also propose a bandwidth-sharing system, which allows the bandwidth allocated to different flows to be controlled, and a minimum rate to be allotted to some flows.

Receiver window tuning is also done in [Key+04], where choosing the right value for the window is phrased as an optimization problem. On this basis, two algorithms are presented: binary search, which reaches a good operating point faster than the other algorithm but fluctuates, and stochastic optimization, which does not fluctuate but converges more slowly than binary search. These algorithms merely use the previous receiver window and the amount of data received during the previous control interval as input. According to [Key+04], the encouraging simulation results suggest that such an application-level mechanism can work almost as well as a transport-layer scheme like TCP-LP.
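The receiver-window manipulation underlying these schemes can be illustrated with a minimal sketch. This is our own simplification, not the algorithm of [Spr+00], [Meh+03], or [Key+04]: a receiver that wants to cap the sender near a target rate can advertise a window of roughly target_rate * RTT bytes, exploiting the fact that a TCP sender can have at most one advertised window of data in flight per RTT.

```python
def lbe_advertised_window(target_rate, rtt, mss, recv_buffer):
    """Receiver window (in bytes) that caps the sender near target_rate.

    Illustrative sketch: target_rate is in bytes/s, rtt in seconds,
    mss and recv_buffer in bytes. All parameter names are ours.
    """
    window = int(target_rate * rtt)        # bytes in flight per RTT
    window = (window // mss) * mss         # round down to whole segments
    window = max(window, mss)              # always allow at least one segment
    return min(window, recv_buffer)        # never exceed the actual buffer
```

The schemes above refine this basic idea: [Spr+00] derives the target from a Vegas-like estimate of the bottleneck queue length, [Meh+03] additionally delays ACKs, and [Key+04] searches for the window value directly.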
Another way of dealing with non-interactive flows, such as web prefetching, is to rate-limit the transfer of such bursty traffic [Cro+98b]. Note that one of the techniques used in [Cro+98b] is, precisely, to have the downloading application adapt the TCP receiver window, so as to reduce the data rate to the minimum needed.

The so-called Background Intelligent Transfer Service (BITS) [BITS], implemented in several versions of Microsoft Windows, uses a system of application-layer priority levels for file-transfer jobs, together with monitoring of the bandwidth usage of the network interface (or, in more recent versions, of the network gateway connected to the end host), so that low-priority transfers give way to both high-priority (foreground) transfers and traffic from interactive applications.

5.  Orthogonal work

Various suggestions have been published for realizing an LBE service by influencing the way packets are treated in routers. One example is the Persistent Class Based Queuing (P-CBQ) scheme presented in [Car+01], which is a variant of Class Based Queuing (CBQ) with per-flow accounting. RFC 3662 [RFC3662] defines a DiffServ per-domain behavior (PDB) called "Lower Effort". Similar Lower-Effort PDBs have been tested and deployed, at least in research networks [Cho+03], [QBSS].

Harp [Kok+04] realizes an LBE service by dissipating background traffic to less-utilized paths of the network. This is achieved without changing routers, by using edge nodes as relays. According to the authors, these edge nodes should be gateways of organizations, in order to align their scheme with usage incentives; however, the technical solution would also work if Harp was only deployed in end hosts. It detects impending congestion by looking at delay, similar to TCP Nice [Ven+02], and manages to improve utilization and fairness over pure single-path solutions.
An entirely different approach is taken in [Egg+05]: here, the priority of a flow is reduced via a generic idletime scheduling strategy in a host's operating system. While the results presented in this paper show that the new scheduler can effectively shield regular tasks from low-priority ones (e.g., TCP from greedy UDP) with only a minor performance impact, it is an underlying assumption that all involved end hosts use the idletime scheduler. In other words, it is not the focus of this work to protect a standard TCP flow originating from a host where the presented scheduling scheme may not be implemented.

In [Ven+08], Venkataraman et al. propose a transport-layer approach to leverage an existing, network-layer LBE service based on priority queueing. The transport protocol, which they call PLT (Priority-Layer Transport), splits a layer-4 connection into two flows: a high-priority one and a low-priority one. The high-priority flow is sent over the higher-priority queueing class (in principle, offering a best-effort service) using an AIMD, TCP-like congestion control mechanism. The low-priority flow, which is mapped to the LBE class, uses a non-TCP-friendly congestion control algorithm. The goal of PLT is thus to maximize its aggregate throughput by exploiting unused capacity in an aggressive way, while protecting standard TCP flows carried by the best-effort class. [Ott+03] proposes simple changes to the AIMD parameters of TCP for use over a network-layer LBE service, so that such "filler" traffic may aggressively consume unused bandwidth. Note that [Ven+08] also considers a mechanism for detecting the lack of priority queueing in the network, so that the non-TCP-friendly flow may be inhibited.
The PLT receiver monitors the loss rate of both flows; if the high-priority flow starts seeing losses while the low-priority one does not experience 100% loss, this is taken as an indication of the absence of strict priority queueing.

Another technique is that used by protocols like NF-TCP [Aru+10b], where a bandwidth-estimation module integrated into the transport protocol allows it to rapidly take advantage of free capacity. NF-TCP combines this with early congestion detection based on Explicit Congestion Notification (ECN) [RFC3168] and RED [RFC2309]; when congestion starts building up, appropriate tuning of a RED queue allows low-priority (i.e., NF-TCP) packets to be marked with a much higher probability than high-priority (i.e., standard TCP) packets, so that low-priority flows yield bandwidth before standard TCP flows do.

6.  Acknowledgements

The authors would like to thank Dragana Damjanovic, Melissa Chavez, Yinxia Zhao and Mayutan Arumaithurai for reference pointers.

7.  IANA Considerations

This memo includes no request to IANA.

8.  Security Considerations

This document introduces no new security considerations.

9.  Informative References

[Aru+10b]  Arumaithurai, M., Fu, X., and K. Ramakrishnan, "NF-TCP: A Network Friendly TCP Variant for Background Delay-Insensitive Applications", Technical Report No. IFI-TB-2010-05, Institute of Computer Science, University of Goettingen, Germany, September 2010.

[BITS]  Microsoft, "Windows Background Intelligent Transfer Service".

[Bha+07]  Bhandarkar, S., Reddy, A., Zhang, Y., and D. Loguinov, "Emulating AQM from end hosts", Proceedings of ACM SIGCOMM 2007, 2007.

[Bia+03]  Biaz, S. and N. Vaidya, "Is the round-trip time correlated with the number of packets in flight?", Proceedings of the 3rd ACM SIGCOMM Conference on Internet Measurement (IMC '03), pages 273-278, 2003.
   [Bra+94]   Brakmo, L., O'Malley, S., and L. Peterson, "TCP Vegas:
              New techniques for congestion detection and avoidance",
              Proceedings of SIGCOMM '94, pp. 24-35, August 1994.

   [Car+01]   Carlberg, K., Gevros, P., and J. Crowcroft, "Lower than
              best effort: a design and implementation", Workshop on
              Data Communication in Latin America and the Caribbean
              2001, San Jose, Costa Rica, pp. 244-265, 2001.

   [Cha+10]   Chan, Y., Lin, C., Chan, C., and C. Ho, "CODE TCP: A
              competitive delay-based TCP", Computer Communications,
              33(9):1013-1029, June 2010.

   [Cho+03]   Chown, T., Ferrari, T., Leinen, S., Sabatino, R., Simar,
              N., and S. Venaas, "Less than Best Effort: Application
              Scenarios and Experimental Results", Proceedings of
              QoS-IP, pp. 131-144, February 2003.

   [Cro+98]   Crowcroft, J. and P. Oechslin, "Differentiated end-to-end
              Internet services using a weighted proportional fair
              sharing TCP", ACM SIGCOMM Computer Communication Review,
              28(3):53-69, July 1998.

   [Cro+98b]  Crovella, M. and P. Barford, "The network effects of
              prefetching", Proceedings of Infocom 1998, April 1998.

   [Dam+09]   Damjanovic, D. and M. Welzl, "MulTFRC: Providing Weighted
              Fairness for Multimedia Applications (and others too!)",
              ACM SIGCOMM Computer Communication Review, 39(3), July
              2009.

   [Dev+03]   De Vendictis, A., Baiocchi, A., and M. Bonacci, "Analysis
              and enhancement of TCP Vegas congestion control in a
              mixed TCP Vegas and TCP Reno network scenario",
              Performance Evaluation, 53(3-4):225-253, 2003.

   [Dyk+02]   Dykes, S. and K. Robbins, "Limitations and benefits of
              cooperative proxy caching", IEEE Journal on Selected
              Areas in Communications, 20(7):1290-1304, September 2002.

   [Egg+05]   Eggert, L. and J. Touch, "Idletime Scheduling with
              Preemption Intervals", Proceedings of the 20th ACM
              Symposium on Operating Systems Principles (SOSP 2005),
              Brighton, United Kingdom, pp. 249-262, October 2005.

   [Gur+04]   Gurtov, A. and S. Floyd, "Modeling wireless links for
              transport protocols", ACM SIGCOMM Computer Communication
              Review, 34(2):85-96, April 2004.

   [Hac+04]   Hacker, T., Noble, B., and B. Athey, "Improving
              Throughput and Maintaining Fairness using Parallel TCP",
              Proceedings of Infocom 2004, March 2004.

   [Hac+08]   Hacker, T. and P. Smith, "Stochastic TCP: A Statistical
              Approach to Congestion Avoidance", Proceedings of
              PFLDnet 2008, March 2008.

   [Hay10]    Hayes, D., "Timing enhancements to the FreeBSD kernel to
              support delay and rate based TCP mechanisms", Technical
              Report 100219A, Centre for Advanced Internet
              Architectures, Swinburne University of Technology,
              February 2010.

   [Hen+00]   Hengartner, U., Bolliger, J., and T. Gross, "TCP Vegas
              revisited", Proceedings of Infocom 2000, March 2000.

   [Key+04]   Key, P., Massoulie, L., and B. Wang, "Emulating Low-
              Priority Transport at the Application Layer: a Background
              Transfer Service", Proceedings of ACM SIGMETRICS 2004,
              January 2004.

   [Kok+04]   Kokku, R., Bohra, A., Ganguly, S., and A. Venkataramani,
              "A Multipath Background Network Architecture",
              Proceedings of Infocom 2007, May 2007.

   [Kuo+08]   Kuo, F. and X. Fu, "Probe-Aided MulTCP: an aggregate
              congestion control mechanism", ACM SIGCOMM Computer
              Communication Review, 38(1):17-28, January 2008.

   [Kur+00]   Kurata, K., Hasegawa, G., and M. Murata, "Fairness
              Comparisons Between TCP Reno and TCP Vegas for Future
              Deployment of TCP Vegas", Proceedings of INET 2000,
              July 2000.

   [Kuz+06]   Kuzmanovic, A. and E. Knightly, "TCP-LP: low-priority
              service via end-point congestion control", IEEE/ACM
              Transactions on Networking, 14(4):739-752, August 2006.

   [Liu+07]   Liu, S., Vojnovic, M., and D. Gunawardena, "Competitive
              and Considerate Congestion Control for Bulk Data
              Transfers", Proceedings of IWQoS 2007, June 2007.

   [Liu+08]   Liu, S., Basar, T., and R. Srikant, "TCP-Illinois: A
              loss- and delay-based congestion control algorithm for
              high-speed networks", Performance Evaluation,
              65(6-7):417-440, 2008.

   [Mar+03]   Martin, J., Nilsson, A., and I. Rhee, "Delay-based
              congestion avoidance for TCP", IEEE/ACM Transactions on
              Networking, 11(3):356-369, June 2003.

   [McC+08]   McCullagh, G. and D. Leith, "Delay-based congestion
              control: Sampling and correlation issues revisited",
              Technical Report, Hamilton Institute, 2008.

   [Meh+03]   Mehra, P., Zakhor, A., and C. De Vleeschouwer,
              "Receiver-Driven Bandwidth Sharing for TCP", Proceedings
              of Infocom 2003, April 2003.

   [Mo+99]    Mo, J., La, R., Anantharam, V., and J. Walrand, "Analysis
              and Comparison of TCP Reno and TCP Vegas", Proceedings
              of Infocom 1999, March 1999.

   [Ott+03]   Ott, B., Warnky, T., and V. Liberatore, "Congestion
              control for low-priority filler traffic", SPIE QoS 2003
              (Quality of Service over Next-Generation Internet),
              Proceedings of SPIE, Vol. 5245, p. 154, Monterey, CA,
              USA, July 2003.

   [Ott+04]   Ott, D., Sparks, T., and K. Mayer-Patel, "Aggregate
              congestion control for distributed multimedia
              applications", Proceedings of Infocom 2004, March 2004.

   [Pra+04]   Prasad, R., Jain, M., and C. Dovrolis, "On the
              effectiveness of delay-based congestion avoidance",
              Proceedings of PFLDnet 2004, 2004.

   [QBSS]     "QBone Scavenger Service (QBSS)", Internet2 QBone
              Initiative.

   [RFC1323]  Jacobson, V., Braden, B., and D. Borman, "TCP Extensions
              for High Performance", RFC 1323, May 1992.
   [RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B.,
              Deering, S., Estrin, D., Floyd, S., Jacobson, V.,
              Minshall, G., Partridge, C., Peterson, L., Ramakrishnan,
              K., Shenker, S., Wroclawski, J., and L. Zhang,
              "Recommendations on Queue Management and Congestion
              Avoidance in the Internet", RFC 2309, April 1998.

   [RFC3168]  Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
              of Explicit Congestion Notification (ECN) to IP",
              RFC 3168, September 2001.

   [RFC3662]  Bless, R., Nichols, K., and K. Wehrle, "A Lower Effort
              Per-Domain Behavior (PDB) for Differentiated Services",
              RFC 3662, December 2003.

   [RFC5348]  Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP
              Friendly Rate Control (TFRC): Protocol Specification",
              RFC 5348, September 2008.

   [Rew+06]   Rewaskar, S., Kaur, J., and D. Smith, "Why don't delay-
              based congestion estimators work in the real-world?",
              Technical Report TR06-001, University of North Carolina
              at Chapel Hill, Dept. of Computer Science, January 2006.

   [Sha+05]   Shalunov, S., Dunn, L., Gu, Y., Low, S., Rhee, I.,
              Senger, S., Wydrowski, B., and L. Xu, "Design Space for
              a Bulk Transport Tool", Technical Report, Internet2
              Transport Group, May 2005.

   [Spr+00]   Spring, N., Chesire, M., Berryman, M., Sahasranaman, V.,
              Anderson, T., and B. Bershad, "Receiver based management
              of low bandwidth access links", Proceedings of Infocom
              2000, vol. 1, pp. 245-254, 2000.

   [Sri+08]   Sridharan, M., Tan, K., Bansal, D., and D. Thaler,
              "Compound TCP: A new TCP congestion control for high-
              speed and long distance networks", Internet-Draft
              draft-sridharan-tcpm-ctcp, work in progress,
              November 2008.

   [Tan+06]   Tan, K., Song, J., Zhang, Q., and M. Sridharan, "A
              Compound TCP approach for high-speed and long distance
              networks", Proceedings of IEEE INFOCOM 2006, Barcelona,
              Spain, April 2006.

   [Ven+02]   Venkataramani, A., Kokku, R., and M. Dahlin, "TCP Nice:
              a mechanism for background transfers", Proceedings of
              OSDI '02, 2002.

   [Ven+08]   Venkataraman, V., Francis, P., Kodialam, M., and T.
              Lakshman, "A priority-layered approach to transport for
              high bandwidth-delay product networks", Proceedings of
              ACM CoNEXT 2008, Madrid, December 2008.

   [Wei+05]   Weigle, M., Jeffay, K., and F. Smith, "Delay-based early
              congestion detection and adaptation in TCP: impact on
              web performance", Computer Communications,
              28(8):837-850, May 2005.

   [Wei+06]   Wei, D., Jin, C., Low, S., and S. Hegde, "FAST TCP:
              Motivation, architecture, algorithms, performance",
              IEEE/ACM Transactions on Networking, 14(6):1246-1259,
              December 2006.

Authors' Addresses

   Michael Welzl
   University of Oslo
   Department of Informatics, PO Box 1080 Blindern
   N-0316 Oslo
   Norway

   Phone: +43 512 507 6110
   Email: michawe@ifi.uio.no

   David Ros
   Telecom Bretagne
   Rue de la Chataigneraie, CS 17607
   35576 Cesson Sevigne cedex
   France

   Phone: +33 2 99 12 70 46
   Email: david.ros@telecom-bretagne.eu