2 Internet Engineering Task Force N. Kuhn, Ed. 3 Internet-Draft Telecom Bretagne 4 Intended status: Informational P. Natarajan, Ed. 5 Expires: March 22, 2015 Cisco Systems 6 D. Ros 7 Simula Research Laboratory AS 8 N.
Khademi 9 University of Oslo 10 September 18, 2014 12 AQM Characterization Guidelines 13 draft-ietf-aqm-eval-guidelines-00 15 Abstract 17 Unmanaged large buffers in today's networks have given rise to a slew 18 of performance issues. These performance issues can be addressed by 19 some form of Active Queue Management (AQM), optionally in combination 20 with a packet scheduling scheme such as fair queuing. The IETF AQM 21 and packet scheduling working group was formed to standardize AQM 22 schemes that are robust, easily implemented, and successfully 23 deployed in today's networks. This document describes various 24 criteria for performing precautionary characterizations of AQM 25 proposals. This document also helps in ascertaining whether any 26 given AQM proposal should be taken up for standardization by the AQM 27 WG. 29 Status of This Memo 31 This Internet-Draft is submitted in full conformance with the 32 provisions of BCP 78 and BCP 79. 34 Internet-Drafts are working documents of the Internet Engineering 35 Task Force (IETF). Note that other groups may also distribute 36 working documents as Internet-Drafts. The list of current Internet- 37 Drafts is at http://datatracker.ietf.org/drafts/current/. 39 Internet-Drafts are draft documents valid for a maximum of six months 40 and may be updated, replaced, or obsoleted by other documents at any 41 time. It is inappropriate to use Internet-Drafts as reference 42 material or to cite them other than as "work in progress." 44 This Internet-Draft will expire on March 22, 2015. 46 Copyright Notice 48 Copyright (c) 2014 IETF Trust and the persons identified as the 49 document authors. All rights reserved. 51 This document is subject to BCP 78 and the IETF Trust's Legal 52 Provisions Relating to IETF Documents 53 (http://trustee.ietf.org/license-info) in effect on the date of 54 publication of this document. 
Please review these documents 55 carefully, as they describe your rights and restrictions with respect 56 to this document. Code Components extracted from this document must 57 include Simplified BSD License text as described in Section 4.e of 58 the Trust Legal Provisions and are provided without warranty as 59 described in the Simplified BSD License. 61 Table of Contents 63 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 64 1.1. Guidelines for AQM designers . . . . . . . . . . . . . . 4 65 1.2. Reducing the latency and maximizing the goodput . . . . . 5 66 1.3. Glossary . . . . . . . . . . . . . . . . . . . . . . . . 5 67 1.4. Requirements Language . . . . . . . . . . . . . . . . . . 6 68 2. End-to-end metrics . . . . . . . . . . . . . . . . . . . . . 6 69 2.1. Flow Completion time . . . . . . . . . . . . . . . . . . 6 70 2.2. Packet loss . . . . . . . . . . . . . . . . . . . . . . . 6 71 2.3. Packet loss synchronization . . . . . . . . . . . . . . . 7 72 2.4. Goodput . . . . . . . . . . . . . . . . . . . . . . . . . 7 73 2.5. Latency and jitter . . . . . . . . . . . . . . . . . . . 8 74 2.6. Discussion on the trade-off between latency and goodput . 8 75 3. Generic set up for evaluations . . . . . . . . . . . . . . . 8 76 3.1. Topology and notations . . . . . . . . . . . . . . . . . 8 77 3.2. Buffer size . . . . . . . . . . . . . . . . . . . . . . . 10 78 3.3. Congestion controls . . . . . . . . . . . . . . . . . . . 10 79 4. Various TCP variants . . . . . . . . . . . . . . . . . . . . 10 80 4.1. TCP-friendly Sender . . . . . . . . . . . . . . . . . . . 11 81 4.2. Aggressive Transport Sender . . . . . . . . . . . . . . . 11 82 4.3. Unresponsive Transport Sender . . . . . . . . . . . . . . 11 83 4.4. TCP initial congestion window . . . . . . . . . . . . . . 12 84 4.5. Traffic Mix . . . . . . . . . . . . . . . . . . . . . . . 13 85 5. RTT fairness . . . . . . . . . . . . . . . . . . . . . . . . 14 86 5.1. Motivation . . . . . . . . . . . . . . . . 
. . . . . . . 14 87 5.2. Required tests . . . . . . . . . . . . . . . . . . . . . 14 88 5.3. Metrics to evaluate the RTT fairness . . . . . . . . . . 14 89 6. Burst absorption . . . . . . . . . . . . . . . . . . . . . . 15 90 6.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . 15 91 6.2. Required tests . . . . . . . . . . . . . . . . . . . . . 15 92 7. Stability . . . . . . . . . . . . . . . . . . . . . . . . . . 16 93 7.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . 16 94 7.2. Required tests . . . . . . . . . . . . . . . . . . . . . 17 95 7.2.1. Mild Congestion . . . . . . . . . . . . . . . . . . . 17 96 7.2.2. Medium Congestion . . . . . . . . . . . . . . . . . . 17 97 7.2.3. Heavy Congestion . . . . . . . . . . . . . . . . . . 17 98 7.2.4. Varying congestion levels . . . . . . . . . . . . . . 17 99 7.2.5. Varying Available Bandwidth . . . . . . . . . . . . . 18 100 7.3. Parameter sensitivity and stability analysis . . . . . . 18 101 8. Implementation cost . . . . . . . . . . . . . . . . . . . . . 19 102 8.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . 19 103 8.2. Required discussion . . . . . . . . . . . . . . . . . . . 19 104 9. Operator control knobs and auto-tuning . . . . . . . . . . . 19 105 10. Interaction with ECN . . . . . . . . . . . . . . . . . . . . 20 106 10.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . 20 107 10.2. Required discussion . . . . . . . . . . . . . . . . . . 20 108 11. Interaction with scheduling . . . . . . . . . . . . . . . . . 20 109 11.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . 20 110 11.2. Required discussion . . . . . . . . . . . . . . . . . . 21 111 12. Discussion on methodology, metrics, AQM comparisons and 112 packet sizes . . . . . . . . . . . . . . . . . . . . . . . . 21 113 12.1. Methodology . . . . . . . . . . . . . . . . . . . . . . 21 114 12.2. Comments on metrics measurement . . . . . . . . . . . . 21 115 12.3. Comparing AQM schemes . . . . . . . . . . . . 
. . . . . 21 116 12.3.1. Performance comparison . . . . . . . . . . . . 22 117 12.3.2. Deployment comparison . . . . . . . . . . . 22 118 12.4. Packet sizes and congestion notification . . . . . . . . 23 119 13. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 23 120 14. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 23 121 15. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 23 122 16. Security Considerations . . . . . . . . . . . . . . . . . . . 23 123 17. References . . . . . . . . . . . . . . . . . . . . . . . . . 23 124 17.1. Normative References . . . . . . . . . . . . . . . . . . 23 125 17.2. Informative References . . . . . . . . . . . . . . . . . 24 126 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 25 128 1. Introduction 130 Active Queue Management (AQM) addresses the concerns arising from 131 using unnecessarily large and unmanaged buffers in order to improve 132 network and application performance. Several AQM algorithms have 133 been proposed in the past years, the most notable being Random Early 134 Detection (RED), BLUE, and Proportional Integral controller (PI), and 135 more recently CoDel [CODEL] and PIE [PIE]. In general, these 136 algorithms actively interact with the Transmission Control Protocol 137 (TCP) and any other transport protocol that deploys a congestion 138 control scheme to manage the amount of data they keep in the network. 139 The available buffer space in the routers and switches is large 140 enough to accommodate the short-term buffering requirements. AQM 141 schemes aim at reducing mean buffer occupancy, and therefore both 142 end-to-end delay and jitter. Some of these algorithms, notably RED, 143 have also been widely implemented in some network devices. However, 144 any potential benefits of the RED AQM scheme have not been realized 145 since RED is reported to be usually turned off.
The main reason for 146 this reluctance to use RED in today's deployments is its sensitivity 147 to the operating conditions in the network and the difficulty of 148 tuning its parameters. 150 A buffer is a physical volume of memory in which a queue or set of 151 queues are stored. In real implementations of switches, a global 152 memory is shared between the available devices: the size of the 153 buffer for a given communication is not well defined, as its 154 dedicated memory may vary over time and real-world buffering 155 architectures are complex. For the sake of simplicity, when speaking 156 of a specific queue in this document, "buffer size" refers to the 157 maximum amount of data the buffer may store, which may be measured in 158 bytes or packets. The rest of this memo therefore refers to the 159 maximum queue depth as the size of the buffer for a given 160 communication. 162 In order to meet mostly throughput-based SLA requirements and to 163 avoid packet drops, many home gateway manufacturers resort to 164 increasing the available memory beyond "reasonable values". This 165 increase is also referred to as Bufferbloat [BB2011]. Deploying 166 large unmanaged buffers on the Internet has led to an increase in 167 end-to-end delay, resulting in poor performance for latency-sensitive 168 applications such as real-time multimedia (e.g., voice, video, 169 gaming, etc.). This affects modern networking 170 equipment, especially consumer-grade equipment, to the point of 171 causing problems even with commonly used web services. Active queue management is 172 thus essential to control queuing delay and decrease network latency. 174 The AQM and Packet Scheduling working group was recently formed 175 within the TSV area to address the problems with large unmanaged 176 buffers in the Internet.
Specifically, the AQM WG is tasked with 177 standardizing AQM schemes that not only address concerns with such 178 buffers, but are also robust under a wide variety of operating 179 conditions. In order to ascertain whether the WG should undertake 180 standardizing an AQM proposal, the WG requires guidelines for 181 assessing AQM proposals. This document provides the necessary 182 characterization guidelines. 184 1.1. Guidelines for AQM designers 186 One of the key objectives behind formulating the guidelines is to 187 help ascertain whether a specific AQM is not only better than drop- 188 tail but also safe to deploy. The guidelines help to quantify AQM 189 schemes' performance in terms of latency reduction, goodput 190 maximization and the trade-off between the two. The guidelines also 191 help to discuss AQM's safe deployment, including self-adaptation, 192 stability analysis, fairness, design/implementation complexity and 193 robustness to different operating conditions. 195 This memo details generic characterization scenarios that any AQM 196 proposal MUST be evaluated against. Irrespective of whether or not 197 an AQM is standardized by the WG, we recommend that the relevant scenarios 198 and metrics discussed in this document be considered. This 199 document presents central aspects of an AQM algorithm that MUST be 200 considered whatever the context, such as burst absorption 201 capacity, RTT fairness or resilience to fluctuating network 202 conditions. These guidelines cannot cover every possible aspect 203 of a particular algorithm. In addition, it is worth noting that the 204 proposed criteria are not bound to a particular evaluation toolset. 205 These guidelines do not present context-dependent scenarios (such as 206 Wi-Fi, data-centers or rural broadband). 208 This document details how an AQM designer can rate the feasibility of 209 their proposal in different types of network devices (switches, 210 routers, firewalls, hosts, drivers, etc.)
where an AQM may be 211 implemented. 213 1.2. Reducing the latency and maximizing the goodput 215 The trade-off between reducing the latency and maximizing the goodput 216 is intrinsically linked to each AQM scheme and is key to evaluating 217 its performance. This trade-off MUST be considered in various 218 scenarios to ensure the safety of an AQM deployment. Whenever 219 possible, solutions should aim at both maximizing goodput and 220 minimizing latency. This document proposes guidelines that enable 221 the reader to quantify (1) reduction of latency, (2) maximization of 222 goodput and (3) the trade-off between the two. 224 Testers SHOULD discuss in a reference document the performance and 225 deployment of their proposal with regard to those of drop-tail: 226 basically, these guidelines provide the tools to 227 weigh the deployment costs of the proposed scheme against its potential 228 gain in performance. 230 1.3. Glossary 232 o AQM: there may be confusion whether a scheduling scheme is added 233 to an AQM or is a part of the AQM. The rest of this memo refers 234 to AQM as a dropping policy that does not feature a scheduling 235 scheme. 237 o buffer: a physical volume of memory in which a queue or set of 238 queues are stored. 240 o buffer size: the maximum amount of data that may be stored in a 241 buffer, measured in bytes or packets. 243 1.4. Requirements Language 245 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 246 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 247 document are to be interpreted as described in RFC 2119 [RFC2119]. 249 2. End-to-end metrics 251 End-to-end delay is the result of propagation delay, serialization 252 delay, service delay in a switch, medium-access delay and queuing 253 delay, summed over the network elements in the path.
AQM algorithms 254 may reduce the queuing delay by providing signals to the sender on 255 the emergence of congestion, but any impact on the goodput must be 256 carefully considered. This section presents the metrics that can 257 be used to better quantify (1) the reduction of latency, (2) 258 maximization of goodput and (3) the trade-off between the two. These 259 metrics SHOULD be considered to better assess the performance of an 260 AQM scheme. 262 The metrics listed in this section are not necessarily suited to 263 every type of traffic detailed in the rest of this document. It is 264 therefore not required to measure all of the following metrics. 266 2.1. Flow Completion time 268 The flow completion time is an important performance metric for the 269 end user. Since an AQM scheme may drop packets, 270 the flow completion time is directly linked to the dropping policy of 271 the AQM scheme. This metric helps to better assess the performance 272 of an AQM depending on the flow size. 274 2.2. Packet loss 276 Packet losses that may occur in a queue impact the end-to-end 277 performance at the receiver's side. 279 The tester MUST evaluate, at the receiver: 281 o the packet loss probability: this metric should be frequently 282 measured during the experiment as the long term loss probability 283 is of interest for steady-state scenarios only; 285 o the interval between consecutive losses: the time between two 286 losses should be measured. From the set of interval times, the 287 tester should present the median value, the minimum and maximum 288 values and the 10th and 90th percentiles. 290 2.3. Packet loss synchronization 292 One goal of an AQM algorithm should be to help avoid global 293 synchronization of flows going through the bottleneck buffer on which 294 the AQM operates ([RFC2309]).
It is therefore important to assess 295 the "degree" of packet-loss synchronization between flows, with and 296 without the AQM under consideration. 298 As discussed e.g. in [LOSS-SYNCH-MET-08], loss synchronization among 299 flows may be quantified by several, slightly different, metrics that 300 capture different aspects of the same issue. However, in real-world 301 measurements the choice of metric may be imposed by practical 302 considerations (e.g., whether fine-grained information on packet 303 losses in the bottleneck is available). For the purpose of AQM 304 characterization, a good candidate metric is the global 305 synchronization ratio, measuring the proportion of flows losing 306 packets during a loss event. [YU06] used this metric in real-world 307 experiments to characterize synchronization along arbitrary Internet 308 paths; the full methodology is described in [YU06]. 310 2.4. Goodput 312 Measuring the goodput enables an end-to-end appreciation of how well 313 the AQM improves transport and application performance. The measured 314 end-to-end goodput is linked to the AQM scheme's dropping policy -- 315 the fewer the packet drops, the fewer packets need retransmission, 316 minimizing AQM's impact on transport and application performance. 317 End-to-end goodput values help evaluate the AQM scheme's 318 effectiveness in minimizing packet drops that impact application 319 performance. 321 Measuring the goodput lets the tester evaluate to what 322 extent the AQM is able to maintain a high link utilization. This metric 323 should be obtained frequently during the experiment: the long term 324 goodput makes sense for steady-state scenarios only and may not 325 reflect how the introduction of AQM actually impacts the link 326 utilization.
It is worth pointing out that the fluctuations of this 327 measurement may depend on factors other than the introduction of an 328 AQM, such as physical layer losses, fluctuating bandwidths (Wi-Fi), 329 heavy congestion levels or transport layer congestion controls. 331 2.5. Latency and jitter 333 The end-to-end latency differs from the queuing delay: it is linked 334 to the network topology and the path characteristics. Moreover, the 335 jitter strongly depends on the traffic and the topology as well. The 336 introduction of an AQM scheme would impact these metrics and the 337 end-to-end evaluation of performance SHOULD consider them to better 338 assess the AQM schemes. 340 These guidelines advise that the tester SHOULD determine the minimum, 341 average and maximum measurements for these metrics and the 342 coefficient of variation for their average values as well. 344 2.6. Discussion on the trade-off between latency and goodput 346 The metrics presented in this section MAY be considered, in order to 347 discuss and quantify the trade-off between latency and goodput. 349 This trade-off can also be illustrated with figures following the 350 recommendations of Section 5 of [TCPEVAL2013]. Both the end-to-end 351 delay and the goodput should be measured every second. From 352 each of these sets of measurements, the 10th and 90th percentiles and 353 the median value should be computed. For each scenario, a graph can 354 be generated, where the x-axis shows the end-to-end delay and the 355 y-axis the goodput. This graph contributes to a better 356 understanding (1) of the delay/goodput trade-off for a given 357 congestion control mechanism, and (2) of how the goodput and average 358 queue size vary as a function of the traffic load. 360 3.
Generic set up for evaluations 362 This section presents the topology that can be used for each of the 363 following scenarios, the corresponding notations, and discusses various 364 assumptions that have been made in the document. 366 3.1. Topology and notations 367 +---------+ +-----------+ 368 |senders A| |receivers B| 369 +---------+ +-----------+ 371 +--------------+ +--------------+ 372 |traffic class1| |traffic class1| 373 |--------------| |--------------| 374 | SEN.Flow1.1 +---------+ +-----------+ REC.Flow1.1 | 375 | + | | | | + | 376 | | | | | | | | 377 | + | | | | + | 378 | SEN.Flow1.X +-----+ | | +--------+ REC.Flow1.X | 379 +--------------+ | | | | +--------------+ 380 + +-+---+---+ +--+--+---+ + 381 | |Router L | |Router R | | 382 | |---------| |---------| | 383 | | AQM | | | | 384 | | BuffSize| | | | 385 | | (Bsize) +-----+ | | 386 | +-----+--++ ++-+------+ | 387 + | | | | + 388 +--------------+ | | | | +--------------+ 389 |traffic classN| | | | | |traffic classN| 390 |--------------| | | | | |--------------| 391 | SEN.FlowN.1 +---------+ | | +-----------+ REC.FlowN.1 | 392 | + | | | | + | 393 | | | | | | | | 394 | + | | | | + | 395 | SEN.FlowN.Y +------------+ +-------------+ REC.FlowN.Y | 396 +--------------+ +--------------+ 398 Figure 1: Topology and notations 400 Figure 1 is a generic topology where: 402 o various classes of traffic can be introduced; 404 o the timing of each flow (i.e., when each flow starts and stops) 405 may be different; 407 o each class of traffic can comprise a varying number of flows; 409 o each link is characterized by a pair (RTT, capacity); 411 o flows are generated between A and B, sharing a bottleneck (Routers 412 L and R); 414 o the links are assumed to be asymmetric in terms of bandwidth: the 415 capacity from senders to receivers is higher than the one from 416 receivers to senders.
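For testers scripting such evaluations, the topology parameters above can be captured in a small, tool-agnostic description. The following Python sketch is illustrative only; all names and numeric values are hypothetical and are not prescribed by these guidelines:

```python
from dataclasses import dataclass

@dataclass
class Link:
    rtt_ms: float          # contribution of this link to the path RTT
    capacity_mbps: float   # link capacity

@dataclass
class Bottleneck:
    aqm: str               # AQM scheme under test (placeholder name)
    bsize_pkts: int        # BuffSize (Bsize): maximum queue depth, in packets

# Asymmetric bottleneck between Router L and Router R: the capacity from
# senders (A) to receivers (B) is higher than in the reverse direction.
forward = Link(rtt_ms=50.0, capacity_mbps=10.0)
reverse = Link(rtt_ms=50.0, capacity_mbps=1.0)
assert forward.capacity_mbps > reverse.capacity_mbps

bottleneck = Bottleneck(aqm="aqm-under-test", bsize_pkts=100)

# Each traffic class can comprise a varying number of flows, each with its
# own start/stop time (flows need not start simultaneously).
flows = [{"name": f"Flow1.{i}", "class": 1, "start_s": 0.0, "stop_s": 60.0}
         for i in range(1, 6)]
```

Such a description makes it easier to report, alongside the results, the exact (RTT, capacity) pairs and buffer size used in each experiment.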
418 This topology may not perfectly reflect actual topologies; however, 419 this simple topology is commonly used in the world of simulations and 420 small testbeds. This topology can be considered adequate to 421 evaluate AQM proposals, as proposed in [TCPEVAL2013]. The 422 tester should pay attention to the topology that has been used to 423 evaluate the AQM scheme against which they compare their proposal. 425 3.2. Buffer size 427 The size of the buffers MAY be carefully set considering the 428 bandwidth-delay product. However, if the context or the application 429 requires a specific buffer size, the tester MUST justify and detail 430 the way the maximum queue depth is set while presenting the results 431 of its evaluation. Indeed, the size of the buffer may impact the 432 AQM performance and is a dimensioning parameter that will be 433 considered for a fair comparison between AQM proposals. 435 3.3. Congestion controls 437 This memo features three kinds of congestion controls: 439 o TCP-friendly congestion controls: a baseline congestion control 440 for this category is TCP New Reno, as explained in [RFC5681]. 442 o Aggressive congestion controls: a baseline congestion control for 443 this category is TCP Cubic. 445 o Less-than Best Effort (LBE) congestion controls: an LBE congestion 446 control 'results in smaller bandwidth and/or delay impact on 447 standard TCP than standard TCP itself, when sharing a bottleneck 448 with it.' [RFC6297] 450 Recent transport layer protocols are not mentioned in the following 451 sections, for the sake of simplicity. 453 4. Various TCP variants 455 Network and end devices need to be configured with a reasonable 456 amount of buffers in order to absorb transient bursts. In some 457 situations, network providers configure devices with large buffers to 458 avoid packet drops and increase goodput.
Transmission Control 459 Protocol (TCP) fills up these unmanaged buffers until the TCP sender 460 receives a signal (packet drop) to cut down the sending rate. The 461 larger the buffer, the higher the buffer occupancy, and therefore the 462 queuing delay. On the other hand, an efficient AQM scheme sends out 463 early congestion signals to TCP senders so that the queuing delay is 464 brought under control. 466 Not all applications run over the same flavor of TCP. A variety of 467 senders generates different classes of traffic which may not react to 468 congestion signals (aka unresponsive flows) or may not cut down their 469 sending rate as expected (aka aggressive flows): AQM schemes aim at 470 maintaining the queuing delay under control, which is challenged if 471 such traffic is present. 473 This section provides guidelines to assess the performance of an AQM 474 proposal based on various metrics presented in Section 2 irrespective 475 of the traffic profiles involved -- different senders (TCP variants, 476 unresponsive, aggressive), traffic mix with different applications, 477 etc. 479 4.1. TCP-friendly Sender 481 This scenario helps to evaluate how an AQM scheme reacts to a TCP- 482 friendly transport sender. A single long-lived, non application 483 limited, TCP New Reno flow transmits data between sender A and 484 receiver B. Other TCP-friendly congestion control schemes, such as 485 TCP-Friendly Rate Control [RFC5348], MAY also be considered. 487 For each TCP-friendly transport considered, the graph described in 488 Section 2.6 could be generated. 490 4.2. Aggressive Transport Sender 492 This scenario helps to evaluate how an AQM scheme reacts to a 493 transport sender whose sending rate is more aggressive than a single 494 TCP-friendly sender. A single long-lived, non application limited, 495 TCP Cubic flow transmits data between sender A and receiver B. Other 496 aggressive congestion control schemes MAY also be considered.
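Whatever the sender flavor, the statistics behind the Section 2.6 graph (the median and the 10th/90th percentiles of per-second samples) can be computed along the following lines. This is only a sketch with made-up sample values, not a prescribed measurement tool:

```python
import statistics

def summarize(samples):
    """Return (10th percentile, median, 90th percentile) of per-second samples."""
    # statistics.quantiles with n=10 yields the nine deciles; indices 0 and 8
    # correspond to the 10th and 90th percentiles.
    deciles = statistics.quantiles(sorted(samples), n=10, method="inclusive")
    return deciles[0], statistics.median(samples), deciles[8]

# Hypothetical per-second measurements for one run of a scenario.
delay_ms = [12.0, 15.5, 11.2, 30.1, 14.8, 13.3, 16.0, 12.9, 18.7, 14.1]
goodput_mbps = [9.1, 8.7, 9.4, 6.2, 8.9, 9.0, 8.5, 9.2, 7.8, 8.8]

d10, d50, d90 = summarize(delay_ms)      # x-axis: end-to-end delay
g10, g50, g90 = summarize(goodput_mbps)  # y-axis: goodput
```

The resulting (delay, goodput) medians give one point per scenario, with the 10th/90th percentiles as whiskers, matching the graph layout recommended in Section 2.6.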
498 For each flavor of aggressive transport, the graph described in 499 Section 2.6 could be generated. 501 4.3. Unresponsive Transport Sender 503 This scenario helps evaluate how an AQM scheme reacts to a transport 504 sender that is not responsive to congestion signals (ECN marks and/or 505 packet drops) from the AQM scheme. Note that faulty transport 506 implementations on end hosts and/or faulty network elements en-route 507 that "hide" congestion signals in packet headers 508 [I-D.ietf-aqm-recommendation] may also lead to a similar situation, 509 such that the AQM scheme needs to adapt to unresponsive traffic. To 510 this end, these guidelines propose the following two scenarios. 512 The first scenario is the following. In order to create a test 513 environment that results in queue build-up, we consider unresponsive 514 flow(s) whose sending rate is greater than the bottleneck link 515 capacity between routers L and R. This scenario consists of a long- 516 lived non application limited UDP flow that transmits data between sender 517 A and receiver B. Graphs described in Section 2.6 could be generated. 519 The second scenario is the following. In order to test to what 520 extent the AQM scheme is able to keep the responsive fraction of the traffic under 521 control, this scenario considers a mixture of TCP-friendly and 522 unresponsive traffic. This scenario consists of a long-lived non 523 application limited UDP flow and a single long-lived, non application 524 limited, TCP New Reno flow that transmit data between sender A and 525 receiver B. As opposed to the first scenario, the rate of the UDP 526 traffic should not exceed half of the bottleneck capacity. For each 527 type of traffic, the graph described in Section 2.6 could be 529 generated. 531 4.4.
TCP initial congestion window 533 This scenario helps evaluate how an AQM scheme adapts to a traffic 534 mix consisting of TCP flows with different values for the initial 535 congestion window (IW). 537 For this scenario, we consider two types of flow that MUST be 538 generated between sender A and receiver B: 540 o a single long-lived non application limited TCP New Reno flow; 542 o a single long-lived application limited TCP New Reno flow, with an 543 IW set to 3 or 10 packets. The size of the data transmitted MUST 544 be strictly higher than 10 packets and should be lower than 100 545 packets. 547 The transmission of both flows must not start simultaneously: a 548 steady state must be achieved before the transmission of the 549 application limited flow. As a result, the transmission of the non 550 application limited flow MUST start before the transmission of the 551 application limited flow. 553 For each of these scenarios, the graph described in Section 2.6 could 554 be generated for each class of traffic. The completion time of the 555 application limited TCP flow could be measured. 557 4.5. Traffic Mix 559 This scenario helps to evaluate how an AQM scheme reacts to a traffic 560 mix consisting of different applications such as bulk transfer, web, 561 voice and video traffic. The test cases presented in this 562 subsection are inspired by Table 2 of [DOCSIS2013]: 564 o Bulk TCP transfer 566 o Web traffic 568 o VoIP 570 o Constant bit rate UDP traffic 572 o Adaptive video streaming 574 Figure 2 presents the various cases for the traffic that MUST be 575 generated between sender A and receiver B.
577 +----+-----------------------------+ 578 |Case| Number of flows | 579 + +----+----+----+---------+----+ 580 | |VoIP|Webs|CBR |AdaptVid |FTP | 581 +----+----+----+----+---------+----+ 582 |I | 1 | 1 | 0 | 0 | 0 | 583 | | | | | | | 584 |II | 1 | 1 | 0 | 0 | 1 | 585 | | | | | | | 586 |III | 1 | 1 | 0 | 0 | 5 | 587 | | | | | | | 588 |IV | 1 | 1 | 1 | 0 | 5 | 589 | | | | | | | 590 |V | 1 | 1 | 0 | 1 | 5 | 591 | | | | | | | 592 +----+----+----+----+---------+----+ 594 Figure 2: Traffic Mix scenarios 596 For each of these scenarios, the graph described in Section 2.6 could 597 be generated for each class of traffic. In addition, other metrics 598 such as end-to-end latency, jitter and flow completion time MUST be 599 generated. 601 5. RTT fairness 603 5.1. Motivation 605 The capability of AQM schemes to control the queuing delay highly 606 depends on the way end-to-end protocols react to congestion signals. 607 When the RTT varies, the behaviour of congestion controls is impacted 608 and so is the capability of AQM schemes to control the queue. It is 609 therefore important to assess the AQM schemes against a set of RTTs 610 (e.g., from 5 ms to 200 ms). 612 Also, asymmetry in terms of RTT between various paths SHOULD be 613 considered so that fairness between flows can be discussed, as 614 one flow may react faster to congestion than another. The introduction of 615 AQM schemes may improve this fairness. 617 Moreover, introducing an AQM scheme may result in the absence of 618 fairness between the flows, even when the RTTs are identical. This 619 potential lack of fairness SHOULD be evaluated. 621 5.2. Required tests 623 The topology that SHOULD be used is detailed in Figure 1: 625 o to evaluate the inter-RTT fairness, for each run, ten flows 626 divided into two categories SHOULD be introduced: Category I (Flow1.1, ..., Flow1.5), 627 whose RTT between sender A and Router L SHOULD be 5 ms.
Category 628 II (Flow2.1, ..., Flow2.5), whose RTT between sender A and Router 629 L SHOULD be in [5 ms; 200 ms]. 631 o to evaluate the impact of the RTT value on the AQM performance and 632 the intra-protocol fairness, for each run, ten flows (Flow1.1, 633 ..., Flow1.5 and Flow2.1, ..., Flow2.5) SHOULD be introduced. For 634 each experiment, the RTT SHOULD be the same for all the 635 flows and lie in [5 ms; 200 ms]. 637 These flows MUST use the same congestion control algorithm. 639 5.3. Metrics to evaluate the RTT fairness 641 The output that MUST be measured is: 643 o for the inter-RTT fairness: (1) the cumulated average goodput of 644 the flows from Category I, goodput_Cat_I (Section 2.4); (2) the 645 cumulated average goodput of the flows from Category II, 646 goodput_Cat_II (Section 2.4); (3) the ratio goodput_Cat_II/ 647 goodput_Cat_I; (4) the average packet drop rate for each category 648 (Section 2.2). 650 o for the intra-protocol RTT fairness: (1) the cumulated average 651 goodput of the ten flows (Section 2.4); (2) the average packet 652 drop rate for the ten flows (Section 2.2). 654 6. Burst absorption 656 6.1. Motivation 658 Packet arrivals can be bursty for various reasons. Dropping one 659 or more packets from a burst may result in performance penalties for 660 the corresponding flows since the dropped packets have to be 661 retransmitted. Performance penalties may turn into unmet SLAs and be 662 disincentives to AQM adoption. Therefore, an AQM scheme SHOULD be 663 designed to accommodate transient bursts. AQM schemes do not present 664 the same tolerance to bursts of packets arriving in the buffer: this 665 tolerance MUST be quantified. 667 Note that accommodating bursts translates to higher queue length and 668 queuing delay. Naturally, it is important that the AQM scheme brings 669 bursty traffic under control quickly.
   On the other hand, spiking packet drops in order to bring packet
   bursts quickly under control could result in multiple drops per
   flow and severely impact transport and application performance.
   Therefore, an AQM scheme SHOULD bring bursts under control by
   balancing both aspects -- (1) queuing delay spikes are minimized
   and (2) performance penalties for ongoing flows in terms of packet
   drops are minimized.

   An AQM scheme maintains short queues so as to leave the remaining
   space in the queue for bursts of packets.  The tolerance to bursts
   of packets depends on the number of packets in the queue, which is
   directly linked to the AQM algorithm.  Moreover, an AQM scheme may
   implement a feature controlling the maximum size of accepted
   bursts, which may depend on the buffer occupancy or on the
   currently estimated queuing delay.  Also, the impact of the buffer
   size on the burst allowance MAY be evaluated.

6.2.  Required tests

   For this scenario, the following traffic MUST be generated from
   sender A to receiver B:

   o  IW10: a 5 MB TCP transfer with the initial congestion window set
      to 10;

   o  Bursty video frames;

   o  Web traffic;

   o  Constant bit rate UDP traffic.

   Figure 3 presents the various cases for the traffic that MUST be
   generated between sender A and receiver B.
   +----+-----+----+----+--------------------+
   |Case|          Number of flows           |
   |    +-----+----+----+--------------------+
   |    |Video|Webs| CBR| Bulk Traffic (IW10)|
   +----+-----+----+----+--------------------+
   |I   |  0  |  1 |  1 |         0          |
   |II  |  0  |  1 |  1 |         1          |
   |III |  1  |  1 |  0 |         0          |
   |IV  |  1  |  1 |  1 |         0          |
   |V   |  1  |  1 |  1 |         1          |
   +----+-----+----+----+--------------------+

                 Figure 3: Bursty traffic scenarios

   For each of these scenarios, the graph described in Section 2.6
   could be generated.  In addition, other metrics such as end-to-end
   latency, jitter and flow completion time MUST be generated.

7.  Stability

7.1.  Motivation

   Network devices experience varying operating conditions depending
   on factors such as the time of day, the deployment scenario, etc.
   For example:

   o  Traffic and congestion levels are higher during peak hours than
      during off-peak hours.

   o  In the presence of a scheduler, a queue's draining rate may vary
      depending on the other queues: a low load on a high-priority
      queue implies a higher draining rate for the lower-priority
      queues.

   o  The available capacity at the physical layer may vary over time,
      such as in the context of lossy channels.

   When the target context is not a stable environment, the capability
   of an AQM scheme to actually maintain its control on the queuing
   delay and buffer occupancy is challenged.  This document proposes
   guidelines to assess the behaviour of AQM schemes under varying
   congestion levels and varying draining rates.

7.2.  Required tests

7.2.1.  Mild Congestion

   This scenario helps to evaluate how an AQM scheme reacts to a light
   load of incoming traffic resulting in mild congestion -- packet
   drop rates below 1%.
   Each long-lived, non-application-limited TCP flow transfers data.

   For this scenario, the graph described in Section 2.6 could be
   generated.

7.2.2.  Medium Congestion

   This scenario helps to evaluate how an AQM scheme reacts to
   incoming traffic resulting in medium congestion -- packet drop
   rates between 1% and 3%.  Each long-lived, non-application-limited
   TCP flow transfers data.

   For this scenario, the graph described in Section 2.6 could be
   generated.

7.2.3.  Heavy Congestion

   This scenario helps to evaluate how an AQM scheme reacts to
   incoming traffic resulting in heavy congestion -- packet drop rates
   between 5% and 10%.  Each long-lived, non-application-limited TCP
   flow transfers data.

   For this scenario, the graph described in Section 2.6 could be
   generated.

7.2.4.  Varying congestion levels

   This scenario helps to evaluate how an AQM scheme reacts to
   incoming traffic resulting in various levels of congestion during
   the experiment.  In this scenario, the congestion level varies on a
   large time scale.  The following phases may be considered: phase I
   - mild congestion during 0-5 s; phase II - medium congestion during
   5-10 s; phase III - heavy congestion during 10-15 s; phase I again,
   and so on.  Each long-lived, non-application-limited TCP flow
   transfers data.

   For this scenario, the graph described in Section 2.6 could be
   generated.  Moreover, one graph could be generated for each of the
   phases previously detailed.

7.2.5.  Varying Available Bandwidth

   This scenario helps to evaluate how an AQM scheme adapts to varying
   available bandwidth on the outgoing link.

   To simulate varying draining rates, the bottleneck bandwidth
   between nodes 'Router L' and 'Router R' varies over the course of
   the experiment as follows:

   o  Experiment 1: the capacity varies between two values on a large
      time scale.
      As an example, the following phases may be considered: phase I -
      100 Mbps during 0-5 s; phase II - 10 Mbps during 5-10 s; phase I
      again, and so on.

   o  Experiment 2: the capacity varies between two values on a short
      time scale.  As an example, the following phases may be
      considered: phase I - 100 Mbps during 100 ms; phase II - 10 Mbps
      during 100 ms; phase I again during 100 ms, and so on.

   More realistic fluctuating bandwidth patterns MAY be considered.

   The scenario consists of TCP New Reno flows between sender A and
   receiver B.  In order to better assess the impact of draining rates
   on the AQM behavior, the tester MUST compare its performance with
   that of a drop-tail queue.

   For this scenario, the graph described in Section 2.6 could be
   generated.  Moreover, one graph SHOULD be generated for each of the
   phases previously detailed.

7.3.  Parameter sensitivity and stability analysis

   An AQM scheme's control law is the primary means by which the AQM
   controls queuing delay.  Hence, understanding the AQM control law
   is critical to understanding AQM behavior.  The AQM's control law
   may include several input parameters whose values affect the AQM's
   output behavior and stability.  Additionally, AQM schemes may auto-
   tune parameter values in order to maintain stability under
   different network conditions (such as different congestion levels,
   draining rates or network environments).  The stability of these
   auto-tuning techniques is also important to understand.

   AQM proposals SHOULD provide background material showing a control-
   theoretic analysis of the AQM control law and of the input
   parameter space within which the control law operates as expected,
   or could use other ways to discuss its stability.  For parameters
   that are auto-tuned, the material SHOULD include a stability
   analysis of the auto-tuning mechanism(s) as well.
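As a purely illustrative numerical sketch of such a stability check -- not taken from any specific AQM proposal -- the following drives a fluid queue model with a PI-style drop-probability law (loosely PIE-like) and tests whether the queuing delay settles at its target after a perturbation. All constants (gains, capacity, load, target delay) are assumptions chosen for the example.

```python
# Illustrative stability check for a PI-style AQM control law.
# The controller form and all constants are assumptions, not the
# definition of any standardized AQM scheme.

def simulate_pi_aqm(alpha, beta, target_delay=0.015, capacity=1e4,
                    load=1.2e4, steps=2000, dt=0.01):
    """Return the queuing-delay trajectory (seconds) of a fluid queue
    controlled by: p += alpha*(delay - target) + beta*(delay - old)."""
    p = 1.0 - capacity / load              # equilibrium drop probability
    queue = 1.5 * target_delay * capacity  # perturbed initial backlog (pkts)
    old_delay = queue / capacity
    delays = []
    for _ in range(steps):
        # Fluid model: offered load thinned by the drop probability.
        queue = max(0.0, queue + (load * (1.0 - p) - capacity) * dt)
        delay = queue / capacity
        p = min(1.0, max(0.0, p + alpha * (delay - target_delay)
                                + beta * (delay - old_delay)))
        old_delay = delay
        delays.append(delay)
    return delays

def is_stable(delays, target=0.015):
    """Crude empirical check: in the second half of the run, the delay
    has settled near the target with little residual oscillation."""
    tail = delays[len(delays) // 2:]
    mean = sum(tail) / len(tail)
    spread = max(tail) - min(tail)
    return abs(mean - target) < 0.5 * target and spread < 0.5 * target
```

With moderate gains (e.g., alpha=0.125, beta=1.25) the perturbation decays back to the 15 ms target, while grossly oversized gains drive the loop into a limit cycle away from the target. A real analysis would sweep the full gain and parameter space and account for RTT feedback from the transport protocol.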
   Such analysis helps to better understand an AQM's control law and
   the network conditions/deployments under which the AQM is stable.

8.  Implementation cost

8.1.  Motivation

   An AQM's successful deployment is directly related to its ease of
   implementation.  Network devices may need hardware or software
   implementations of the AQM.  Depending on a device's capabilities
   and limitations, the device may or may not be able to implement
   some or all parts of the AQM logic.

   AQM proposals SHOULD provide pseudo-code for the complete AQM
   scheme, highlighting generic implementation-specific aspects of the
   scheme such as "drop-tail" vs. "drop-head", inputs (current queuing
   delay, queue length), the computations involved, the need for
   timers, etc.  This helps identify the costs associated with
   implementing the AQM on a particular hardware or software device.
   Also, it helps the WG understand which kinds of devices can easily
   support the AQM and which cannot.

8.2.  Required discussion

   AQM proposals SHOULD highlight the parts of the AQM logic that are
   device dependent and discuss if and how the AQM behavior could be
   impacted by the device.  For example, a queue-delay based AQM
   scheme requires the current queuing delay as input from the device.
   If the device already maintains this value, then it is trivial to
   implement the AQM logic on the device.  On the other hand, if the
   device only provides indirect means to estimate the queuing delay
   (for example, timestamps or the dequeuing rate), then the AQM
   behavior is sensitive to how accurate the queuing delay estimate
   turns out to be on that device.  Highlighting the AQM's sensitivity
   to the queuing delay estimate helps implementers identify optimal
   means of implementing the AQM on a device.

9.
Operator control knobs and auto-tuning

   One of the biggest hurdles for RED deployment was, and still is,
   its parameter sensitivity to operating conditions -- how difficult
   it is to tune the important RED parameters for a deployment in
   order to get the maximum benefit from the RED implementation.
   Fluctuating congestion levels and network conditions add to the
   complexity.  Incorrect parameter values lead to poor performance.
   This is one reason why RED is reported to be usually turned off.

   Any AQM scheme is likely to have parameters whose values affect the
   AQM's control law and behavior.  Exposing all these parameters as
   control knobs to a network operator (or user) can easily result in
   an unsafe AQM deployment: unexpected AQM behavior ensues when
   parameter values are not set properly.  A minimal number of control
   knobs minimizes the number of ways a possibly naive user can break
   the AQM system.  Fewer control knobs also make the AQM scheme more
   user-friendly and easier to deploy and debug.

   An AQM scheme SHOULD therefore minimize the number of control knobs
   exposed for operator tuning.  An AQM scheme SHOULD expose only
   those knobs that control the macroscopic AQM behavior, such as the
   queue delay threshold or the queue length threshold.

   Additionally, an AQM scheme's safety is directly related to its
   stability under varying operating conditions, such as varying
   traffic profiles and fluctuating network conditions, as described
   in Section 7.  Operating conditions vary often, and hence the AQM
   MUST remain stable under these conditions without the need for
   additional external tuning.  If AQM parameters require tuning under
   these conditions, then the AQM MUST self-adapt the necessary
   parameter values by employing auto-tuning techniques.

10.  Interaction with ECN

10.1.
Motivation

   Apart from packet drops, Explicit Congestion Notification (ECN) is
   an alternative means to signal data senders about network
   congestion.  The AQM recommendation document
   [I-D.ietf-aqm-recommendation] describes some of the benefits of
   using ECN with AQM.

10.2.  Required discussion

   An AQM scheme MAY support ECN, in which case testers MUST discuss
   and describe the support of ECN.

11.  Interaction with scheduling

11.1.  Motivation

   Coupled with an AQM scheme, a router may schedule the transmission
   of packets in a specific manner by introducing a scheduling scheme.
   This algorithm may create sub-queues and integrate a dropping
   policy on each of these sub-queues.  Another scheduling policy may
   modify the way packets are sequenced, modifying the timestamp of
   each packet.

11.2.  Required discussion

   The scheduling and the AQM jointly impact the end-to-end
   performance.  During the characterization process of a dropping
   policy, the tester MAY discuss the feasibility of adding scheduling
   on top of the algorithm.  This discussion MAY detail whether the
   dropping policy is applied while packets are enqueued or dequeued.

12.  Discussion on methodology, metrics, AQM comparisons and packet
     sizes

12.1.  Methodology

   A sufficiently detailed description of the test setup SHOULD be
   provided, as that would allow others to replicate the tests if
   needed.  This test setup MAY include software and hardware
   versions.  The tester MAY make their data available.

   The proposals SHOULD be evaluated on real systems, or they MAY be
   evaluated with event-driven simulations (such as NS-2, NS-3, OMNET,
   etc.).  The proposed scenarios are not bound to a particular
   evaluation toolset.

12.2.  Comments on metrics measurement

   This document presents the end-to-end metrics that SHOULD be
   measured to evaluate the trade-off between latency and goodput.
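As a purely illustrative sketch, the end-to-end metrics discussed in this document (goodput, latency, jitter, flow completion time) could be derived from a receiver-side packet trace along the following lines. The (send_time, recv_time, bytes) record format and the jitter definition used here are assumptions for the example, not part of these guidelines.

```python
# Illustrative derivation of end-to-end metrics from a packet trace.
# The trace record format is a hypothetical assumption.

def end_to_end_metrics(records):
    """records: list of (send_time_s, recv_time_s, payload_bytes), one
    entry per packet delivered to the receiver.  Retransmitted payload
    is assumed to have been excluded already, so the byte sum reflects
    goodput rather than throughput."""
    records = sorted(records, key=lambda r: r[1])   # order by arrival
    duration = max(r[1] for r in records) - min(r[0] for r in records)
    delays = [recv - send for send, recv, _ in records]
    # Jitter taken here as the mean absolute delay variation between
    # consecutive packets (one of several common definitions).
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
              / max(1, len(delays) - 1))
    return {
        "goodput_bps": 8.0 * sum(r[2] for r in records) / duration,
        "mean_latency_s": sum(delays) / len(delays),
        "jitter_s": jitter,
        "flow_completion_time_s": duration,
    }
```

In practice, how these quantities are captured (kernel counters, pcap traces, simulator logs) depends on the evaluation toolset, which is why the guidelines leave the measurement method open.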
   The queue-related metrics enable a better understanding of the AQM
   behavior under test and of the impact of its internal parameters.
   Whenever possible, these guidelines advise considering queue-
   related metrics, such as link utilization, queuing delay, queue
   size or packet loss.

   These guidelines do not detail how the metrics should be measured,
   since that depends highly on the evaluation toolset.

12.3.  Comparing AQM schemes

   This memo recognizes that the guidelines mentioned above may be
   used for comparing AQM schemes.  This memo recommends that AQM
   schemes MUST be compared against both the performance and the
   deployment categories.  In addition, this section details how best
   to achieve a fair comparison of AQM schemes by avoiding certain
   pitfalls.

12.3.1.  Performance comparison

   AQM schemes MUST be compared against all the generic scenarios
   presented in this memo.  AQM schemes MAY be compared for specific
   network environments such as data centers, home networks, etc.  If
   an AQM scheme's parameter(s) were externally tuned for optimization
   or other purposes, these values MUST be disclosed.

   Note that AQM schemes belong to different varieties, such as queue-
   length based schemes (e.g., RED) or queue-delay based schemes
   (e.g., CoDel, PIE).  Also, AQM schemes expose different control
   knobs associated with different semantics.  For example, while both
   PIE and CoDel are queue-delay based schemes and each exposes a knob
   to control the queueing delay -- PIE's "queueing delay reference"
   vs. CoDel's "queueing delay target" -- the two schemes' knobs have
   different semantics, resulting in different control points.  Such
   differences in AQM schemes can be easily overlooked while making
   comparisons.

   This document recommends the following procedures for a fair
   performance comparison of two AQM schemes:

   1.
      comparable control parameters and comparable input values:
      carefully identify the set of parameters that control similar
      behavior between the two AQM schemes and ensure that these
      parameters have comparable input values.  For example, while
      comparing how well a queue-length based AQM X controls queueing
      delay vs. a queue-delay based AQM Y, identify the two schemes'
      parameters that control queue delay and ensure that their input
      values are comparable.  Similarly, to compare two AQM schemes on
      how well they accommodate bursts, identify the burst-related
      control parameters and ensure that they are configured with
      similar values.

   2.  compare over a range of input configurations: there could be
       situations where the set of control parameters that affect a
       specific behavior have different semantics between the two AQM
       schemes.  As mentioned above, PIE's knob to control queue delay
       has different semantics from CoDel's.  In such situations, the
       schemes MUST be compared over a range of input configurations.
       For example, compare PIE vs. CoDel over a range of delay input
       configurations -- 5 ms, 10 ms, 15 ms, etc.

12.3.2.  Deployment comparison

   AQM schemes MUST be compared against deployment criteria such as
   parameter sensitivity (Section 7.3), auto-tuning (Section 9) and
   implementation cost (Section 8).

12.4.  Packet sizes and congestion notification

   An AQM scheme may consider packet sizes while generating congestion
   signals.  [RFC7141] discusses the motivations behind doing so.  For
   example, control packets such as DNS requests/responses and TCP
   SYNs/ACKs are small, and their loss can severely impact application
   performance.  An AQM scheme may therefore be biased towards small
   packets by dropping them with a smaller probability than larger
   packets.  However, such an AQM scheme is unfair to data senders
   generating larger packets.
   Data senders, malicious or otherwise, are motivated to take
   advantage of such an AQM scheme by transmitting smaller packets,
   which could result in unsafe deployments and unhealthy transport
   and/or application designs.

   An AQM scheme SHOULD adhere to the recommendations outlined in
   [RFC7141], and SHOULD NOT provide an undue advantage to flows with
   smaller packets.

13.  Acknowledgements

   This work has been partially supported by the European Community
   under its Seventh Framework Programme through the Reducing Internet
   Transport Latency (RITE) project (ICT-317700).

14.  Contributors

   Many thanks to S. Akhtar, A.B. Bagayoko, F. Baker, D. Collier-
   Brown, G. Fairhurst, T. Hoiland-Jorgensen, C. Kulatunga, W.
   Lautenschlager, R. Pan, D. Taht and M. Welzl for detailed and wise
   feedback on this document.

15.  IANA Considerations

   This memo includes no request to IANA.

16.  Security Considerations

   This document, by itself, presents no new privacy or security
   issues.

17.  References

17.1.  Normative References

   [I-D.ietf-aqm-recommendation]
              Baker, F. and G. Fairhurst, "IETF Recommendations
              Regarding Active Queue Management", draft-ietf-aqm-
              recommendation-01 (work in progress), January 2014.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", RFC 2119, March 1997.

   [RFC7141]  Briscoe, B. and J. Manner, "Byte and Packet Congestion
              Notification", RFC 7141, February 2014.

17.2.  Informative References

   [BB2011]   "BufferBloat: what's wrong with the internet?", ACM
              Queue vol. 9, 2011.

   [CODEL]    Nichols, K. and V. Jacobson, "Controlling Queue Delay",
              ACM Queue, 2012.

   [DOCSIS2013]
              White, G. and D. Rice, "Active Queue Management
              Algorithms for DOCSIS 3.0", Technical report - Cable
              Television Laboratories, 2013.

   [LOSS-SYNCH-MET-08]
              Hassayoun, S. and D.
              Ros, "Loss Synchronization and
              Router Buffer Sizing with High-Speed Versions of TCP",
              IEEE INFOCOM Workshops, 2008.

   [PIE]      Pan, R., Natarajan, P., Piglione, C., Prabhu, MS.,
              Subramanian, V., Baker, F., and B. VerSteeg, "PIE: A
              lightweight control scheme to address the bufferbloat
              problem", IEEE HPSR, 2013.

   [RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B.,
              Deering, S., Estrin, D., Floyd, S., Jacobson, V.,
              Minshall, G., Partridge, C., Peterson, L., Ramakrishnan,
              K., Shenker, S., Wroclawski, J., and L. Zhang,
              "Recommendations on Queue Management and Congestion
              Avoidance in the Internet", RFC 2309, April 1998.

   [RFC5348]  Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP
              Friendly Rate Control (TFRC): Protocol Specification",
              RFC 5348, September 2008.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, September 2009.

   [RFC6297]  Welzl, M. and D. Ros, "A Survey of Lower-than-Best-
              Effort Transport Protocols", RFC 6297, June 2011.

   [TCPEVAL2013]
              Hayes, D., Ros, D., Andrew, L., and S. Floyd, "Common
              TCP Evaluation Suite", IRTF ICCRG, 2013.

   [YU06]     Jay, P., Fu, Q., and G. Armitage, "A preliminary
              analysis of loss synchronisation between concurrent TCP
              flows", Australian Telecommunication Networks and
              Application Conference (ATNAC), 2006.

Authors' Addresses

   Nicolas Kuhn (editor)
   Telecom Bretagne
   2 rue de la Chataigneraie
   Cesson-Sevigne 35510
   France

   Phone: +33 2 99 12 70 46
   Email: nicolas.kuhn@telecom-bretagne.eu

   Preethi Natarajan (editor)
   Cisco Systems
   510 McCarthy Blvd
   Milpitas, California
   United States

   Email: prenatar@cisco.com

   David Ros
   Simula Research Laboratory AS
   P.O.
   Box 134
   Lysaker, 1325
   Norway

   Phone: +33 299 25 21 21
   Email: dros@simula.no

   Naeem Khademi
   University of Oslo
   Department of Informatics, PO Box 1080 Blindern
   N-0316 Oslo
   Norway

   Phone: +47 2285 24 93
   Email: naeemk@ifi.uio.no