Internet Engineering Task Force                            N. Kuhn, Ed.
Internet-Draft                                   CNES, Telecom Bretagne
Intended status: Informational                        P. Natarajan, Ed.
Expires: December 16, 2016                                Cisco Systems
                                                        N. Khademi, Ed.
                                                      University of Oslo
                                                                  D. Ros
                                          Simula Research Laboratory AS
                                                           June 14, 2016

                     AQM Characterization Guidelines
                   draft-ietf-aqm-eval-guidelines-13

Abstract

   Unmanaged large buffers in today's networks have given rise to a slew
   of performance issues.  These performance issues can be addressed by
   some form of Active Queue Management (AQM) mechanism, optionally in
   combination with a packet scheduling scheme such as fair queuing.
   This document describes various criteria for performing
   characterizations of AQM schemes that can be used in lab testing
   during development, prior to deployment.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 16, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Reducing the latency and maximizing the goodput
     1.2.  Goals of this document
     1.3.  Requirements Language
     1.4.  Glossary
   2.  End-to-end metrics
     2.1.  Flow completion time
     2.2.  Flow start up time
     2.3.  Packet loss
     2.4.  Packet loss synchronization
     2.5.  Goodput
     2.6.  Latency and jitter
     2.7.  Discussion on the trade-off between latency and goodput
   3.  Generic setup for evaluations
     3.1.  Topology and notations
     3.2.  Buffer size
     3.3.  Congestion controls
   4.  Methodology, Metrics, AQM Comparisons, Packet Sizes, Scheduling
       and ECN
     4.1.  Methodology
     4.2.  Comments on metrics measurement
     4.3.  Comparing AQM schemes
       4.3.1.  Performance comparison
       4.3.2.  Deployment comparison
     4.4.  Packet sizes and congestion notification
     4.5.  Interaction with ECN
     4.6.  Interaction with Scheduling
   5.  Transport Protocols
     5.1.  TCP-friendly sender
       5.1.1.  TCP-friendly sender with the same initial congestion
               window
       5.1.2.  TCP-friendly sender with different initial congestion
               windows
     5.2.  Aggressive transport sender
     5.3.  Unresponsive transport sender
     5.4.  Less-than Best Effort transport sender
   6.  Round Trip Time Fairness
     6.1.  Motivation
     6.2.  Recommended tests
     6.3.  Metrics to evaluate the RTT fairness
   7.  Burst Absorption
     7.1.  Motivation
     7.2.  Recommended tests
   8.  Stability
     8.1.  Motivation
     8.2.  Recommended tests
       8.2.1.  Definition of the congestion level
       8.2.2.  Mild congestion
       8.2.3.  Medium congestion
       8.2.4.  Heavy congestion
       8.2.5.  Varying the congestion level
       8.2.6.  Varying available capacity
     8.3.  Parameter sensitivity and stability analysis
   9.  Various Traffic Profiles
     9.1.  Traffic mix
     9.2.  Bi-directional traffic
   10. Example of multi-AQM scenario
     10.1.  Motivation
     10.2.  Details on the evaluation scenario
   11. Implementation cost
     11.1.  Motivation
     11.2.  Recommended discussion
   12. Operator Control and Auto-tuning
     12.1.  Motivation
     12.2.  Recommended discussion
   13. Summary
   14. Acknowledgements
   15. IANA Considerations
   16. Security Considerations
   17. References
     17.1.  Normative References
     17.2.  Informative References
   Authors' Addresses

1.  Introduction

   Active Queue Management (AQM) addresses the concerns arising from
   using unnecessarily large and unmanaged buffers to improve network
   and application performance, such as those presented in Section 1.2
   of the AQM recommendations document [RFC7567].  Several AQM
   algorithms have been proposed in the past years, most notably Random
   Early Detection (RED) [FLOY1993], BLUE [FENG2002], and the
   Proportional Integral controller (PI) [HOLLO2001], and more recently
   CoDel [I-D.ietf-aqm-codel] and PIE [I-D.ietf-aqm-pie].
   In general, these
   algorithms actively interact with the Transmission Control Protocol
   (TCP) and any other transport protocol that deploys a congestion
   control scheme to manage the amount of data it keeps in the network.
   The available buffer space in routers and switches should be large
   enough to accommodate the short-term buffering requirements.  AQM
   schemes aim at reducing buffer occupancy, and therefore the end-to-
   end delay.  Some of these algorithms, notably RED, have also been
   widely implemented in some network devices.  However, the potential
   benefits of the RED scheme have not been realized, since RED is
   reported to be usually turned off.

   A buffer is a physical volume of memory in which a queue or set of
   queues are stored.  When speaking of a specific queue in this
   document, "buffer occupancy" refers to the amount of data (measured
   in bytes or packets) that is in the queue, and the "maximum buffer
   size" refers to the maximum buffer occupancy.  In switches and
   routers, a global memory space is often shared between the available
   interfaces, and thus the maximum buffer size for any given interface
   may vary over time.

   Bufferbloat [BB2011] is the consequence of deploying large unmanaged
   buffers on the Internet -- the buffering has often been measured to
   be ten or a hundred times larger than needed.  Large buffer sizes in
   combination with TCP and/or unresponsive flows increase end-to-end
   delay.  This results in poor performance for latency-sensitive
   applications such as real-time multimedia (e.g., voice, video,
   gaming).  The degree to which this affects modern networking
   equipment, especially consumer-grade equipment, produces problems
   even with commonly used web services.  Active queue management is
   thus essential to control queuing delay and decrease network latency.
   The Active Queue Management and Packet Scheduling Working Group (AQM
   WG) was chartered to address the problems with large unmanaged
   buffers in the Internet.  Specifically, the AQM WG is tasked with
   standardizing AQM schemes that not only address concerns with such
   buffers, but also are robust under a wide variety of operating
   conditions.  This document provides characterization guidelines that
   can be used to assess the applicability, performance and
   deployability of an AQM, whether or not it is a candidate for
   standardization at the IETF.

   The AQM algorithm implemented in a router can be separated from the
   scheduling of packets sent out by the router, as discussed in the AQM
   recommendations document [RFC7567].  The rest of this memo refers to
   the AQM as a dropping/marking policy, as a feature separate from any
   interface scheduling scheme.  This document may be complemented with
   another one on guidelines for assessing combinations of packet
   scheduling and AQM.  We note that such a document will inherit all
   the guidelines from this document, plus any additional scenarios
   relevant for packet scheduling, such as flow starvation evaluation or
   the impact of the number of hash buckets.

1.1.  Reducing the latency and maximizing the goodput

   The trade-off between reducing the latency and maximizing the goodput
   is intrinsically linked to each AQM scheme and is key to evaluating
   its performance.  To ensure the safe deployment of an AQM, its
   behaviour should be assessed in a variety of scenarios.  Whenever
   possible, solutions ought to aim at both maximizing goodput and
   minimizing latency.

1.2.  Goals of this document

   This document recommends a generic list of scenarios against which an
   AQM proposal should be evaluated, considering both potential
   performance gain and safety of deployment.  The guidelines help to
   quantify the performance of AQM schemes in terms of latency
   reduction, goodput maximization and the trade-off between these two.
   The document presents central aspects of an AQM algorithm that should
   be considered whatever the context, such as burst absorption
   capacity, RTT fairness or resilience to fluctuating network
   conditions.  The guidelines also discuss methods to understand the
   various aspects associated with safely deploying and operating the
   AQM scheme.  Thus, one of the key objectives behind formulating the
   guidelines is to help ascertain whether a specific AQM is not only
   better than drop-tail (i.e., without AQM and with a BDP-sized buffer)
   but also safe to deploy; the guidelines can be used to compare
   several AQM proposals with each other, but should also be used to
   compare a proposal with drop-tail.

   This memo details generic characterization scenarios against which
   any AQM proposal should be evaluated, irrespective of whether or not
   an AQM is standardized by the IETF.  This document recommends the
   relevant scenarios and metrics to be considered.

   These guidelines do not define and are not bound to a particular
   deployment scenario or evaluation toolset.  Instead, the guidelines
   can be used to assess the potential gain of introducing an AQM in the
   particular environment that is of interest to the testers.  These
   guidelines do not cover every possible aspect of a particular
   algorithm, and they do not present context-dependent scenarios (such
   as 802.11 WLANs, data centers or rural broadband networks).  To keep
   the guidelines generic, a number of potential router components and
   algorithms (such as DiffServ) are omitted.
   The goals of this document can thus be summarized as follows:

   o  The present characterization guidelines provide a non-exhaustive
      list of scenarios to help ascertain whether an AQM is not only
      better than drop-tail (with a BDP-sized buffer), but also safe to
      deploy; the guidelines can also be used to compare several AQM
      proposals with each other.

   o  The present characterization guidelines (1) are not bound to a
      particular evaluation toolset and (2) can be used for various
      deployment contexts; testers are free to select a toolset that is
      best suited for the environment in which their proposal will be
      deployed.

   o  The present characterization guidelines are intended to provide
      guidance for better selecting an AQM for a specific environment;
      it is not required that an AQM proposal be evaluated following
      these guidelines for its standardization.

1.3.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

1.4.  Glossary

   o  application-limited traffic: a type of traffic that does not have
      an unlimited amount of data to transmit.

   o  AQM: the Active Queue Management (AQM) algorithm implemented in a
      router can be separated from the scheduling of packets sent by the
      router.  The rest of this memo refers to the AQM as a dropping/
      marking policy, as a feature separate from any interface
      scheduling scheme [RFC7567].

   o  BDP: Bandwidth-Delay Product.

   o  buffer: a physical volume of memory in which a queue or set of
      queues are stored.

   o  buffer occupancy: amount of data stored in a buffer, measured in
      bytes or packets.

   o  buffer size: maximum buffer occupancy, that is, the maximum amount
      of data that may be stored in a buffer, measured in bytes or
      packets.

   o  IW10: TCP initial congestion window set to 10 packets.

   o  latency: one-way delay of packets across Internet paths.  This
      definition suits a transport-layer view of latency, which shall
      not be confused with an application-layer view of latency.

   o  goodput: the number of bits per unit of time forwarded to the
      correct destination, minus any bits lost or retransmitted
      [RFC2647].  The goodput should be determined for each flow and not
      for aggregates of flows.

   o  SQRT: the square root function.

   o  ROUND: the round function.

2.  End-to-end metrics

   End-to-end delay is the result of propagation delay, serialization
   delay, service delay in a switch, medium-access delay and queuing
   delay, summed over the network elements along the path.  AQM schemes
   may reduce the queuing delay by providing signals to the sender on
   the emergence of congestion, but any impact on the goodput must be
   carefully considered.  This section presents the metrics that could
   be used to better quantify (1) the reduction of latency, (2) the
   maximization of goodput and (3) the trade-off between these two.
   This section provides normative requirements for metrics that can be
   used to assess the performance of an AQM scheme.

   Some metrics listed in this section are not suited to every type of
   traffic detailed in the rest of this document.  It is therefore not
   necessary to measure all of the following metrics: the chosen metric
   may not be relevant to the context of the evaluation scenario (e.g.,
   latency vs. goodput trade-off in application-limited traffic
   scenarios).  Guidance is provided for each metric.

2.1.  Flow completion time

   The flow completion time is an important performance metric for the
   end user when the flow size is finite.  The definition of the flow
   size may be a source of ambiguity; thus, this metric can consider a
   flow to be a single file.
   Considering the fact that an AQM scheme may
   drop/mark packets, the flow completion time is directly linked to the
   dropping/marking policy of the AQM scheme.  This metric helps to
   better assess the performance of an AQM depending on the flow size.
   The Flow Completion Time (FCT) is related to the flow size (Fs) and
   the goodput for the flow (G) as follows:

      FCT [s] = Fs [Byte] / ( G [Bit/s] / 8 [Bit/Byte] )

   where the flow size is the size of the transport-layer payload in
   bytes and the goodput is the transport-layer payload transfer rate
   (described in Section 2.5).

   If this metric is used to evaluate the performance of web transfers,
   it is suggested to rather consider the time needed to download all
   the objects that compose the web page, as this makes more sense in
   terms of user experience than assessing the time needed to download
   each object.

2.2.  Flow start up time

   The flow start up time is the time between when the request is sent
   by the client and when the server starts to transmit data.  The
   number of packets dropped by an AQM may seriously affect the waiting
   period during which the data transfer has not started.  This metric
   would specifically focus on operations such as DNS lookups, TCP opens
   and SSL handshakes.

2.3.  Packet loss

   Packet loss can occur en route; this can impact the end-to-end
   performance measured at the receiver.

   The tester should evaluate the loss experienced at the receiver using
   one of the two following metrics:

   o  the packet loss ratio: this metric is to be frequently measured
      during the experiment.  The long-term loss ratio is of interest
      for steady-state scenarios only;

   o  the interval between consecutive losses: the time between two
      losses is to be measured.

   The packet loss ratio can be assessed by simply evaluating the loss
   ratio as a function of the number of lost packets and the total
   number of packets sent.
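   As an illustration, both loss metrics can be derived from simple per-
   packet accounting; the following minimal sketch assumes a
   hypothetical record format (sequence number, send time, received
   flag) that is not prescribed by these guidelines:

```python
# Hypothetical sketch: derive the packet loss ratio and the intervals
# between consecutive losses (gaps) from per-packet records.  The input
# format -- (sequence number, send time, received flag) -- is an
# assumption made for this example, not part of the guidelines.

def loss_metrics(records):
    """records: list of (seq, send_time, received) tuples."""
    sent = len(records)
    losses = sorted(t for (_, t, received) in records if not received)
    loss_ratio = len(losses) / sent if sent else 0.0
    # Gap metric: time between two consecutive losses [RFC3611].
    gaps = [t2 - t1 for t1, t2 in zip(losses, losses[1:])]
    return loss_ratio, gaps

# Four packets sent, the 2nd and 4th lost: ratio 0.5, one gap of ~0.2 s.
ratio, gaps = loss_metrics([(1, 0.0, True), (2, 0.1, False),
                            (3, 0.2, True), (4, 0.3, False)])
```

   The same records can be reused to measure the ratio frequently during
   the experiment, as recommended above, by restricting them to a time
   window.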
   This might not be easily done in laboratory
   testing, for which these guidelines advise the tester:

   o  to check that, for every packet sent, a corresponding packet was
      received within a reasonable time, as presented in the document
      that proposes a metric for one-way packet loss across Internet
      paths [RFC2680];

   o  to keep a count of all packets sent, and a count of the non-
      duplicate packets received, as discussed in the RFC that presents
      a benchmarking methodology [RFC2544].

   The interval between consecutive losses, which is also called a gap,
   is a metric of interest for VoIP traffic [RFC3611].

2.4.  Packet loss synchronization

   One goal of an AQM algorithm is to help to avoid global
   synchronization of flows sharing a bottleneck buffer on which the AQM
   operates ([RFC2309], [RFC7567]).  The "degree" of packet-loss
   synchronization between flows should be assessed, with and without
   the AQM under consideration.

   Loss synchronization among flows may be quantified by several
   slightly different metrics that capture different aspects of the same
   issue [HASS2008].  However, in real-world measurements the choice of
   metric could be imposed by practical considerations -- e.g., whether
   fine-grained information on packet losses at the bottleneck is
   available or not.  For the purpose of AQM characterization, a good
   candidate metric is the global synchronization ratio, measuring the
   proportion of flows losing packets during a loss event.  This metric
   can be used in real-world experiments to characterize synchronization
   along arbitrary Internet paths [JAY2006].

   If an AQM scheme is evaluated using real-life network environments,
   it is worth pointing out that some network events, such as failed
   link restoration, may cause synchronized losses between active flows
   and thus confuse the meaning of this metric.

2.5.  Goodput

   The goodput has been defined as the number of bits per unit of time
   forwarded to the correct destination interface, minus any bits lost
   or retransmitted, as proposed in Section 3.17 of the RFC describing
   the benchmarking terminology for firewall performance [RFC2647].
   This definition requires that the test setup be qualified to assure
   that it is not generating losses on its own.

   Measuring the end-to-end goodput provides an appreciation of how well
   an AQM scheme improves transport and application performance.  The
   measured end-to-end goodput is linked to the dropping/marking policy
   of the AQM scheme -- e.g., the fewer the packet drops, the fewer
   packets need retransmission, minimizing the impact of AQM on
   transport and application performance.  Additionally, an AQM scheme
   may resort to Explicit Congestion Notification (ECN) marking as an
   initial means to control delay.  Again, marking packets instead of
   dropping them reduces the number of packet retransmissions and
   increases goodput.  End-to-end goodput values help to evaluate the
   effectiveness of an AQM scheme in minimizing the packet drops that
   impact application performance, and to estimate how well the AQM
   scheme works with ECN.

   The measurement of the goodput allows the tester to evaluate to what
   extent an AQM is able to maintain a high bottleneck utilization.
   This metric should also be obtained frequently during an experiment,
   as the long-term goodput is relevant for steady-state scenarios only
   and may not necessarily reflect how the introduction of an AQM
   actually impacts the link utilization during a certain period of
   time.
   Fluctuations in the values obtained from these measurements
   may depend on factors other than the introduction of an AQM, such as
   link-layer losses due to external noise or corruption, fluctuating
   bandwidths (802.11 WLANs), heavy congestion levels or the transport
   layer's rate reduction by its congestion control mechanism.

2.6.  Latency and jitter

   The latency, or the one-way delay metric, is discussed in [RFC2679].
   There is a consensus on an adequate metric for the jitter, which
   represents the one-way delay variations for packets from the same
   flow: the Packet Delay Variation (PDV) serves all use cases well
   [RFC5481].

   The end-to-end latency includes components other than just the
   queuing delay, such as the signal processing delay, transmission
   delay and processing delay.  Moreover, jitter is caused by variations
   in queuing and processing delay (e.g., scheduling effects).  The
   introduction of an AQM scheme would impact end-to-end latency and
   jitter, and therefore these metrics should be considered in the end-
   to-end evaluation of performance.

2.7.  Discussion on the trade-off between latency and goodput

   The metrics presented in this section may be considered in order to
   discuss and quantify the trade-off between latency and goodput.

   With regard to the goodput, and in addition to the long-term
   stationary goodput value, it is recommended to take measurements
   every multiple of the minimum RTT (minRTT) between A and B.  It is
   suggested to take measurements at least every K x minRTT (to smooth
   out the fluctuations), with K=10.  Higher values for K can be
   considered whenever it is more appropriate for the presentation of
   the results, since the value for K may depend on the network path's
   characteristics.
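   The K x minRTT sampling recommended above can be sketched as follows;
   the cumulative byte counters and all numeric values are hypothetical
   inputs chosen for illustration:

```python
# Hypothetical sketch of the sampling recommended above: goodput is
# computed over windows of K x minRTT (K = 10 by default).  The input,
# cumulative transport-payload byte counts read once every minRTT, is
# an assumption made for this example.

def goodput_samples(delivered_bytes, min_rtt, k=10):
    """Return one goodput value (bit/s) per window of k * min_rtt."""
    window = k * min_rtt
    samples = []
    for i in range(k, len(delivered_bytes), k):
        delta = delivered_bytes[i] - delivered_bytes[i - k]  # bytes
        samples.append(delta * 8 / window)                   # -> bit/s
    return samples

# 21 readings taken every minRTT = 50 ms, 125 kB delivered per minRTT:
# two windows, each showing a goodput of 20 Mbit/s.
samples = goodput_samples([i * 125_000 for i in range(21)], 0.05)
```

   The resulting per-window samples are the values from which the long-
   term average and the distribution discussed below can be derived.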
The measurement period must be disclosed for each 478 experiment and when results/values are compared across different AQM 479 schemes, the comparisons should use exactly the same measurement 480 periods. With regards to latency, it is recommended to take the 481 samples on per-packet basis whenever possible depending on the 482 features provided by hardware/software and the impact of sampling 483 itself on the hardware performance. 485 From each of these sets of measurements, the cumulative density 486 function (CDF) of the considered metrics should be computed. If the 487 considered scenario introduces dynamically varying parameters, 488 temporal evolution of the metrics could also be generated. For each 489 scenario, the following graph may be generated: the x-axis shows 490 queuing delay (that is the average per-packet delay in excess of 491 minimum RTT), the y-axis the goodput. Ellipses are computed such as 492 detailed in [WINS2014]: "We take each individual [...] run [...] as 493 one point, and then compute the 1-epsilon elliptic contour of the 494 maximum-likelihood 2D Gaussian distribution that explains the points. 495 [...] we plot the median per-sender throughput and queueing delay as 496 a circle. [...] The orientation of an ellipse represents the 497 covariance between the throughput and delay measured for the 498 protocol." This graph provides part of a better understanding of (1) 499 the delay/goodput trade-off for a given congestion control mechanism 500 (Section 5), and (2) how the goodput and average queue delay vary as 501 a function of the traffic load (Section 8.2). 503 3. Generic setup for evaluations 505 This section presents the topology that can be used for each of the 506 following scenarios, the corresponding notations and discusses 507 various assumptions that have been made in the document. 509 3.1. 
Topology and notations 510 +--------------+ +--------------+ 511 |sender A_i | |receive B_i | 512 |--------------| |--------------| 513 | SEN.Flow1.1 +---------+ +-----------+ REC.Flow1.1 | 514 | + | | | | + | 515 | | | | | | | | 516 | + | | | | + | 517 | SEN.Flow1.X +-----+ | | +--------+ REC.Flow1.X | 518 +--------------+ | | | | +--------------+ 519 + +-+---+---+ +--+--+---+ + 520 | |Router L | |Router R | | 521 | |---------| |---------| | 522 | | AQM | | | | 523 | | BuffSize| | BuffSize| | 524 | | (Bsize) +-----+ (Bsize) | | 525 | +-----+--++ ++-+------+ | 526 + | | | | + 527 +--------------+ | | | | +--------------+ 528 |sender A_n | | | | | |receive B_n | 529 |--------------| | | | | |--------------| 530 | SEN.FlowN.1 +---------+ | | +-----------+ REC.FlowN.1 | 531 | + | | | | + | 532 | | | | | | | | 533 | + | | | | + | 534 | SEN.FlowN.Y +------------+ +-------------+ REC.FlowN.Y | 535 +--------------+ +--------------+ 537 Figure 1: Topology and notations 539 Figure 1 is a generic topology where: 541 o traffic profile is a set of flows with similar characteristics - 542 RTT, congestion control scheme, transport protocol, etc.; 544 o senders with different traffic characteristics (i.e., traffic 545 profiles) can be introduced; 547 o the timing of each flow could be different (i.e., when does each 548 flow start and stop); 550 o each traffic profile can comprise various number of flows; 552 o each link is characterized by a couple (one-way delay, capacity); 554 o sender A_i is instantiated for each traffic profile. A 555 corresponding receiver B_i is instantiated for receiving the flows 556 in the profile; 558 o flows sharing a bottleneck (the link between routers L and R); 560 o the tester should consider both scenarios of asymmetric and 561 symmetric bottleneck links in terms of bandwidth. 
In case of 562 asymmetric link, the capacity from senders to receivers is higher 563 than the one from receivers to senders; the symmetric link 564 scenario provides a basic understanding of the operation of the 565 AQM mechanism whereas the asymmetric link scenario evaluates an 566 AQM mechanism in a more realistic setup; 568 o in asymmetric link scenarios, the tester should study the bi- 569 directional traffic between A and B (downlink and uplink) with the 570 AQM mechanism deployed on one direction only. The tester may 571 additionally consider a scenario with AQM mechanism being deployed 572 on both directions. In each scenario, the tester should 573 investigate the impact of drop policy of the AQM on TCP ACK 574 packets and its impact on the performance (Section 9.2). 576 Although this topology may not perfectly reflect actual topologies, 577 the simple topology is commonly used in the world of simulations and 578 small testbeds. It can be considered as adequate to evaluate AQM 579 proposals [I-D.irtf-iccrg-tcpeval]. Testers ought to pay attention 580 to the topology that has been used to evaluate an AQM scheme when 581 comparing this scheme with a newly proposed AQM scheme. 583 3.2. Buffer size 585 The size of the buffers should be carefully chosen, and may be set to 586 the bandwidth-delay product; the bandwidth being the bottleneck 587 capacity and the delay the largest RTT in the considered network. 588 The size of the buffer can impact the AQM performance and is a 589 dimensioning parameter that will be considered when comparing AQM 590 proposals. 592 If a specific buffer size is required, the tester must justify and 593 detail the way the maximum queue size is set. Indeed, the maximum 594 size of the buffer may affect the AQM's performance and its choice 595 should be elaborated for a fair comparison between AQM proposals. 596 While comparing AQM schemes the buffer size should remain the same 597 across the tests. 599 3.3. 
Congestion controls

601 This document considers running three different congestion control
602 algorithms between A and B:

604 o Standard TCP congestion control: the base-line congestion control
605 is TCP NewReno with SACK [RFC5681].

607 o Aggressive congestion controls: a base-line congestion control for
608 this category is TCP Cubic [I-D.ietf-tcpm-cubic].

610 o Less-than Best Effort (LBE) congestion controls: an LBE congestion
611 control 'results in smaller bandwidth and/or delay impact on
612 standard TCP than standard TCP itself, when sharing a bottleneck
613 with it.'; a base-line congestion control for this category is
614 LEDBAT [RFC6817].

616 Other transport congestion controls can OPTIONALLY be evaluated in
617 addition. Recent transport layer protocols are not mentioned in the
618 following sections, for the sake of simplicity.

620 4. Methodology, Metrics, AQM Comparisons, Packet Sizes, Scheduling and
621 ECN

623 4.1. Methodology

625 A description of each test setup should be detailed to allow this
626 test to be compared with other tests. This also allows others to
627 replicate the tests if needed. The test setup should detail
628 software and hardware versions. The tester could make their data
629 available.

631 The proposals should be evaluated on real-life systems, or they may
632 be evaluated with event-driven simulations (such as ns-2, ns-3,
633 OMNET, etc.). The proposed scenarios are not bound to a particular
634 evaluation toolset.

636 The tester is encouraged to make the detailed test setup and the
637 results publicly available.

639 4.2. Comments on metrics measurement

641 Section 2 of this document presents the end-to-end metrics that ought
642 to be used to evaluate the trade-off between latency and goodput. In
643 addition to the end-to-end metrics, the queue-level metrics (normally
644 collected at the device operating the AQM) provide a better
645 understanding of the AQM behavior under study and the impact of its
646 internal parameters.
Whenever it is possible (e.g., depending on the
647 features provided by the hardware/software), these guidelines advise
648 considering queue-level metrics, such as link utilization, queuing
649 delay, queue size or packet drop/mark statistics, in addition to the
650 AQM-specific parameters. However, the evaluation must be primarily
651 based on externally observed end-to-end metrics.

653 These guidelines do not aim to detail the way these metrics can be
654 measured, since the way these metrics are measured is expected to
655 depend on the evaluation toolset.

657 4.3. Comparing AQM schemes

659 This document recognizes that these guidelines may be used for
660 comparing AQM schemes.

662 AQM schemes need to be compared against both performance and
663 deployment categories. In addition, this section details how best to
664 achieve a fair comparison of AQM schemes by avoiding certain
665 pitfalls.

667 4.3.1. Performance comparison

669 AQM schemes should be compared against the generic scenarios that are
670 summarized in Section 13. AQM schemes may be compared for specific
671 network environments such as data centers, home networks, etc. If an
672 AQM scheme has parameter(s) that were externally tuned for
673 optimization or other purposes, these values must be disclosed.

675 AQM schemes belong to different varieties, such as queue-length based
676 schemes (e.g., RED) or queueing-delay based schemes (e.g., CoDel, PIE).
677 AQM schemes expose different control knobs associated with different
678 semantics. For example, while both PIE and CoDel are queueing-delay
679 based schemes and each exposes a knob to control the queueing delay --
680 PIE's "queueing delay reference" vs. CoDel's "queueing delay target" --
681 the two tuning parameters of the two schemes have different
682 semantics, resulting in different control points. Such differences
683 between AQM schemes can easily be overlooked when making comparisons.
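One way to avoid such pitfalls is to compare schemes as trade-off curves obtained by sweeping each scheme's delay-related knob over a range of values, rather than at a single, possibly semantically mismatched, setting. The sketch below illustrates this idea; run_experiment() and its numbers are illustrative placeholders standing in for a real testbed or simulation campaign, not part of any evaluation toolset:

```python
def sweep(scheme, knob_values, run_experiment):
    """Evaluate one AQM scheme over a range of settings of its
    delay-related knob; return a list of (knob, goodput, delay)
    operating points forming a trade-off curve."""
    return [(v,) + run_experiment(scheme, v) for v in knob_values]

def run_experiment(scheme, target_delay_ms):
    """Placeholder for a real testbed/simulation run; returns a
    made-up (goodput_mbps, mean_delay_ms) pair for illustration."""
    return (95.0 - 0.1 * target_delay_ms, 1.2 * target_delay_ms)

targets_ms = [5, 10, 20, 50, 100]
curve_a = sweep("scheme-A", targets_ms, run_experiment)
curve_b = sweep("scheme-B", targets_ms, run_experiment)
# Compare curve_a and curve_b as whole latency/goodput frontiers,
# rather than a single configuration of each scheme.
```

The two curves can then be compared point-by-point, or as whole latency/goodput frontiers, which sidesteps the question of whether a given knob value "means the same thing" for both schemes.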
685 This document recommends the following procedures for a fair
686 performance comparison between AQM schemes:

688 1. similar control parameters and implications: Testers should be
689 aware of the control parameters of the different schemes that
690 control similar behavior. Testers should also be aware of the
691 input value ranges and corresponding implications. For example,
692 consider two different schemes: (A) a queue-length based AQM
693 scheme, and (B) a queueing-delay based scheme. A and B are likely
694 to have different kinds of control inputs to control the target
695 delay - a target queue length in A vs. a target queuing delay in B,
696 for example. Setting parameter values such as 100 MB for A vs.
697 10 ms for B will have different implications depending on the
698 evaluation context. Such context-dependent implications must be
699 considered before drawing conclusions on performance comparisons.
700 Also, it would be preferable if an AQM proposal listed such
701 parameters and discussed how each relates to network
702 characteristics such as capacity, average RTT, etc.

704 2. compare over a range of input configurations: there could be
705 situations where the set of control parameters that affect a
706 specific behavior have different semantics between the two AQM
707 schemes. As mentioned above, PIE has tuning parameters to
708 control queue delay that have different semantics from those
709 used in CoDel. In such situations, these schemes need to be
710 compared over a range of input configurations. For example,
711 compare PIE vs. CoDel over a range of target delay input
712 configurations.

714 4.3.2. Deployment comparison

716 AQM schemes must be compared against deployment criteria such as
717 parameter sensitivity (Section 8.3), auto-tuning (Section 12) or
718 implementation cost (Section 11).

720 4.4. Packet sizes and congestion notification

722 An AQM scheme may consider packet sizes while generating
723 congestion signals [RFC7141].
For example, control packets such as
724 DNS requests/responses and TCP SYNs/ACKs are small, but their loss can
725 severely impact application performance. An AQM scheme may therefore
726 be biased towards small packets by dropping them with a lower
727 probability than larger packets. However, such an AQM scheme
728 is unfair to data senders generating larger packets. Data senders,
729 malicious or otherwise, are motivated to take advantage of such an AQM
730 scheme by transmitting smaller packets, which could result in unsafe
731 deployments and unhealthy transport and/or application designs.

733 An AQM scheme should adhere to the recommendations outlined in the
734 best current practice for dropping and marking packets
735 [RFC7141], and should not provide undue advantage to flows with
736 smaller packets, as discussed in Section 4.4 of the AQM
737 recommendation document [RFC7567]. In order to evaluate whether an AQM
738 scheme is biased towards flows with smaller packets, traffic can
739 be generated, as defined in Section 8.2.2, where half of the
740 flows have smaller packets (e.g., 500 byte packets) than the other
741 half of the flows (e.g., 1500 byte packets). In this case, the
742 metrics reported could be the same as in Section 6.3, where Category
743 I is the set of flows with smaller packets and Category II the one
744 with larger packets. The bidirectional scenario could also be
745 considered (Section 9.2).

747 4.5. Interaction with ECN

749 ECN [RFC3168] is an alternative that allows AQM schemes to signal
750 network congestion to receivers without dropping packets.
751 There are benefits to providing ECN support for an AQM scheme
752 [WELZ2015].

754 If the tested AQM scheme can support ECN, the testers must discuss
755 and describe this support, as discussed in the AQM
756 recommendation [RFC7567].
Also, the AQM's ECN support can be studied
757 and verified by replicating the tests in Section 8.1 with ECN turned on
758 at the TCP senders. The results can be used not only to evaluate the
759 performance of the tested AQM with and without ECN markings, but also
760 to quantify the benefit of enabling ECN.

762 4.6. Interaction with Scheduling

764 A network device may use per-flow or per-class queuing with a
765 scheduling algorithm to either prioritize certain applications or
766 classes of traffic, limit the rate of transmission, or provide
767 isolation between different traffic flows within a common class, as
768 discussed in Section 2.1 of the AQM recommendation document
769 [RFC7567].

771 Scheduling and AQM jointly impact the end-to-end
772 performance. Therefore, the AQM proposal must discuss the
773 feasibility of combining scheduling with the AQM algorithm. It
774 can be explained whether the dropping policy is applied when packets
775 are being enqueued or dequeued.

777 These guidelines do not propose means to assess the performance
778 of scheduling algorithms. Indeed, as opposed to characterizing AQM
779 schemes, which relates to their capacity to control the queuing
780 delay in a queue, characterizing scheduling schemes relates to the
781 scheduling itself and its interaction with the AQM scheme. As one
782 example, the scheduler may create sub-queues and the AQM scheme may
783 be applied to each of the sub-queues, and/or the AQM could be applied
784 to the whole queue. Also, schedulers such as FQ-CoDel
785 [HOEI2015] or FavorQueue [ANEL2014] might introduce flow prioritization.
786 In these cases, specific scenarios should be proposed to ascertain
787 that these scheduling schemes not only help in tackling
788 bufferbloat, but are also robust under a wide variety of operating
789 conditions. This is out of the scope of this document, which focuses on
790 dropping and/or marking AQM schemes.

792 5.
Transport Protocols

794 Network and end-devices need to be configured with a reasonable
795 amount of buffer space to absorb transient bursts. In some
796 situations, network providers tend to configure devices with large
797 buffers to avoid packet drops triggered by a full buffer and to
798 maximize the link utilization for standard loss-based TCP traffic.

800 AQM algorithms are often evaluated by considering Transmission
801 Control Protocol (TCP) [RFC0793] with a limited number of
802 applications. TCP is a widely deployed transport. It fills up
803 available buffers until a sender transferring a bulk flow with TCP
804 receives a signal (packet drop) that reduces the sending rate. The
805 larger the buffer, the higher the buffer occupancy, and therefore the
806 queuing delay. An efficient AQM scheme sends out early congestion
807 signals to TCP to bring the queuing delay under control.

809 Not all endpoints (or applications) using TCP use the same flavor of
810 TCP. A variety of senders generate different classes of traffic, which
811 may not react to congestion signals (a.k.a. non-responsive flows; see
812 Section 3 of the AQM recommendation document [RFC7567]) or may not
813 reduce their sending rate as expected (a.k.a. transport flows that are
814 less responsive than TCP, as described in Section 3 of the
815 AQM recommendation document [RFC7567], also called "aggressive
816 flows"). In these cases, AQM schemes seek to control the queuing
817 delay.

819 This section provides guidelines to assess the performance of an AQM
820 proposal for various traffic profiles -- different types of senders
821 (with different TCP congestion control variants, unresponsive, or
822 aggressive).

824 5.1. TCP-friendly sender

826 5.1.1. TCP-friendly sender with the same initial congestion window

828 This scenario helps to evaluate how an AQM scheme reacts to a
829 TCP-friendly transport sender.
A single long-lived, non application-limited,
830 TCP NewReno flow, with an Initial congestion Window (IW) set
831 to 3 packets, transfers data between sender A and receiver B. Other
832 TCP-friendly congestion control schemes, such as TCP-friendly rate
833 control [RFC5348], may also be considered.

835 For each TCP-friendly transport considered, the graph described in
836 Section 2.7 could be generated.

838 5.1.2. TCP-friendly sender with different initial congestion windows

840 This scenario can be used to evaluate how an AQM scheme adapts to a
841 traffic mix consisting of TCP flows with different values of the IW.

843 For this scenario, two types of flows must be generated between
844 sender A and receiver B:

846 o A single long-lived non application-limited TCP NewReno flow;

848 o A single application-limited TCP NewReno flow, with an IW set to 3
849 or 10 packets. The size of the data transferred must be strictly
850 higher than 10 packets and should be lower than 100 packets.

852 The transmission of the non application-limited flow must start first,
853 and the transmission of the application-limited flow starts after the
854 non application-limited flow has reached steady state. The steady
855 state can be assumed when the goodput is stable.

857 For each of these scenarios, the graph described in Section 2.7 could
858 be generated for each class of traffic (application-limited and non
859 application-limited). The completion time of the application-limited
860 TCP flow could be measured.

862 5.2. Aggressive transport sender

864 This scenario helps testers to evaluate how an AQM scheme reacts to a
865 transport sender that is more aggressive than a single TCP-friendly
866 sender.
We define 'aggressiveness' as a higher than standard increase factor
867 upon a successful transmission and/or a lower than standard
868 decrease factor upon an unsuccessful transmission (e.g., in the case of
869 congestion controls based on the Additive-Increase Multiplicative-Decrease
870 (AIMD) principle, a larger AI and/or MD factor). A single long-lived,
871 non application-limited, TCP Cubic flow transfers data between
872 sender A and receiver B. Other aggressive congestion control schemes
873 may also be considered.

875 For each flavor of aggressive transport, the graph described in
876 Section 2.7 could be generated.

878 5.3. Unresponsive transport sender

880 This scenario helps testers to evaluate how an AQM scheme reacts to a
881 transport sender that is less responsive than TCP. Note that faulty
882 transport implementations on an end host and/or faulty network
883 elements en-route that "hide" congestion signals in packet headers
884 may also lead to a similar situation, such that the AQM scheme needs
885 to adapt to unresponsive traffic (see Section 3 of the AQM
886 recommendation document [RFC7567]). To this end, these guidelines
887 propose the two following scenarios.

889 The first scenario can be used to evaluate queue build-up. It
890 considers unresponsive flow(s) whose sending rate is greater than the
891 bottleneck link capacity between routers L and R. This scenario
892 consists of a long-lived non application-limited UDP flow transmitting
893 data between sender A and receiver B. Graphs described in
894 Section 2.7 could be generated.

896 The second scenario can be used to evaluate if the AQM scheme is able
897 to keep the responsive fraction under control. This scenario
898 considers a mixture of TCP-friendly and unresponsive traffic.
It
899 consists of a long-lived UDP flow from an unresponsive application and a
900 single long-lived, non application-limited (unlimited data available
901 to the transport sender from the application layer), TCP NewReno flow
902 that transmit data between sender A and receiver B. As opposed to
903 the first scenario, the rate of the UDP traffic should not be greater
904 than the bottleneck capacity, and should be higher than half of the
905 bottleneck capacity. For each type of traffic, the graph described
906 in Section 2.7 could be generated.

908 5.4. Less-than Best Effort transport sender

910 This scenario helps to evaluate how an AQM scheme reacts to LBE
911 congestion controls that 'results in smaller bandwidth and/or delay
912 impact on standard TCP than standard TCP itself, when sharing a
913 bottleneck with it.' [RFC6297]. There are potentially fateful
914 interactions when AQM and LBE techniques are combined [GONG2014];
915 this scenario helps to evaluate whether the coexistence of the
916 proposed AQM and LBE techniques may be possible.

918 A single long-lived non application-limited TCP NewReno flow
919 transfers data between sender A and receiver B. Other TCP-friendly
920 congestion control schemes may also be considered. Single long-lived
921 non application-limited LEDBAT [RFC6817] flows transfer data between
922 sender A and receiver B. We recommend setting the target delay and
923 gain values of LEDBAT to 5 ms and 10, respectively [TRAN2014]. Other
924 LBE congestion control schemes may also be considered and are listed
925 in the IETF survey of LBE protocols [RFC6297].

927 For each of the TCP-friendly and LBE transports, the graph described
928 in Section 2.7 could be generated.

930 6. Round Trip Time Fairness

932 6.1. Motivation

934 An AQM scheme's congestion signals (via drops or ECN marks) must
935 reach the transport sender so that a responsive sender can initiate
936 its congestion control mechanism and adjust its sending rate.
This
937 procedure is thus dependent on the end-to-end path RTT. When the RTT
938 varies, the onset of congestion control is impacted, which in turn
939 impacts the ability of an AQM scheme to control the queue. It is
940 therefore important to assess the AQM schemes for a set of RTTs
941 between A and B (e.g., from 5 ms to 200 ms).

943 The asymmetry in terms of difference in intrinsic RTT between the various
944 paths sharing the same bottleneck should be considered, so that the
945 fairness between the flows can be discussed. In this scenario, a
946 flow traversing a shorter RTT path may react faster to congestion
947 and recover faster from it compared to another flow on a longer RTT
948 path. The introduction of AQM schemes may potentially improve the
949 RTT fairness.

951 Introducing an AQM scheme may cause unfairness between the flows,
952 even if the RTTs are identical. This potential unfairness should be
953 investigated as well.

955 6.2. Recommended tests

957 The recommended topology is detailed in Figure 1.

959 To evaluate the RTT fairness, for each run, two flows are divided
960 into two categories: Category I, whose RTT between sender A and
961 receiver B should be 100 ms, and Category II, whose RTT between sender A
962 and receiver B should be in the range [5 ms; 560 ms] inclusive. The
963 maximum value for the RTT represents the RTT of a satellite link
964 [RFC2488].

966 A set of evaluated flows must use the same congestion control
967 algorithm: all the generated flows could be single long-lived non
968 application-limited TCP NewReno flows.

970 6.3. Metrics to evaluate the RTT fairness

972 The outputs that must be measured are: (1) the cumulative average
973 goodput of the flow from Category I, goodput_Cat_I (Section 2.5); (2)
974 the cumulative average goodput of the flow from Category II,
975 goodput_Cat_II (Section 2.5); (3) the ratio goodput_Cat_II/
976 goodput_Cat_I; (4) the average packet drop rate for each category
977 (Section 2.3).

979 7.
Burst Absorption

981 "AQM mechanisms need to control the overall queue sizes, to ensure
982 that arriving bursts can be accommodated without dropping packets"
983 [RFC7567].

985 7.1. Motivation

987 An AQM scheme can face bursts of packet arrivals due to various
988 reasons. Dropping one or more packets from a burst can result in
989 performance penalties for the corresponding flows, since dropped
990 packets have to be retransmitted. Performance penalties can result
991 in failing to meet SLAs and be a disincentive to AQM adoption.

993 The ability to accommodate bursts translates to a larger queue length
994 and hence more queuing delay. On the one hand, it is important that
995 an AQM scheme quickly brings bursty traffic under control. On the
996 other hand, a peak in the packet drop rate to bring a packet burst
997 quickly under control could result in multiple drops per flow and
998 severely impact transport and application performance. Therefore, an
999 AQM scheme ought to bring bursts under control by balancing both
1000 aspects -- (1) queuing delay spikes are minimized and (2) performance
1001 penalties for ongoing flows in terms of packet drops are minimized.

1003 An AQM scheme that maintains short queues allows some remaining space
1004 in the buffer for bursts of arriving packets. The tolerance to
1005 bursts of packets depends upon the number of packets in the queue,
1006 which is directly linked to the AQM algorithm. Moreover, an AQM
1007 scheme may implement a feature controlling the maximum size of
1008 accepted bursts, which can depend on the buffer occupancy or the
1009 currently estimated queuing delay. The impact of the buffer size on
1010 the burst allowance may be evaluated.

1012 7.2.
Recommended tests

1014 For this scenario, the tester must evaluate how the AQM performs with a
1015 traffic mix that could be composed of (from sender A to receiver
1016 B):

1018 o Bursts of packets at the beginning of a transmission, such as web
1019 traffic with IW10;

1021 o Applications that send large bursts of data, such as bursty video
1022 frames;

1024 o Background traffic, such as Constant Bit Rate (CBR) UDP traffic
1025 and/or a single non application-limited bulk TCP flow.

1028 Figure 2 presents the various cases for the traffic that must be
1029 generated between sender A and receiver B.

1031 +-------------------------------------------------+
1032 |Case| Traffic Type |
1033 | +-----+------------+----+--------------------+
1034 | |Video|Web (IW 10) | CBR| Bulk TCP Traffic |
1035 +----+-----+------------+----+--------------------+
1036 |I | 0 | 1 | 1 | 0 |
1037 +----+-----+------------+----+--------------------+
1038 |II | 0 | 1 | 1 | 1 |
1039 +----+-----+------------+----+--------------------+
1040 |III | 1 | 1 | 1 | 0 |
1041 +----+-----+------------+----+--------------------+
1042 |IV | 1 | 1 | 1 | 1 |
1043 +----+-----+------------+----+--------------------+

1045 Figure 2: Bursty traffic scenarios

1047 A new web page download could start after the previous web page
1048 download is finished. Each web page could be composed of at least 50
1049 objects and the size of each object should be at least 1 kB. Six
1050 parallel TCP connections should be generated to download the objects,
1051 each having an initial congestion window set to
1052 10 packets.

1054 For each of these scenarios, the graph described in Section 2.7 could
1055 be generated for each application. Metrics such as end-to-end
1056 latency, jitter, and flow completion time may be generated.
For the
1057 cases of frame generation for bursty video traffic, as well as the
1058 choice of web traffic pattern, these details and their presentation
1059 are left to the testers.

1061 8. Stability

1063 8.1. Motivation

1065 The safety of an AQM scheme is directly related to its stability
1066 under varying operating conditions such as varying traffic profiles
1067 and fluctuating network conditions. Since operating conditions can
1068 vary often, the AQM needs to remain stable under these conditions
1069 without the need for additional external tuning.

1071 Network devices can experience varying operating conditions depending
1072 on factors such as time of the day, deployment scenario, etc. For
1073 example:

1075 o Traffic and congestion levels are higher during peak hours than
1076 off-peak hours.

1078 o In the presence of a scheduler, the draining rate of a queue can
1079 vary depending on the occupancy of other queues: a low load on a
1080 high priority queue implies a higher draining rate for the lower
1081 priority queues.

1083 o The capacity available can vary over time (e.g., a lossy channel,
1084 a link supporting traffic in a higher diffserv class).

1086 When the target context is not a stable environment, the ability
1087 of an AQM scheme to maintain its control over the queuing delay and
1088 buffer occupancy can be challenged. This document proposes
1089 guidelines to assess the behavior of AQM schemes under varying
1090 congestion levels and varying draining rates.

1092 8.2. Recommended tests

1094 Note that the traffic profiles explained below comprise non
1095 application-limited TCP flows. For each of the scenarios below, the
1096 graphs described in Section 2.7 should be generated, and the goodput
1097 of the various flows should be cumulated. For Section 8.2.5 and
1098 Section 8.2.6, they should incorporate the results on a per-phase basis
1099 as well.
1101 Wherever the notion of time is explicitly mentioned in this
1102 subsection, time 0 starts from the moment all TCP flows have already
1103 reached their congestion avoidance phase.

1105 8.2.1. Definition of the congestion level

1107 In these guidelines, the congestion levels are represented by the
1108 projected packet drop rate, had a drop-tail queue been chosen instead
1109 of an AQM scheme. When the bottleneck is shared among non
1110 application-limited TCP flows, l_r, the loss rate projection, can be
1111 expressed as a function of N, the number of bulk TCP flows, and S,
1112 the sum of the bandwidth-delay product and the maximum buffer size,
1113 both expressed in packets, based on Eq. 3 of [MORR2000]:

1115 l_r = 0.76 * N^2 / S^2

1117 N = S * SQRT(1/0.76) * SQRT(l_r)

1119 These guidelines use the loss rate to define the different congestion
1120 levels, but they do not stipulate that in other circumstances,
1121 measuring the congestion level gives an accurate estimation of
1122 the loss rate or vice-versa.

1124 8.2.2. Mild congestion

1126 This scenario can be used to evaluate how an AQM scheme reacts to a
1127 light load of incoming traffic resulting in mild congestion -- packet
1128 drop rates around 0.1%. The number of bulk flows required to achieve
1129 this congestion level, N_mild, is then:

1131 N_mild = ROUND (0.036*S)

1133 8.2.3. Medium congestion

1135 This scenario can be used to evaluate how an AQM scheme reacts to
1136 incoming traffic resulting in medium congestion -- packet drop rates
1137 around 0.5%. The number of bulk flows required to achieve this
1138 congestion level, N_med, is then:

1140 N_med = ROUND (0.081*S)

1142 8.2.4. Heavy congestion

1144 This scenario can be used to evaluate how an AQM scheme reacts to
1145 incoming traffic resulting in heavy congestion -- packet drop rates
1146 around 1%. The number of bulk flows required to achieve this
1147 congestion level, N_heavy, is then:

1149 N_heavy = ROUND (0.114*S)

1151 8.2.5.
Varying the congestion level

1153 This scenario can be used to evaluate how an AQM scheme reacts to
1154 incoming traffic resulting in various levels of congestion during the
1155 experiment. In this scenario, the congestion level varies within a
1156 large time-scale. The following phases may be considered: phase I -
1157 mild congestion during 0-20s; phase II - medium congestion during
1158 20-40s; phase III - heavy congestion during 40-60s; phase I again,
1159 and so on.

1161 8.2.6. Varying available capacity

1163 This scenario can be used to help characterize how the AQM behaves
1164 and adapts to bandwidth changes. The experiments are not meant to
1165 reflect the exact conditions of Wi-Fi environments since it is hard
1166 to design repetitive experiments or accurate simulations for such
1167 scenarios.

1169 To emulate varying draining rates, the bottleneck capacity between
1170 nodes 'Router L' and 'Router R' varies over the course of the
1171 experiment as follows:

1173 o Experiment 1: the capacity varies between two values within a
1174 large time-scale. As an example, the following phases may be
1175 considered: phase I - 100 Mbps during 0-20s; phase II - 10 Mbps
1176 during 20-40s; phase I again, and so on.

1178 o Experiment 2: the capacity varies between two values within a
1179 short time-scale. As an example, the following phases may be
1180 considered: phase I - 100 Mbps during 0-100ms; phase II - 10 Mbps
1181 during 100-200ms; phase I again, and so on.

1183 The tester may choose a phase time-interval value different from what
1184 is stated above, if the network's path conditions (such as the
1185 bandwidth-delay product) necessitate it. In this case, the choice of
1186 such a time-interval value should be stated and elaborated.

1188 The tester may additionally evaluate the two scenarios mentioned above
1189 (short-term and long-term capacity variations), during and/or
1190 including the TCP slow-start phase.
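The flow counts used for the congestion levels in Sections 8.2.2 to 8.2.4 follow from inverting Eq. 3 of [MORR2000] given in Section 8.2.1. A minimal sketch (the value of S is an arbitrary example):

```python
import math

def flows_for_loss_rate(l_r, S):
    """Number of bulk TCP flows N that yields a projected drop-tail
    loss rate l_r, from l_r = 0.76 * N^2 / S^2 (Eq. 3 of [MORR2000]),
    inverted as N = S * SQRT(l_r / 0.76)."""
    return round(S * math.sqrt(l_r / 0.76))

# Example only: S is the bandwidth-delay product plus the maximum
# buffer size, both in packets, for the topology under test.
S = 1000
N_mild  = flows_for_loss_rate(0.001, S)  # ~ ROUND(0.036*S), 0.1% drops
N_med   = flows_for_loss_rate(0.005, S)  # ~ ROUND(0.081*S), 0.5% drops
N_heavy = flows_for_loss_rate(0.010, S)  # ~ ROUND(0.114*S), 1% drops
```

Note that the coefficients in Sections 8.2.2 to 8.2.4 are rounded: for example, SQRT(0.01/0.76) is approximately 0.1147, which Section 8.2.4 truncates to 0.114, so the two computations can differ by one flow for some values of S.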
1192 More realistic fluctuating capacity patterns may be considered. The
1193 tester may choose to incorporate realistic scenarios with regard to
1194 common fluctuations of bandwidth in state-of-the-art technologies.

1196 The scenario consists of TCP NewReno flows between sender A and
1197 receiver B. To better assess the impact of draining rates on the AQM
1198 behavior, the tester must compare its performance with that of drop-tail,
1199 and should provide a reference document for their proposal
1200 discussing performance and deployment compared to those of drop-tail.
1201 Burst traffic, such as presented in Section 7.2, could also be
1202 considered to assess the impact of varying available capacity on the
1203 burst absorption of the AQM.

1205 8.3. Parameter sensitivity and stability analysis

1207 The control law used by an AQM is the primary means by which the
1208 queuing delay is controlled. Hence, understanding the control law is
1209 critical to understanding the behavior of the AQM scheme. The
1210 control law could include several input parameters whose values
1211 affect the AQM scheme's output behavior and its stability.
1212 Additionally, AQM schemes may auto-tune parameter values in order to
1213 maintain stability under different network conditions (such as
1214 different congestion levels, draining rates or network environments).
1215 The stability of these auto-tuning techniques is also important to
1216 understand.

1218 Transports operating under the control of AQM experience the effect
1219 of multiple control loops that react over different timescales. It
1220 is therefore important that proposed AQM schemes are seen to be
1221 stable when they are deployed at multiple points of potential
1222 congestion along an Internet path. The pattern of congestion signals
1223 (loss or ECN-marking) arising from AQM methods also needs to not
1224 adversely interact with the dynamics of the transport protocols that
1225 they control.
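As an illustration of the kind of control law such an analysis covers, the following is a minimal sketch of a PI-style drop-probability update in the spirit of PIE [I-D.ietf-aqm-pie]; the alpha and beta defaults are illustrative assumptions, not values taken from any specification:

```python
def update_drop_prob(p, qdelay, qdelay_old, target, alpha=0.125, beta=1.25):
    """One periodic update of the drop probability in a PI-style
    control law: alpha weights the current deviation of the queuing
    delay from its target, beta weights the delay trend.  alpha and
    beta are the input parameters whose ranges a stability analysis
    (Section 8.3) would have to characterize."""
    p += alpha * (qdelay - target) + beta * (qdelay - qdelay_old)
    return min(max(p, 0.0), 1.0)  # keep the probability in [0, 1]
```

A stability analysis would, for example, identify the (alpha, beta) region in which this loop damps queuing-delay oscillations rather than amplifying them, for the range of RTTs and capacities of interest.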
1227 AQM proposals should provide background material showing a control
1228 theoretic analysis of the AQM control law and of the input parameter
1229 space within which the control law operates as expected, or could use
1230 another way to discuss the stability of the control law. For
1231 parameters that are auto-tuned, the material should include a stability
1232 analysis of the auto-tuning mechanism(s) as well. Such analysis
1233 helps to better understand the AQM control law and the network
1234 conditions/deployments under which the AQM is stable.

1236 9. Various Traffic Profiles

1238 This section provides guidelines to assess the performance of an AQM
1239 proposal for various traffic profiles such as traffic with different
1240 applications or bi-directional traffic.

1242 9.1. Traffic mix

1244 This scenario can be used to evaluate how an AQM scheme reacts to a
1245 traffic mix consisting of different applications such as:

1247 o Bulk TCP transfer

1249 o Web traffic

1251 o VoIP

1253 o Constant Bit Rate (CBR) UDP traffic

1255 o Adaptive video streaming (either unidirectional or bidirectional)

1257 Various traffic mixes can be considered. These guidelines recommend
1258 examining at least the following example: 1 bi-directional VoIP; 6
1259 Web page downloads (as detailed in Section 7.2); 1 CBR; 1
1260 Adaptive Video; 5 bulk TCP. Any other combination could be
1261 considered and should be carefully documented.

1263 For each scenario, the graph described in Section 2.7 could be
1264 generated for each class of traffic. Metrics such as end-to-end
1265 latency, jitter and flow completion time may be reported.

1267 9.2. Bi-directional traffic

1269 Control packets such as DNS requests/responses and TCP SYNs/ACKs are
1270 small, but their loss can severely impact the application
1271 performance. The scenario proposed in this section will help in
1272 assessing whether the introduction of an AQM scheme increases the
1273 loss probability of these important packets.
1275 For this scenario, traffic must be generated in both downlink and
1276 uplink, as defined in Section 3.1. The amount of asymmetry
1277 between the uplink and the downlink depends on the context. These
1278 guidelines recommend considering a mild congestion level and the
1279 traffic presented in Section 8.2.2 in both directions. In this case,
1280 the metrics reported must be the same as in Section 8.2 for each
1281 direction.

1283 The traffic mix presented in Section 9.1 may also be generated in
1284 both directions.

1286 10. Example of multi-AQM scenario

1288 10.1. Motivation

1290 Transports operating under the control of AQM experience the effect
1291 of multiple control loops that react over different timescales. It
1292 is therefore important that proposed AQM schemes are seen to be
1293 stable when they are deployed at multiple points of potential
1294 congestion along an Internet path. The pattern of congestion signals
1295 (loss or ECN-marking) arising from AQM methods also needs to not
1296 adversely interact with the dynamics of the transport protocols that
1297 they control.

1299 10.2. Details on the evaluation scenario

1301 +---------+                              +-----------+
1302 |senders A|---+                      +---|receivers A|
1303 +---------+   |                      |   +-----------+
1304         +-----+---+  +---------+  +--+-----+
1305         |Router L |--|Router M |--|Router R|
1306         |AQM A    |  |AQM M    |  |No AQM  |
1307         +---------+  +--+------+  +--+-----+
1308 +---------+            |            |   +-----------+
1309 |senders B|------------+            +---|receivers B|
1310 +---------+                              +-----------+

1312 Figure 3: Topology for the Multi-AQM scenario

1314 Figure 3 describes topology options for evaluating multi-AQM
1315 scenarios. The AQM schemes are applied in sequence and impact the
1316 induced latency reduction, the induced goodput maximization and the
1317 trade-off between these two.
Note that AQM schemes A and B
1318 introduced in Routers L and M could be (i) the same scheme with
1319 identical parameter values, (ii) the same scheme with different
1320 parameter values, or (iii) two different schemes. To best understand
1321 the interactions and implications, the mild congestion scenario as
1322 described in Section 8.2.2 is recommended, such that the number of
1323 flows is equally shared among senders A and B. Other relevant
1324 combinations of congestion levels could also be considered. We
1325 recommend measuring the metrics presented in Section 8.2.

1327 11. Implementation cost

1329 11.1. Motivation

1331 Successful deployment of AQM is directly related to its cost of
1332 implementation. Network devices may need hardware or software
1333 implementations of the AQM mechanism. Depending on a device's
1334 capabilities and limitations, the device may or may not be able to
1335 implement some or all parts of the AQM logic.

1337 AQM proposals should provide pseudo-code for the complete AQM scheme,
1338 highlighting generic implementation-specific aspects of the scheme
1339 such as "drop-tail" vs. "drop-head", inputs (e.g., current queuing
1340 delay, queue length), computations involved, need for timers, etc.
1341 This helps to identify costs associated with implementing the AQM
1342 scheme on a particular hardware or software device. This also
1343 facilitates discussions around which kinds of devices can easily
1344 support the AQM and which cannot.

1346 11.2. Recommended discussion

1348 AQM proposals should highlight parts of their AQM logic that are
1349 device dependent and discuss if and how AQM behavior could be
1350 impacted by the device. For example, a queueing-delay based AQM
1351 scheme requires current queuing delay as input from the device. If
1352 the device already maintains this value, then it can be trivial to
1353 implement the AQM logic on the device.
If the device provides
1354 indirect means to estimate the queuing delay (for example:
1355 timestamps, dequeuing rate), then the AQM behavior is sensitive to
1356 how precise the queuing delay estimations are for that device.
1357 Highlighting the sensitivity of an AQM scheme to queuing delay
1358 estimations helps implementers to identify appropriate means of
1359 implementing the mechanism on a device.

1361 12. Operator Control and Auto-tuning

1363 12.1. Motivation

1365 One of the biggest hurdles of RED deployment was/is its parameter
1366 sensitivity to operating conditions -- how difficult it is to tune
1367 RED parameters for a deployment to achieve acceptable benefit from
1368 using RED. Fluctuating congestion levels and network conditions add
1369 to the complexity. Incorrect parameter values lead to poor
1370 performance.

1372 Any AQM scheme is likely to have parameters whose values affect the
1373 control law and behavior of the AQM. Exposing all these parameters
1374 as control parameters to a network operator (or user) can easily
1375 result in an unsafe AQM deployment. Unexpected AQM behavior ensues
1376 when parameter values are set improperly. A minimal number of
1377 control parameters minimizes the number of ways a user can break a
1378 system where an AQM scheme is deployed. Fewer control parameters
1379 make the AQM scheme more user-friendly and easier to deploy and
1380 debug.

1382 As stated in Section 4.3 of the AQM recommendation document
1383 [RFC7567], "AQM algorithms should not require tuning of initial or
1384 configuration parameters in common use cases." A scheme ought to
1385 expose only those parameters that control the macroscopic AQM
1386 behavior, such as queue delay threshold, queue length threshold, etc.
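For example, a scheme whose only exposed (macroscopic) parameter is a queue delay threshold may still rely on device-dependent inputs, such as the indirect queuing-delay estimate discussed in Section 11.2. The following hypothetical sketch derives that input from the backlog and a smoothed dequeue rate via Little's law; the function names and the EWMA weight are assumptions of this example, not part of these guidelines.

```python
# Hypothetical sketch: estimating queuing delay indirectly from the
# backlog and a smoothed dequeue rate (names and weight are assumed).

def update_dequeue_rate(avg_rate, bytes_sent, interval, weight=0.125):
    """EWMA of the dequeue rate (bytes/s) over one measurement interval."""
    sample = bytes_sent / interval
    return (1.0 - weight) * avg_rate + weight * sample

def estimate_qdelay(backlog_bytes, avg_dequeue_rate):
    """Little's-law estimate of the queuing delay, in seconds."""
    if avg_dequeue_rate <= 0.0:
        return 0.0  # no drain rate measured yet; no meaningful estimate
    return backlog_bytes / avg_dequeue_rate
```

The precision of such an estimate depends on how the device measures its drain rate, which is exactly the sensitivity that Section 11.2 recommends discussing.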
1388 Additionally, the safety of an AQM scheme is directly related to its
1389 stability under varying operating conditions such as varying traffic
1390 profiles and fluctuating network conditions, as described in
1391 Section 8. Operating conditions vary often and hence the AQM needs
1392 to remain stable under these conditions without the need for
1393 additional external tuning. If AQM parameters require tuning under
1394 these conditions, then the AQM must self-adapt necessary parameter
1395 values by employing auto-tuning techniques.

1397 12.2. Recommended discussion

1399 In order to understand an AQM's deployment considerations and
1400 performance under a specific environment, AQM proposals should
1401 describe the parameters that control the macroscopic AQM behavior,
1402 and identify any parameters that require tuning to operational
1403 conditions. It could also be interesting to discuss whether, when
1404 an AQM scheme does not adequately auto-tune its parameters, the
1405 resulting performance is, if not optimal, at least close to reasonable.

1407 If there are any fixed parameters within the AQM, their setting
1408 should be discussed and justified, to help understand whether a fixed
1409 parameter value is applicable for a particular environment.

1411 If an AQM scheme is evaluated with parameter(s) that were externally
1412 tuned for optimization or other purposes, these values must be
1413 disclosed.

1415 13. Summary

1417 Figure 4 lists the scenarios for an extended characterization of an
1418 AQM scheme. This table comes along with a set of requirements to
1419 present more clearly the weight and importance of each scenario. The
1420 requirements listed here are informational and their relevance may
1421 depend on the deployment scenario.

1423 +------------------------------------------------------------------+
1424 |Scenario                   |Sec.  |Informational requirement      |
1425 +------------------------------------------------------------------+
1426 +------------------------------------------------------------------+
1427 |Interaction with ECN       | 4.5  |must be discussed if supported |
1428 +------------------------------------------------------------------+
1429 |Interaction with Scheduling| 4.6  |should be discussed            |
1430 +------------------------------------------------------------------+
1431 |Transport Protocols        |5.    |                               |
1432 | TCP-friendly sender       | 5.1  |scenario must be considered    |
1433 | Aggressive sender         | 5.2  |scenario must be considered    |
1434 | Unresponsive sender       | 5.3  |scenario must be considered    |
1435 | LBE sender                | 5.4  |scenario may be considered     |
1436 +------------------------------------------------------------------+
1437 |Round Trip Time Fairness   | 6.2  |scenario must be considered    |
1438 +------------------------------------------------------------------+
1439 |Burst Absorption           | 7.2  |scenario must be considered    |
1440 +------------------------------------------------------------------+
1441 |Stability                  |8.    |                               |
1442 | Varying congestion levels | 8.2.5|scenario must be considered    |
1443 | Varying available capacity| 8.2.6|scenario must be considered    |
1444 | Parameters and stability  | 8.3  |this should be discussed       |
1445 +------------------------------------------------------------------+
1446 |Various Traffic Profiles   |9.    |                               |
1447 | Traffic mix               | 9.1  |scenario is recommended        |
1448 | Bi-directional traffic    | 9.2  |scenario may be considered     |
1449 +------------------------------------------------------------------+
1450 |Multi-AQM                  | 10.2 |scenario may be considered     |
1451 +------------------------------------------------------------------+

1453 Figure 4: Summary of the scenarios and their requirements

1455 14.
Acknowledgements

1457 This work has been partially supported by the European Community
1458 under its Seventh Framework Programme through the Reducing Internet
1459 Transport Latency (RITE) project (ICT-317700).

1461 Many thanks to S. Akhtar, A.B. Bagayoko, F. Baker, R. Bless, D.
1462 Collier-Brown, G. Fairhurst, J. Gettys, P. Goltsman, T. Hoiland-
1463 Jorgensen, K. Kilkki, C. Kulatunga, W. Lautenschlager, A.C.
1464 Morton, R. Pan, G. Skinner, D. Taht and M. Welzl for detailed and
1465 wise feedback on this document.

1467 15. IANA Considerations

1469 This memo includes no request to IANA.

1471 16. Security Considerations

1473 Some security considerations for AQM are identified in [RFC7567].
1474 This document, by itself, presents no new privacy or security issues.

1476 17. References

1478 17.1. Normative References

1480 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
1481 Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119,
1482 March 1997.

1483 [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
1484 Network Interconnect Devices", RFC 2544,
1485 DOI 10.17487/RFC2544, March 1999.

1488 [RFC2647] Newman, D., "Benchmarking Terminology for Firewall
1489 Performance", RFC 2647, DOI 10.17487/RFC2647, August 1999.

1492 [RFC2679] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
1493 Delay Metric for IPPM", RFC 2679, DOI 10.17487/RFC2679,
1494 September 1999.

1496 [RFC2680] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
1497 Packet Loss Metric for IPPM", RFC 2680,
1498 DOI 10.17487/RFC2680, September 1999.

1501 [RFC5481] Morton, A. and B. Claise, "Packet Delay Variation
1502 Applicability Statement", RFC 5481, DOI 10.17487/RFC5481,
1503 March 2009.

1505 [RFC7567] Baker, F., Ed. and G. Fairhurst, Ed., "IETF
1506 Recommendations Regarding Active Queue Management",
1507 BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015.

1510 17.2. Informative References

1512 [ANEL2014]
1513 Anelli, P., Diana, R., and E. Lochin, "FavorQueue: a
1514 Parameterless Active Queue Management to Improve TCP
1515 Traffic Performance", Computer Networks vol. 60, 2014.

1517 [BB2011] "BufferBloat: what's wrong with the internet?", ACM
1518 Queue vol. 9, 2011.

1520 [FENG2002]
1521 Feng, W., Shin, K., Kandlur, D., and D. Saha, "The BLUE
1522 active queue management algorithms", IEEE Trans. Netw.,
1523 2002.

1525 [FLOY1993]
1526 Floyd, S. and V. Jacobson, "Random Early Detection (RED)
1527 Gateways for Congestion Avoidance", IEEE Trans. Netw.,
1528 1993.

1530 [GONG2014]
1531 Gong, Y., Rossi, D., Testa, C., Valenti, S., and D. Taht,
1532 "Fighting the bufferbloat: on the coexistence of AQM and
1533 low priority congestion control", Computer Networks,
1534 Elsevier, vol. 60, pp. 115-128, 2014.

1536 [HASS2008]
1537 Hassayoun, S. and D. Ros, "Loss Synchronization and Router
1538 Buffer Sizing with High-Speed Versions of TCP", IEEE
1539 INFOCOM Workshops, 2008.

1541 [HOEI2015]
1542 Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys,
1543 J., and E. Dumazet, "FlowQueue-Codel", IETF (Work-in-
1544 Progress), January 2015.

1546 [HOLLO2001]
1547 Hollot, C., Misra, V., Towsley, D., and W. Gong, "On
1548 Designing Improved Controllers for AQM Routers Supporting
1549 TCP Flows", IEEE Infocom, 2001.

1551 [I-D.ietf-aqm-codel]
1552 Nichols, K., Jacobson, V., McGregor, A., and J. Iyengar,
1553 "Controlled Delay Active Queue Management", draft-ietf-
1554 aqm-codel-04 (work in progress), June 2016.

1556 [I-D.ietf-aqm-pie]
1557 Pan, R., Natarajan, P., Baker, F., and G. White, "PIE: A
1558 Lightweight Control Scheme To Address the Bufferbloat
1559 Problem", draft-ietf-aqm-pie-08 (work in progress), June
1560 2016.

1562 [I-D.ietf-tcpm-cubic]
1563 Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and
1564 R. Scheffenegger, "CUBIC for Fast Long-Distance Networks",
1565 draft-ietf-tcpm-cubic-01 (work in progress), January 2016.

1567 [I-D.irtf-iccrg-tcpeval]
1568 Hayes, D., Ros, D., Andrew, L., and S. Floyd, "Common TCP
1569 Evaluation Suite", draft-irtf-iccrg-tcpeval-01 (work in
1570 progress), July 2014.

1572 [JAY2006] Jay, P., Fu, Q., and G. Armitage, "A preliminary analysis
1573 of loss synchronisation between concurrent TCP flows",
1574 Australian Telecommunication Networks and Application
1575 Conference (ATNAC), 2006.

1577 [MORR2000]
1578 Morris, R., "Scalable TCP congestion control", IEEE
1579 INFOCOM, 2000.

1581 [RFC0793] Postel, J., "Transmission Control Protocol", STD 7,
1582 RFC 793, DOI 10.17487/RFC0793, September 1981.

1585 [RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
1586 S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
1587 Partridge, C., Peterson, L., Ramakrishnan, K., Shenker,
1588 S., Wroclawski, J., and L. Zhang, "Recommendations on
1589 Queue Management and Congestion Avoidance in the
1590 Internet", RFC 2309, DOI 10.17487/RFC2309, April 1998.

1592 [RFC2488] Allman, M., Glover, D., and L. Sanchez, "Enhancing TCP
1593 Over Satellite Channels using Standard Mechanisms",
1594 BCP 28, RFC 2488, DOI 10.17487/RFC2488, January 1999.

1597 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
1598 of Explicit Congestion Notification (ECN) to IP",
1599 RFC 3168, DOI 10.17487/RFC3168, September 2001.

1602 [RFC3611] Friedman, T., Ed., Caceres, R., Ed., and A. Clark, Ed.,
1603 "RTP Control Protocol Extended Reports (RTCP XR)",
1604 RFC 3611, DOI 10.17487/RFC3611, November 2003.

1607 [RFC5348] Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP
1608 Friendly Rate Control (TFRC): Protocol Specification",
1609 RFC 5348, DOI 10.17487/RFC5348, September 2008.

1612 [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
1613 Control", RFC 5681, DOI 10.17487/RFC5681, September 2009.

1616 [RFC6297] Welzl, M. and D. Ros, "A Survey of Lower-than-Best-Effort
1617 Transport Protocols", RFC 6297, DOI 10.17487/RFC6297, June
1618 2011.

1620 [RFC6817] Shalunov, S., Hazel, G., Iyengar, J., and M. Kuehlewind,
1621 "Low Extra Delay Background Transport (LEDBAT)", RFC 6817,
1622 DOI 10.17487/RFC6817, December 2012.

1625 [RFC7141] Briscoe, B. and J. Manner, "Byte and Packet Congestion
1626 Notification", BCP 41, RFC 7141, DOI 10.17487/RFC7141,
1627 2014.

1628 [TRAN2014]
1629 Trang, S., Kuhn, N., Lochin, E., Baudoin, C., Dubois, E.,
1630 and P. Gelard, "On The Existence Of Optimal LEDBAT
1631 Parameters", IEEE ICC 2014 - Communication QoS,
1632 Reliability and Modeling Symposium, 2014.

1634 [WELZ2015]
1635 Welzl, M. and G. Fairhurst, "The Benefits to Applications
1636 of using Explicit Congestion Notification (ECN)", IETF
1637 (Work-in-Progress), June 2015.

1639 [WINS2014]
1640 Winstein, K., "Transport Architectures for an Evolving
1641 Internet", PhD thesis, Massachusetts Institute of
1642 Technology, 2014.

1644 Authors' Addresses

1646 Nicolas Kuhn (editor)
1647 CNES, Telecom Bretagne
1648 18 avenue Edouard Belin
1649 Toulouse 31400
1650 France

1652 Phone: +33 5 61 27 32 13
1653 Email: nicolas.kuhn@cnes.fr

1655 Preethi Natarajan (editor)
1656 Cisco Systems
1657 510 McCarthy Blvd
1658 Milpitas, California
1659 United States

1661 Email: prenatar@cisco.com

1663 Naeem Khademi (editor)
1664 University of Oslo
1665 Department of Informatics, PO Box 1080 Blindern
1666 N-0316 Oslo
1667 Norway

1669 Phone: +47 2285 24 93
1670 Email: naeemk@ifi.uio.no

1672 David Ros
1673 Simula Research Laboratory AS
1674 P.O. Box 134
1675 Lysaker, 1325
1676 Norway

1678 Phone: +33 299 25 21 21
1679 Email: dros@simula.no