AQM                                                          K. Nichols
Internet-Draft                                            Pollere, Inc.
Intended status: Informational                              V. Jacobson
Expires: September 16, 2016                             A. McGregor, ed.
                                                         J. Iyengar, ed.
                                                                  Google
                                                          March 15, 2016

              Controlled Delay Active Queue Management
                       draft-ietf-aqm-codel-03

Abstract

   This document describes CoDel (Controlled Delay), a general
   framework that controls bufferbloat-generated excess delay in
   modern networking environments.  CoDel consists of an estimator, a
   setpoint, and a control loop.  It requires no configuration in
   normal Internet deployments.  CoDel comprises some major technical
   innovations and has been made available as open source so that the
   framework can be applied by the community to a range of problems.
   It has been implemented in Linux (and is available in Linux
   distributions) and deployed in some networks at the consumer edge.
   In addition, the framework has been successfully applied in other
   ways.

   Note: Code Components extracted from this document must include the
   license as included with the code in Section 5.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.
   The list of current Internet-Drafts is at
   http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on September 16, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Conventions used in this document
   3.  Building Blocks of Queue Management
       3.1.  Estimator
       3.2.  Setpoint
       3.3.  Control Loop
   4.  Putting it together: queue management for the network edge
       4.1.  Overview of CoDel AQM
       4.2.  Non-starvation
       4.3.  Using the interval
       4.4.  The target setpoint
       4.5.  Use with multiple queues
       4.6.  Use of stochastic bins or sub-queues to improve
             performance
       4.7.  Setting up CoDel AQM
   5.  Annotated Pseudo-code for CoDel AQM
       5.1.  Data Types
       5.2.  Per-queue state (codel_queue_t instance variables)
       5.3.  Constants
       5.4.  Enqueue routine
       5.5.  Dequeue routine
       5.6.  Helper routines
       5.7.  Implementation considerations
   6.  Adapting and applying CoDel's building blocks
       6.1.  Validations and available code
       6.2.  CoDel in the datacenter
   7.  Security Considerations
   8.  IANA Considerations
   9.  Conclusions
   10. Acknowledgments
   11. References
       11.1.  Normative References
       11.2.  Informative References
   Authors' Addresses

1.  Introduction

   The need for queue management has been evident for decades.
   The "persistently full buffer" problem has been discussed in the
   IETF community since the early 1980s [RFC0896].  The IRTF's End-to-
   End Working Group called for the deployment of active queue
   management (AQM) to solve the problem in 1998 [RFC2309].  Despite
   this awareness, the problem has only gotten worse as Moore's Law
   growth in memory density fueled an exponential increase in buffer
   pool size.  Efforts to deploy AQM have been frustrated by difficult
   configuration and negative impact on network utilization.  This
   problem, recently christened "bufferbloat" [TSV2011] [BB2011], has
   become increasingly important throughout the Internet but
   particularly at the consumer edge.  Queue management has become
   more critical recently due to increased consumer use of the
   Internet, mixing large video transactions with time-critical VoIP
   and gaming.  Gettys [TSV2011] [BB2011] has been instrumental in
   publicizing the problem, in the measurement work [CHARB2007]
   [NETAL2010], and in coining the term "bufferbloat".  Large content
   distributors such as Google have observed that bufferbloat is
   ubiquitous and adversely affects performance for many users.  The
   solution is an effective AQM that remediates bufferbloat at a
   bottleneck while "doing no harm" at hops where buffers are not
   bloated.

   The development and deployment of effective active queue management
   has been hampered by persistent misconceptions about the cause and
   meaning of packet queues in network buffers.  Network buffers exist
   to absorb the packet bursts that occur naturally in statistically
   multiplexed networks.  Buffers helpfully absorb the queues created
   by such reasonable packet network behavior as short-term mismatches
   in traffic arrival and departure rates that arise from upstream
   resource contention, transport conversation startup transients,
   and/or changes in the number of conversations sharing a link.
   Unfortunately, other less useful network behaviors can cause queues
   to fill, and their effects are not nearly as benign.  Discussion of
   these issues and the reason why the solution is not simply smaller
   buffers can be found in [RFC2309], [VANQ2006], [REDL1998], and
   [CODEL2012].  To understand queue management, it is critical to
   understand the difference between the necessary, useful "good"
   queue and the counterproductive "bad" queue.

   Many approaches to active queue management (AQM) have been
   developed over the past two decades, but none has been widely
   deployed due to performance problems.  When designed with the wrong
   conceptual model for queues, AQMs have limited operational range,
   require a lot of configuration tweaking, and frequently impair
   rather than improve performance.  Today, the demands on an
   effective AQM are even greater: many network devices must work
   across a range of bandwidths, either due to link variations or due
   to the mobility of the device.
   The CoDel approach is designed to meet the following goals:

   o  parameterless for normal operation - has no knobs for operators,
      users, or implementers to adjust

   o  treat "good queue" and "bad queue" differently, that is, keep
      delay low while permitting necessary bursts of traffic

   o  control delay while insensitive (or nearly so) to round trip
      delays, link rates, and traffic loads; this goal is to "do no
      harm" to network traffic while controlling delay

   o  adapt to dynamically changing link rates with no negative impact
      on utilization

   o  simple and efficient implementation (can easily span the
      spectrum from low-end, Linux-based access points and home
      routers up to high-end commercial router silicon)

   CoDel has five major innovations that distinguish it from prior
   AQMs:

   o  use of the local queue minimum to track congestion ("bad queue")

   o  use of an efficient single state variable representation of that
      tracked statistic

   o  use of packet sojourn time as the observed datum, rather than
      packets, bytes, or rates

   o  use of a mathematically determined setpoint derived from
      maximizing the network power metric

   o  a modern state space controller

   CoDel configures itself based on a round-trip time metric which can
   be set to 100ms for the normal, terrestrial Internet.  With no
   changes to parameters, we have found CoDel to work across a wide
   range of conditions, with varying links and the full range of
   terrestrial round trip times.

   Since CoDel was first published [CODEL2012], a number of
   implementers have been using and adapting it with promising
   results.  Much of this work is collected at
   http://www.bufferbloat.net/projects/codel .  CoDel has been
   implemented in Linux very efficiently and should lend itself to
   silicon implementation.  CoDel is well-adapted for use in multiple
   queued devices and has been used by Eric Dumazet with multiple
   queues in a sophisticated queue management approach, fq_codel
   [FQ-CODEL-ID].

2.  Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

   In this document, these words will appear with that interpretation
   only when in ALL CAPS.  Lower case uses of these words are not to
   be interpreted as carrying [RFC2119] significance.

   In this document, the characters ">>" preceding an indented line(s)
   indicate a compliance requirement statement using the key words
   listed above.  This convention aids reviewers in quickly
   identifying or finding the explicit compliance requirements of this
   RFC.

3.  Building Blocks of Queue Management

   Two decades of work on queue management failed to yield an approach
   that could be widely deployed in the Internet.  Careful tuning for
   particular usages has enabled queue management techniques to "kind
   of" work; that is, they have been able to decrease queueing delays,
   but only at the undue cost of link utilization and/or fairness.  At
   the heart of queue management is the notion of "good queue" and
   "bad queue" and the search for ways to get rid of the bad queue
   (which only adds delay) while preserving the good queue (which
   provides for good utilization).
   This section explains queueing, both good and bad, and covers the
   innovative CoDel building blocks that can be used to manage packet
   buffers to keep their queues in the "good" range.

   Packet queues form in buffers facing bottleneck links, i.e., where
   the line rate goes from high to low or where many links converge.
   The well-known bandwidth-delay product (sometimes called "pipe
   size") is the bottleneck's bandwidth multiplied by the sender-
   receiver-sender round-trip delay, and is the amount of data that
   has to be in transit between two hosts in order to run the
   bottleneck link at 100% utilization.  To explore how queues can
   form, consider a long-lived TCP connection with a 25 packet window
   sending through a connection with a bandwidth-delay product of 20
   packets.  After an initial burst of packets, the connection will
   settle into a five packet (+/-1) standing queue; this standing
   queue size is determined by the mismatch between the window size
   and the pipe size, and is unrelated to the connection's sending
   rate.  The connection has 25 packets in flight at all times, but
   only 20 packets arrive at the destination over a round trip time.
   If the TCP connection has a 30 packet window, the queue will be ten
   packets with no change in sending rate.  Similarly, if the window
   is 20 packets, there will be no queue, but the sending rate is the
   same.  Nothing can be inferred about the sending rate from the
   queue size, and any queue other than transient bursts only creates
   delays in the network.  The sender needs to reduce the number of
   packets in flight rather than the sending rate.

   In the above example, the five packet standing queue can be seen to
   contribute nothing but delay to the connection, and thus is clearly
   "bad queue".  If, in our example, there is a single bottleneck link
   and it is much slower than the link that feeds it (say, a high-
   speed ethernet link into a limited DSL uplink), a 20 packet buffer
   at the bottleneck might be necessary to temporarily hold the 20
   packets in flight to keep the bottleneck link's utilization high.
   The burst of packets should drain completely (to 0 or 1 packets)
   within a round trip time, and this transient queue is "good queue"
   because it allows the connection to keep the 20 packets in flight
   and the bottleneck link to be fully utilized.  In terms of the
   delay experienced, the "good queue" goes away in about a round trip
   time, while "bad queue" hangs around for longer, causing delays.

   Effective queue management detects "bad queue" while ignoring "good
   queue" and takes action to get rid of the bad queue when it is
   detected.  The goal is a queue controller that accomplishes this
   objective.  To control a queue, we need three basic components:

   o  Estimator - figure out what we've got

   o  Setpoint - know what we want

   o  Control loop - if what we've got isn't what we want, we need a
      way to move it there

3.1.  Estimator

   The estimator both observes the queue and detects when good queue
   turns to bad queue and vice versa.  CoDel has two innovations in
   its estimator: what is observed as an indicator of queue and how
   the observations are used to detect good/bad queue.

   Queue length has been widely used as an observed indicator of
   congestion and is frequently conflated with sending rate.  Use of
   queue length as a metric is sensitive to how and when the length is
   observed.
   A high speed arrival link to a buffer serviced at a much lower rate
   can rapidly build up a queue that might disperse completely or down
   to a single packet before a round trip time has elapsed.  If the
   queue length is monitored at packet arrival (as in original RED) or
   departure time, every packet will see a queue with one possible
   exception.  If the queue length itself is time sampled (as
   recommended in [REDL1998]), a truer picture of the queue's
   occupancy can be gained at the expense of considerable
   implementation complexity.

   The use of queue length is further complicated in networks that are
   subject to both short and long term changes in available link rate
   (as in WiFi).  Link rate drops can result in a spike in queue
   length that should be ignored unless it persists.  It is not the
   queue length that should be controlled but the amount of excess
   delay packets experience due to a persistent or standing queue,
   which means that the packet sojourn time in the buffer is exactly
   what we want to track.  Tracking the packet sojourn times in the
   buffer observes the actual delay experienced by each packet.
   Sojourn time allows queue management to be independent of link
   rate, gives superior performance to use of buffer size, and is
   directly related to user-visible performance.  It works regardless
   of line rate changes or link sharing by multiple queues (which the
   individual queues may experience as changing rates).

   Consider a link shared by two queues with different priorities.
   Packets that arrive at the high priority queue are sent as soon as
   the link is available, while packets in the other queue have to
   wait until the high priority queue is empty (i.e., a strict
   priority scheduler).  The number of packets in the high priority
   queue might be large, but the queue is emptied quickly and the
   amount of time each packet spends enqueued (the sojourn time) is
   not large.  The other queue might have a smaller number of packets,
   but packet sojourn times will include the waiting time for the high
   priority packets to be sent.  This makes the sojourn time a good
   sample of the congestion that each separate queue is experiencing.
   This example also shows how the metric of sojourn time is
   independent of the number of queues or the service discipline used,
   and is instead indicative of congestion seen by the individual
   queues.

   How can observed sojourn time be used to separate good queue from
   bad queue?  Although averages, especially of queue length, have
   previously been widely used as an indicator of bad queue, their
   efficacy is questionable.  Consider the burst that disperses every
   round trip time.  The average queue will be one-half the burst
   size, though this might vary depending on when the average is
   computed and the timing of arrivals.  The average queue sojourn
   time would be one-half the time it takes to clear the burst.  The
   average then would indicate a persistent queue where there is none.
   Instead of averages, we recommend tracking the minimum sojourn
   time: if even one packet has a zero sojourn time, then there is no
   persistent queue.  The value of the minimum in detecting persistent
   queue is apparent when looking at graphs of queue delay.

   A persistent queue can be detected by tracking the (local) minimum
   queue delay packets experience.
   To ensure that this minimum value does not become stale, it has to
   have been experienced recently, i.e., during an appropriate past
   time interval.  This interval is the maximum amount of time a
   minimum value is considered to be in effect, and is related to the
   amount of time it takes for the largest expected burst to drain.
   Conservatively, this interval should be at least a round trip time
   to avoid falsely detecting a persistent queue and not a lot more
   than a round trip time to avoid delay in detecting the persistent
   queue.  This suggests that the appropriate interval value is the
   maximum round-trip time of all the connections sharing the buffer.

   (The following key insight makes computation of the local minimum
   efficient: it is sufficient to keep a single state variable of how
   long the minimum has been above or below a target value, rather
   than retaining all the local values to compute the minimum, leading
   to both storage and computational savings.  We use this insight in
   the pseudo-code for CoDel later in the draft.)

   These two innovations, use of sojourn time as the observed value
   and use of the local minimum as the statistic to monitor queue
   congestion, are key to CoDel's estimator building block.  The local
   minimum sojourn time provides an accurate and robust measure of
   standing queue and has an efficient implementation.  In addition,
   use of the minimum sojourn time has important advantages in
   implementation.  The minimum packet sojourn time can only be
   decreased when a packet is dequeued, which means that all the work
   of CoDel can take place when packets are dequeued for transmission
   and that no locks are needed in the implementation.  The minimum is
   the only statistic with this property.

   A more detailed explanation with many pictures can be found in
   http://pollere.net/Pdfdocs/QrantJul06.pdf and
   http://www.ietf.org/proceedings/84/slides/slides-84-tsvarea-4.pdf .

3.2.  Setpoint

   Now that we have a robust way of detecting standing queue, we need
   a setpoint that tells us when to act.  If the controller is set to
   take action as soon as the estimator has a non-zero value, the
   average drop rate will be maximized, which minimizes TCP goodput
   [MACTCP1997].  Also, this policy results in no backlog over time
   (no persistent queue), which negates much of the value of having a
   buffer, since it maximizes the bottleneck link bandwidth lost due
   to normal stochastic variation in packet interarrival time.  We
   want a setpoint that maximizes utilization while minimizing delay.
   Early in the history of packet networking, Kleinrock developed the
   analytic machinery to do this using a quantity he called 'power',
   which is the ratio of a normalized throughput to a normalized delay
   [KLEIN81].

   It is straightforward to derive an analytic expression for the
   average goodput of a TCP conversation at a given round-trip time r
   and setpoint f (where f is expressed as a fraction of r).  Reno
   TCP, for example, yields:

      goodput = r (3 + 6f - f^2) / (4 (1+f))

   Since the peak queue delay is simply f r, power is solely a
   function of f since the r's in the numerator and denominator
   cancel:

      power is proportional to (1 + 2f - 1/3 f^2) / (1 + f)^2

   As Kleinrock observed, the best operating point, in terms of
   bandwidth / delay tradeoff, is the peak power point, since points
   off the peak represent a higher cost (in delay) per unit of
   bandwidth.

   The power vs. f curve for any AIMD TCP is monotone decreasing.  But
   the curve is very flat for f < 0.1, followed by an increasing
   curvature with a knee around f = 0.2, then a steep, almost linear
   fall off [TSV84].  Since the previous equation showed that goodput
   is monotone increasing with f, the best operating point is near the
   right edge of the flat top, since that represents the highest
   goodput achievable for a negligible increase in delay.  However,
   since the r in the model is a conservative upper bound, a target of
   0.1r runs the risk of pushing shorter RTT connections over the knee
   and giving them higher delay for no significant goodput increase.
   Generally, a more conservative target of 0.05r offers a good
   utilization vs. delay tradeoff while giving enough headroom to work
   well with a large variation in real RTT.
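   To get a feel for these expressions, the short stand-alone sketch
   below (ours, not part of the reference pseudo-code in Section 5)
   simply tabulates the goodput and power formulas above for a few
   values of f.  Its output shows goodput rising with f while power
   stays within about 1% of its peak for f up to roughly 0.1 - the
   flat top referred to above:

   #include <cstdio>

   // Tabulate the Reno goodput factor and the power expression from
   // Section 3.2 for a few setpoints f (peak queue delay as a
   // fraction of the round-trip time r).
   int main()
   {
       const double fs[] = { 0.01, 0.05, 0.1, 0.2, 0.5, 1.0 };
       for (double f : fs) {
           double goodput = (3 + 6*f - f*f) / (4 * (1 + f)); // units of r
           double power = (1 + 2*f - f*f/3) / ((1 + f) * (1 + f));
           std::printf("f = %4.2f  goodput = %5.3f r  power = %5.3f\n",
                       f, goodput, power);
       }
       return 0;
   }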
   As the above analysis shows, a very small standing queue gives
   close to 100% utilization of the bottleneck link.  While this
   result was for Reno TCP, the derivation uses only properties that
   must hold for any 'TCP friendly' transport.  We have verified by
   both analysis and simulation that this result holds for Reno,
   Cubic, and Westwood [TSV84].  This results in a particularly simple
   form for the setpoint: the ideal range for the permitted standing
   queue is between 5% and 10% of the TCP connection's RTT.  Thus,
   target is simply 5% of the interval of Section 3.1.

3.3.  Control Loop

   Section 3.1 describes a simple, reliable way to measure bad
   (persistent) queue.  Section 3.2 shows that TCP congestion control
   dynamics gives rise to a setpoint for this measure that's a
   provably good balance between enhancing throughput and minimizing
   delay, and that this setpoint is a constant fraction of the same
   'largest average RTT' interval used to distinguish persistent from
   transient queue.  The only remaining building block needed for a
   basic AQM is a 'control loop' algorithm to effectively drive the
   queueing system from any 'persistent queue above target' state to a
   state where the persistent queue is below target.

   Control theory provides a wealth of approaches to the design of
   control loops.  Most of classical control theory deals with the
   control of linear, time-invariant, single-input-single-output
   (SISO) systems.  Control loops for these systems generally come
   from a (well understood) class known as Proportional-Integral-
   Derivative (PID) controllers.  Unfortunately, a queue is not a
   linear system, and an AQM operates at the point of maximum non-
   linearity (where the output link bandwidth saturates, so increased
   demand creates delay rather than higher utilization).  Output
   queues are also not time-invariant since traffic is generally a mix
   of connections which start and stop at arbitrary times and which
   can have radically different behaviors ranging from "open loop" UDP
   audio/video to "closed-loop" congestion-avoiding TCP.  Finally, the
   constantly changing mix of connections (which can't be converted to
   a single 'lumped parameter' model because of their transfer
   function differences) makes the system multi-input-multi-output
   (MIMO), not SISO.

   Since queueing systems match none of the prerequisites for a
   classical controller, a modern state-space controller is a better
   approach, with states 'no persistent queue' and 'has persistent
   queue'.
   Since Internet traffic mixtures change rapidly and unpredictably, a
   noise and error tolerant adaptation algorithm like Stochastic
   Gradient is a good choice.  Since there's essentially no
   information in the amount of persistent queue [TSV84], the
   adaptation should be driven by how long it has persisted.

   Consider the two extremes of traffic behavior, a single open-loop
   UDP video stream and a single, long-lived TCP bulk data transfer.
   If the average bandwidth of the UDP video stream is greater than
   the bottleneck link rate, the link's queue will grow, and the
   controller will eventually enter 'has persistent queue' state and
   start dropping packets.  Since the video stream is open loop, its
   arrival rate is unaffected by drops, so the queue will persist
   until the average drop rate is greater than the output bandwidth
   deficit (= average arrival rate - average departure rate); the job
   of the adaptation algorithm is to discover this rate.  For this
   example, the adaptation could consist of simply estimating the
   arrival and departure rates and then dropping at a rate slightly
   greater than their difference.  But this class of algorithm won't
   work at all for the bulk data TCP stream.  TCP runs in closed-loop
   flow balance [TSV84], so its arrival rate is almost always exactly
   equal to the departure rate - the queue isn't the result of a rate
   imbalance but rather a mismatch between the TCP sender's window and
   the source-destination-source round-trip path capacity (i.e., the
   connection's bandwidth-delay product).  The sender's TCP congestion
   avoidance algorithm will slowly increase the send window (one
   packet per round-trip-time) [RFC2581], which will eventually cause
   the bottleneck to enter 'has persistent queue' state.  But, since
   the average input rate is the same as the average output rate, the
   rate deficit estimation that gave the correct drop rate for the
   video stream would compute a drop rate of zero for the TCP stream.
   However, if the output link drops one packet as it enters 'has
   persistent queue' state, when the sender discovers this (via TCP's
   normal packet loss repair mechanisms), it will reduce its window by
   a factor of two [RFC2581]; so, one round-trip-time after the drop,
   the persistent queue will go away.

   If there were N TCP conversations sharing the bottleneck, the
   controller would have to drop O(N) packets, one from each
   conversation, to make all the conversations reduce their window to
   get rid of the persistent queue.  If the traffic mix consists of
   short (<= bandwidth-delay product) conversations, the aggregate
   behavior becomes more like the open-loop video example since each
   conversation is likely to have already sent all its packets by the
   time it learns about a drop, so each drop has negligible effect on
   subsequent traffic.

   The controller does not know the number, duration, or kind of
   conversations creating its queue, so it has to learn the
   appropriate response.  Since single drops can have a large effect
   if the degree of multiplexing (the number of active conversations)
   is small, dropping at too high a rate is likely to have a
   catastrophic effect on throughput.  Dropping at a low rate (< 1
   packet per round-trip-time) then increasing the drop rate slowly
   until the persistent queue goes below target is unlikely to
   overdrop and is guaranteed to eventually dissipate the persistent
   queue.
   This stochastic gradient learning procedure is the core of CoDel's
   control loop (the gradient exists because a drop always reduces the
   (instantaneous) queue, so an increasing drop rate always moves the
   system "down" toward no persistent queue, regardless of traffic
   mix).

   The "next drop time" is decreased in inverse proportion to the
   square root of the number of drops since the dropping state was
   entered, using the well-known nonlinear relationship of drop rate
   to throughput to get a linear change in throughput [REDL1998],
   [MACTCP1997].

   Since the best rate to start dropping is at slightly more than one
   packet per RTT, the controller's initial drop rate can be directly
   derived from the estimator's interval, defined in Section 3.1.
   When the minimum sojourn time first crosses the target and CoDel
   drops a packet, the earliest the controller could see the effect of
   the drop is the round trip time (interval) + the local queue wait
   time (target).  If the next drop happens any earlier than this time
   (interval + target), CoDel will overdrop.  In practice, the local
   queue waiting time tends to vary, so making the initial drop
   spacing (i.e., the time to the second drop) be exactly the minimum
   possible also leads to overdropping.  Analysis of simulation and
   real-world measured data shows that the 75th percentile magnitude
   of this variation is less than the target, so the initial drop
   spacing SHOULD be set to the estimator's interval plus twice the
   target (i.e., initial drop spacing = 1.1 * interval) to ensure that
   the controller has accounted for acceptable congestion delays.

   Use of the minimum statistic lets the controller be placed in the
   dequeue routine with the estimator.  This means that the control
   signal (the drop) can be sent at the first sign of bad queue (as
   indicated by the sojourn time) and that the controller can stop
   acting as soon as the sojourn time falls below the setpoint.
   Dropping at dequeue has both implementation and control advantages.

4.  Putting it together: queue management for the network edge

   CoDel was initially designed as a bufferbloat solution for the
   consumer network edge.  The CoDel building blocks are able to adapt
   to different or time-varying link rates, to be easily used with
   multiple queues, to have excellent utilization with low delay, and
   to have a simple and efficient implementation.  The only setting
   CoDel requires is its interval value, and since 100ms satisfies
   that requirement for normal Internet usage, CoDel can be parameter-
   free for consumer use.  CoDel was released to the open source
   community, where it has been widely promulgated and adapted to many
   problems.  CoDel's efficient implementation and lack of
   configuration are unique features and make it suitable to manage
   modern packet buffers.  For more background and results on CoDel,
   see [CODEL2012] and http://pollere.net/CoDel.html .

4.1.  Overview of CoDel AQM

   To ensure that link utilization is not adversely affected, CoDel's
   estimator sets its target to the setpoint that optimizes power, and
   CoDel's controller does not drop packets when the drop would leave
   the queue empty or with fewer than a maximum transmission unit
   (MTU) worth of bytes in the buffer.  Section 3.2 showed that the
   ideal setpoint is 5-10% of the connection RTT.
   In the open terrestrial-based Internet, especially at the consumer
   edge, we expect most unbloated RTTs to have a ceiling of 100ms
   [CHARB2007].  Using this RTT gives a minimum target of 5ms, and the
   interval for tracking the minimum is 100ms.  In practice,
   uncongested links will see sojourn times below target more often
   than once per RTT, so the estimator is not overly sensitive to the
   value of the interval.

   When the estimator finds a persistent delay above target, the
   controller enters the drop state where a packet is dropped and the
   next drop time is set.  As discussed in Section 3.3, the initial
   next drop spacing is intended to be long enough to give the
   endpoints time to react to the single drop, so it SHOULD be set to
   a value of 1.1 times the interval.  If the estimator's output falls
   below target, the controller cancels the next drop and exits the
   drop state.  (The controller is more sensitive than the estimator
   to an overly short interval, since an unnecessary drop would occur
   and lower link utilization.)  If the next drop time is reached
   while the controller is still in drop state, the packet being
   dequeued is dropped and the next drop time is recalculated.
   Additional logic prevents re-entering the dropping state too soon
   after exiting it and resumes the dropping state at a recent control
   level, if one exists.

   Note that CoDel AQM only enters its dropping state when the local
   minimum sojourn delay has exceeded target for a time interval long
   enough for normal bursts to dissipate, ensuring that a burst of
   packets that fits in the pipe will not be dropped.

4.2.  Non-starvation

   CoDel's goals are to control delay with little or no impact on link
   utilization and to be deployed on a wide range of link bandwidths,
   including variable-rate links, without reconfiguration.  To keep
   from making drops when it would starve the output link, CoDel makes
   another check before dropping to see if at least an MTU worth of
   bytes remains in the buffer.  If not, the packet SHOULD NOT be
   dropped and, currently, CoDel exits the drop state.  The MTU size
   can be set adaptively to the largest packet seen so far or can be
   read from the driver.

4.3.  Using the interval

   The interval is chosen to give endpoints time to react to a drop
   without being so long that response times suffer.  CoDel's
   estimator, setpoint, and control loop all use the interval.
   Understanding their derivation shows that CoDel is the most
   sensitive to the value of interval for single long-lived TCPs, with
   a decreased sensitivity for traffic mixes.  This is fortunate, as
   RTTs vary across connections and are not known a priori.  The best
   policy is to use an interval slightly larger than the RTT seen by
   most of the connections using a link, a value that can be
   determined as the largest RTT seen if the value is not an outlier
   (use of a 95-99th percentile value should work).  In practice, this
   value is not known or measured (though see Section 6.2 for an
   application where interval is measured).  An interval setting of
   100ms works well across a range of RTTs from 10ms to 1 second
   (excellent performance is achieved in the range from 10ms to
   300ms).  For devices intended for the normal terrestrial Internet,
   interval SHOULD have a value of 100ms.  This will only cause
   overdropping where a long-lived TCP has an RTT longer than 100ms
   and there is little or no mixing with other connections through the
   link.
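   Where per-connection RTTs can be measured, the percentile guidance
   above is straightforward to apply.  The sketch below (ours, not
   part of the reference pseudo-code; choose_interval() is a
   hypothetical helper, and MS2TIME is the helper assumed in
   Section 5.3) selects the 95th percentile of observed RTT samples so
   that a single outlier cannot inflate the interval:

   #include <algorithm>
   #include <vector>

   // Pick an interval slightly larger than the RTT seen by most
   // connections: the 95th percentile of the observed RTT samples.
   time_t choose_interval(std::vector<time_t> rtts)
   {
       if (rtts.empty())
           return MS2TIME(100);              // open-Internet default
       size_t k = (rtts.size() * 95) / 100;  // 95th percentile index
       if (k >= rtts.size())
           k = rtts.size() - 1;
       std::nth_element(rtts.begin(), rtts.begin() + k, rtts.end());
       return rtts[k];
   }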
   There has been some confusion about the roles of the target
   setpoint and the minimum-tracking interval.  In particular, some
   experimenters believe the value of target needs to be increased
   when the lower layers have a bursty nature, where packets are
   transmitted for short periods interspersed with idle periods where
   the link is waiting for permission to send.  CoDel's estimator will
   "see" the effective transmission rate over an interval, and
   increasing target will just lead to longer queue delays.  On the
   other hand, where a significant additional delay is added to the
   intrinsic round trip time of most or all packets due to the waiting
   time for a transmission, it is necessary to increase interval by
   that extra delay.  That is, target SHOULD NOT be adjusted, but
   interval MAY need to be adjusted.  For more on this (and pictures),
   see http://pollere.net/Pdfdocs/noteburstymacs.pdf .

4.4.  The target setpoint

   The target is the maximum acceptable persistent queue delay above
   which CoDel is dropping or preparing to drop and below which CoDel
   will not drop.  The calculations of Section 3.2 showed that the
   best setpoint is 5-10% of the RTT, with the low end of 5%
   preferred.  We used simulation to explore the impact when TCPs are
   mixed with other traffic and with connections of different RTTs.
   Accordingly, we experimented extensively with values in the 5-10%
   of RTT range and, overall, used target values between 1 and 20
   milliseconds for RTTs from 30 to 500ms and link bandwidths of
   64Kbps to 100Mbps to experimentally explore the setpoint that gives
   consistently high utilization while controlling delay across a
   range of bandwidths, RTTs, and traffic loads.  Our results were
   notably consistent with the mathematics of Section 3.2.  Below a
   target of 5ms, utilization suffers for some conditions and traffic
   loads; above 5ms, we saw very little or no improvement in
   utilization.  Thus, target SHOULD be set to 5ms for normal Internet
   traffic.

   On a congested (but not overloaded) CoDel link with traffic
   composed solely or primarily of long-lived TCP flows, the median
   delay through the link will tend to the target.  For bursty traffic
   loads and for overloaded conditions (where it is difficult or
   impossible for all the arriving flows to be accommodated), the
   median queues will be longer than the target.

   The non-starvation drop inhibit feature dominates where the link
   rate becomes very small.  By inhibiting drops when there is less
   than an (outbound link) MTU worth of bytes in the buffer, CoDel
   adapts to very low bandwidth links.  This is shown in [CODEL2012],
   and interested parties should see the discussion of results there.
   Unpublished studies were carried out down to 64Kbps.  The drop
   inhibit condition can be expanded to include a test to retain
   sufficient bytes or packets to fill an allocation in a request-and-
   grant MAC.

   Sojourn times must remain above the target for an entire interval
   in order to enter the drop state.  Any packet with a sojourn time
   less than the target will reset the time that the queue was last
   below the target.
   Since Internet traffic has very dynamic characteristics, the actual
   sojourn delay experienced by packets varies greatly and is often
   less than the target unless the overload is excessive.  When a link
   is not overloaded, it is not a bottleneck, and packet sojourn times
   will be small or nonexistent.  In the usual case, there are only
   one or two places along a path where packets will encounter a
   bottleneck (usually at the edge), so the total amount of queueing
   delay experienced by a packet should be less than 10ms even under
   extremely congested conditions.  This net delay is substantially
   lower than common current queueing delays on the Internet that grow
   to orders of seconds [NETAL2010] [CHARB2007].

4.5.  Use with multiple queues

   Unlike other AQMs, CoDel is easily adapted to multiple queue
   systems.  With other approaches, there is always a question of how
   to account for the fact that each queue receives less than the full
   link rate over time and usually sees a varying rate over time.
   This is exactly what CoDel excels at: using a packet's sojourn time
   in the buffer completely circumvents this problem.  In a multiple-
   queue setting, a separate CoDel algorithm runs on each queue, but
   each CoDel instance uses the packet sojourn time the same way a
   single-queue CoDel does.  Just as a single-queue CoDel adapts to
   changing link bandwidths [CODEL2012], so does a multiple-queue
   CoDel system.  As an optimization to avoid queueing more than
   necessary, when testing for queue occupancy before dropping, the
   total occupancy of all queues sharing the same output link should
   be used.  This property of CoDel has been exploited in fq_codel,
   briefly discussed in the next section and in more detail in
   [FQ-CODEL-ID].

4.6.  Use of stochastic bins or sub-queues to improve performance

   Shortly after the release of the CoDel pseudocode, Eric Dumazet
   created fq_codel, applying CoDel to each bin, or queue, used with
   stochastic fair queueing.  (To understand further, see [SFQ1990] or
   the Linux sfq documentation at http://linux.die.net/man/8/tc-sfq .)
   fq_codel hashes on the packet header fields to determine a specific
   bin, or sub-queue, for each five-tuple flow and runs CoDel on each
   bin or sub-queue, thus creating a well-mixed output flow and
   obviating issues of reverse path flows (including "ack
   compression").  Dumazet's code is part of the CeroWrt project code
   at the bufferbloat.net web site and is described in [FQ-CODEL-ID].
   Andrew McGregor has implemented a version of fq_codel for ns-3,
   which is being prepared for inclusion at
   http://code.nsnam.org/tomh/ns-3-dev-aqm/ .

   We have also experimented with a similar multi-queue approach by
   creating an ns-2 simulator code module, sfqcodel, which uses
   Stochastic Fair Queueing (SFQ) to isolate flows into bins, each
   running CoDel.  This setup has provided excellent results: median
   queues remain small across a range of traffic patterns that
   includes bidirectional file transfers (that is, the same traffic
   sent in both directions on a link), constant bit-rate VoIP-like
   flows, and emulated web traffic, and utilizations are consistently
   better than single-queue CoDel, generally very close to 100%.  Our
   code, intended for simulation experiments, is available at
   http://pollere.net/CoDel.html and is being integrated into the ns-2
   distribution.
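   The binning step common to fq_codel and sfqcodel is simple.  The
   sketch below (ours, not taken from either implementation; NBINS and
   hash5tuple() are illustrative names) shows the basic shape: hash
   the five-tuple to pick a sub-queue, each of which keeps its own
   CoDel state:

   // Hypothetical stochastic-binning front end: each bin is an
   // independent codel_queue_t (Section 5), so sojourn time is
   // tracked per flow bin.
   class sfq_codel_t {
       static const u_int NBINS = 16; // number of bins; see the
                                      // discussion of bin counts below
       codel_queue_t bins_[NBINS];
     public:
       void enqueue(packet_t* pkt) {
           // hypothetical hash over src/dst addresses, ports, proto
           u_int b = hash5tuple(pkt) % NBINS;
           bins_[b].enqueue(pkt);
       }
       // dequeue() visits bins with a scheduler (e.g., round-robin)
       // and runs the chosen bin's CoDel dequeue logic.
   };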
   A number of open issues should be studied.  In particular, if the
   number of different queues or bins is too large, the scheduling
   will be the dominant factor, not the AQM; it is NOT the case that
   more bins are always better.  In our simulations, we have found
   good behavior across mixed traffic types with smaller numbers of
   queues, 8-16 for a 5Mbps link.  This configuration appears to give
   the best behavior for voice, web browsing, and file transfers,
   where increased numbers of bins seem to favor file transfers at the
   expense of the other traffic.  Our work has been very preliminary,
   and we encourage others to take this up and to explore analytic
   modeling.  It would be instructive to see the effects of different
   numbers of bins on a range of traffic models, something like an
   updated version of [BMPFQ].

   Implementers SHOULD use the fq_codel multiple queue approach if
   possible, as it deals with many problems beyond the reach of an AQM
   on a single queue.

4.7.  Setting up CoDel AQM

   CoDel is set for use in devices in the open Internet.  An interval
   of 100ms is used, target is set to 5% of interval, and the initial
   drop spacing is set to 1.1 times the interval, consistent with
   Sections 3.3 and 4.1.  These settings have been chosen so that a
   device, such as a small WiFi router, can be sold without the need
   for any values to be made adjustable, yielding a parameterless
   implementation.  In addition, CoDel is useful in environments with
   significantly different characteristics from the normal Internet,
   for example, in switches used as a cluster interconnect within a
   data center.  Since cluster traffic is entirely internal to the
   data center, round trip latencies are low (typically <100us) but
   bandwidths are high (1-40Gbps), so it's relatively easy for the
   aggregation phase of a distributed computation (e.g., the Reduce
   part of a Map/Reduce) to persistently fill and then overflow the
   modest per-port buffering available in most high speed switches.  A
   CoDel configured for this environment (target and interval in the
   microsecond rather than millisecond range) can minimize drops (or
   ECN marks) while keeping throughput high and latency low.

   Devices destined for these environments MAY use a different
   interval, where suitable.  If appropriate analysis indicates, the
   target MAY be set to some other value in the 5-10% of interval
   range, and the initial drop spacing MAY be set to a value of 1.0 to
   1.2 times the interval.  But these settings will cause problems
   such as overdropping and low throughput if used on the open
   Internet, so devices that allow CoDel to be configured SHOULD
   default to the Internet-appropriate values given in this document.

5.  Annotated Pseudo-code for CoDel AQM

   What follows is the CoDel algorithm in C++-like pseudo-code.  Since
   CoDel adds relatively little new code to a basic tail-drop fifo-
   queue, we have attempted to highlight just these additions by
   presenting CoDel as a sub-class of a basic fifo-queue base class.
   The reference code is included to aid implementers who wish to
   apply CoDel to queue management as described here or to adapt its
   principles to other applications.

   Implementers are strongly encouraged to also look at Eric Dumazet's
   Linux kernel version of CoDel - a well-written, well-tested, real-
   world, C-based implementation.  As of this writing, it is at
   https://github.com/torvalds/linux/blob/master/net/sched/sch_codel.c .
   The following pseudo-code is open-source with a dual BSD/GPL
   license:

   Codel - The Controlled-Delay Active Queue Management algorithm.
   Copyright (C) 2011-2014 Kathleen Nichols.

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions
   are met:

   o  Redistributions of source code must retain the above copyright
      notice, this list of conditions, and the following disclaimer,
      without modification.

   o  Redistributions in binary form must reproduce the above
      copyright notice, this list of conditions and the following
      disclaimer in the documentation and/or other materials provided
      with the distribution.

   o  The names of the authors may not be used to endorse or promote
      products derived from this software without specific prior
      written permission.

   Alternatively, provided that this notice is retained in full, this
   software may be distributed under the terms of the GNU General
   Public License ("GPL") version 2, in which case the provisions of
   the GPL apply INSTEAD OF those given above.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
   FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE
   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
   OF THE POSSIBILITY OF SUCH DAMAGE.

5.1.  Data Types

   time_t is an integer time value in units convenient for the system.
   Resolution to at least a millisecond is required, and better
   resolution is useful up to the minimum possible packet time on the
   output link; 64- or 32-bit widths are acceptable but with 32 bits
   the resolution should be no finer than 2^-16 to leave enough
   dynamic range to represent a wide range of queue waiting times.
   Narrower widths also have implementation issues due to overflow
   (wrapping) and underflow (limit cycles because of truncation to
   zero) that are not addressed in this pseudocode.  The code
   presented here uses 0 as a flag value to indicate "no time set."

   packet_t* is a pointer to a packet descriptor.  We assume it has a
   tstamp field capable of holding a time_t and that the field is
   available for use by CoDel (it will be set by the enqueue routine
   and used by the dequeue routine).

   queue_t is a base class for queue objects (the parent class for
   codel_queue_t objects).  We assume it has enqueue() and dequeue()
   methods that can be implemented in child classes.  We assume it has
   a bytes() method that returns the current queue size in bytes.
   This can be an approximate value.  The method is invoked in the
   dequeue() method but shouldn't require a lock with the enqueue()
   method.

   flag_t is a Boolean.
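   For readers who want to treat the pseudo-code below as compilable
   C++, one minimal (and entirely hypothetical) scaffolding consistent
   with the descriptions above might look like the following; none of
   it is specified by this document:

   typedef int flag_t;               // Boolean

   typedef struct packet {
       time_t tstamp;                // set by enqueue(), read at dequeue
       // ... length, payload, etc.
   } packet_t;

   class queue_t {                   // basic tail-drop fifo-queue
     public:
       virtual void      enqueue(packet_t* pkt); // add pkt at tail
       virtual packet_t* dequeue();  // remove from head; NULL if empty
       u_int             bytes();    // approximate queue size in bytes
   };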
5.2.  Per-queue state (codel_queue_t instance variables)

   time_t first_above_time_; // Time to declare sojourn time above
                             // target
   time_t drop_next_;        // Time to drop next packet
   uint32_t count_;          // Packets dropped since entering drop
                             // state
   flag_t dropping_;         // Set to true if in drop state

5.3.  Constants

   time_t target = MS2TIME(5);     // 5ms target queue delay
   time_t interval = MS2TIME(100); // 100ms sliding-minimum window
   u_int maxpacket = 512;          // Maximum packet size in bytes
                                   // (should use interface MTU)

5.4.  Enqueue routine

   All the work of CoDel is done in the dequeue routine.  The only
   CoDel addition to enqueue is putting the current time in the
   packet's tstamp field so that the dequeue routine can compute the
   packet's sojourn time.

   void codel_queue_t::enqueue(packet_t* pkt)
   {
       pkt->tstamp = clock();
       queue_t::enqueue(pkt);
   }

5.5.  Dequeue routine

   This is the heart of CoDel.  There are two branches based on
   whether the controller is in dropping state: (i) if the controller
   is in dropping state (that is, the minimum packet sojourn time is
   greater than target), then the controller checks if it is time to
   leave dropping state or schedules the next drop(s); or (ii) if the
   controller is not in dropping state, it determines if it should
   enter dropping state and do the initial drop.

   packet_t* codel_queue_t::dequeue()
   {
       time_t now = clock();
       dodequeue_result r = dodequeue(now);

       if (dropping_) {
           if (! r.ok_to_drop) {
               // sojourn time below target - leave dropping state
               dropping_ = false;
           }
           // Time for the next drop.  Drop current packet and dequeue
           // next.  If the dequeue doesn't take us out of dropping
           // state, schedule the next drop.  A large backlog might
           // result in drop rates so high that the next drop should
           // happen now, hence the 'while' loop.
           while (now >= drop_next_ && dropping_) {
               drop(r.p);
               r = dodequeue(now);
               if (! r.ok_to_drop) {
                   // leave dropping state
                   dropping_ = false;
               } else {
                   ++count_;
                   // schedule the next drop.
                   drop_next_ = control_law(drop_next_);
               }
           }
       // If we get here we're not in dropping state.  The 'ok_to_drop'
       // return from dodequeue means that the sojourn time has been
       // above 'target' for 'interval' so enter dropping state.
       } else if (r.ok_to_drop) {
           drop(r.p);
           r = dodequeue(now);
           dropping_ = true;

           // If min went above target close to when it last went
           // below, assume that the drop rate that controlled the
           // queue on the last cycle is a good starting point to
           // control it now.  ('drop_next_' will be at most 'interval'
           // later than the time of the last drop, so 'now -
           // drop_next_' is a good approximation of the time from the
           // last drop until now.)
           count_ = (count_ > 2 && now - drop_next_ < 8*interval)?
                        count_ - 2 : 1;
           drop_next_ = control_law(now);
       }
       return (r.p);
   }

5.6.  Helper routines

   Since the degree of multiplexing and nature of the traffic sources
   is unknown, CoDel acts as a closed-loop servo system that gradually
   increases the frequency of dropping until the queue is controlled
   (sojourn time goes below target).  This is the control law that
   governs the servo.  It has this form because of the sqrt(p)
   dependence of TCP throughput on drop probability.  Note that for
   embedded systems or kernel implementations, the inverse sqrt can be
   computed efficiently using only integer multiplication.

   time_t codel_queue_t::control_law(time_t t)
   {
       return t + interval / sqrt(count_);
   }
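   As one illustration of that remark (a sketch of ours, not part of
   the reference pseudo-code; rec_inv_sqrt_ is a hypothetical extra
   instance variable), the Linux implementation takes a similar
   approach: because count_ changes only in small steps between drops,
   a single fixed-point Newton's-method step per change keeps a
   running estimate of 1/sqrt(count_) accurate, giving an integer-only
   alternative to the control_law() above:

   // Maintain rec_inv_sqrt_, a 16.16 fixed-point estimate of
   // 1/sqrt(count_), via one Newton-Raphson step per count_ change:
   //     x' = x * (3 - count * x^2) / 2
   // Initialize rec_inv_sqrt_ to (1 << 16) whenever count_ is reset
   // to 1.  (Production code, e.g., Linux, carries more fractional
   // bits; 16.16 is used here for clarity.)
   void codel_queue_t::newton_step()
   {
       uint64_t x = rec_inv_sqrt_;                // 16.16
       uint64_t x2 = (x * x) >> 16;               // x^2 in 16.16
       uint64_t val = (3ULL << 16) - count_ * x2; // 3 - count*x^2
       rec_inv_sqrt_ = (x * val) >> 17;           // x*val/2, 16.16
   }

   time_t codel_queue_t::control_law(time_t t)
   {
       // interval / sqrt(count_) using only integer multiplication
       return t + ((interval * rec_inv_sqrt_) >> 16);
   }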
   Next is a helper routine that does the actual packet dequeue and
   tracks whether the sojourn time is above or below target and, if
   above, if it has remained above continuously for at least interval.
   It returns two values: a Boolean indicating if it is OK to drop
   (sojourn time above target for at least interval) and the packet
   dequeued.

   typedef struct {
       packet_t* p;
       flag_t ok_to_drop;
   } dodequeue_result;

   dodequeue_result codel_queue_t::dodequeue(time_t now)
   {
       dodequeue_result r = { queue_t::dequeue(), false };
       if (r.p == NULL) {
           // queue is empty - we can't be above target
           first_above_time_ = 0;
           return r;
       }

       // To span a large range of bandwidths, CoDel runs two
       // different AQMs in parallel.  One is sojourn-time-based
       // and takes effect when the time to send an MTU-sized
       // packet is less than target.  The 1st term of the "if"
       // below does this.  The other is backlog-based and takes
       // effect when the time to send an MTU-sized packet is >=
       // target.  The goal here is to keep the output link
       // utilization high by never allowing the queue to get
       // smaller than the amount that arrives in a typical
       // interarrival time (MTU-sized packets arriving spaced
       // by the amount of time it takes to send such a packet on
       // the bottleneck).  The 2nd term of the "if" does this.
       time_t sojourn_time = now - r.p->tstamp;
       if (sojourn_time < target || bytes() <= maxpacket) {
           // went below - stay below for at least interval
           first_above_time_ = 0;
       } else {
           if (first_above_time_ == 0) {
               // just went above from below.  if still above at
               // first_above_time_, will say it's ok to drop.
               first_above_time_ = now + interval;
           } else if (now >= first_above_time_) {
               r.ok_to_drop = true;
           }
       }
       return r;
   }

5.7.  Implementation considerations

   Since CoDel requires relatively little per-queue state and no
   direct communication or state sharing between the enqueue and
   dequeue routines, it is relatively simple to add CoDel to almost
   any packet processing pipeline, including ASIC- or NPU-based
   forwarding engines.

   One issue to consider is dodequeue()'s use of a 'bytes()' function
   to determine the current queue size in bytes.  This value does not
   need to be exact.  If the enqueue part of the pipeline keeps a
   running count of the total number of bytes it has put into the
   queue and the dequeue routine keeps a running count of the total
   bytes it has removed from the queue, 'bytes()' is simply the
   difference between these two counters (32-bit counters should be
   adequate).  Enqueue has to update its counter once per packet
   queued, but it does not matter when (before, during, or after the
   packet has been added to the queue).  The worst that can happen is
   a slight, transient underestimate of the queue size, which might
   cause a drop to be briefly deferred.
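   A minimal sketch of that counter arrangement (ours; enq_bytes_ and
   deq_bytes_ are illustrative names) shows why no lock is needed:
   each side writes only its own monotonic counter, and unsigned
   subtraction gives the correct backlog even after a counter wraps:

   uint32_t enq_bytes_;  // total bytes ever enqueued
                         // (written by enqueue only)
   uint32_t deq_bytes_;  // total bytes ever dequeued
                         // (written by dequeue only)

   u_int codel_queue_t::bytes()
   {
       // modular (wrap-safe) difference of the two running counts
       return enq_bytes_ - deq_bytes_;
   }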
6. Adapting and applying CoDel's building blocks

   CoDel has been implemented and tested in a range of environments.

6.1. Validations and available code

   An experiment by Stanford graduate students successfully used Linux
   CoDel to duplicate our published simulation work on CoDel's ability
   to adapt to drastic link rate changes.  This experiment can be
   found at http://reproducingnetworkresearch.wordpress.com/2012/06/06/
   solving-bufferbloat-the-codel-way/ .

   Our ns-2 simulations are available at http://pollere.net/CoDel.html .
   CableLabs has funded some additions to the simulator sfqcodel code,
   which have been made public.  The basic algorithm of CoDel remains
   unchanged, but we continue to experiment with the drop interval
   setting when resuming the drop state, inhibiting or canceling drop
   state when few bytes are in the queue, and other details.  Our
   approach to changes is to make them only if we are convinced they
   do more good than harm, both operationally and in the
   implementation.  With this in mind, some of these issues may
   continue to evolve as we get more deployment and as the building
   blocks are applied to a wider range of problems.

   CoDel is available in ns-2 version 2.35 and later.

   Andrew McGregor has an ns-3 implementation of both CoDel and
   fq_codel.  CoDel is available in ns-3 version 3.21 and later at
   https://www.nsnam.org/ .  At the time of this writing, the ns-3
   implementation of fq_codel is available at
   https://www.nsnam.org/wiki/GSOC2014Bufferbloat .

   CoDel is available in Linux; Eric Dumazet implemented CoDel in the
   Linux kernel.

   Dave Taht has been instrumental in the integration and distribution
   of CoDel as a bufferbloat solution
   (http://www.bufferbloat.net/projects/codel ).

6.2. CoDel in the datacenter

   Nandita Dukkipati's team at Google was quick to realize that the
   CoDel building blocks could be applied to bufferbloat problems in
   datacenter servers, not just to Internet routers.  The Linux CoDel
   queueing discipline (qdisc) was adapted in three ways to tackle
   this bufferbloat problem.

   1.  The default CoDel action was modified to be a direct feedback
       from the qdisc to the TCP layer at dequeue.  The direct
       feedback simply reduces TCP's congestion window just as
       congestion control would do in the event of a drop.  The
       scheme falls back to ECN marking or packet drop if the TCP
       socket lock cannot be acquired at dequeue.

   2.  Being located in the server makes it possible to monitor the
       actual RTT to use as CoDel's interval rather than making a
       "best guess" of RTT.  The CoDel interval is dynamically
       adjusted by using the maximum TCP round-trip time (RTT) of
       those connections sharing the same qdisc/bucket.  In
       particular, there is a history entry of the maximum RTT
       experienced over the last second.  As a packet is dequeued,
       the RTT estimate is accessed from its TCP socket.  If the
       estimate is larger than the current CoDel interval, the CoDel
       interval is immediately refreshed to the new value.  If the
       CoDel interval is not refreshed for over a second, it is
       decreased to the history entry and the process is repeated.
       The use of the dynamic TCP RTT estimate lets the interval
       adapt to the actual maximum value currently seen and thus lets
       the controller space its drop intervals appropriately.  (A
       sketch of this adjustment follows the list below.)

   3.  Since the mathematics of computing the setpoint are invariant,
       a target of 5% of the RTT or CoDel interval was used here.
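   A hypothetical sketch of the interval adjustment described in item
   2 (the routine name, variable names, and structure are illustrative
   only; they are not taken from the Linux sources):

   // Called once per dequeued packet.  'conn_rtt' is the RTT
   // estimate read from the packet's TCP socket; 'rtt_history_'
   // holds the maximum RTT observed over the last second.
   void adjust_interval(time_t now, time_t conn_rtt)
   {
       if (conn_rtt > interval_) {
           // A connection reports a larger RTT: refresh immediately.
           interval_ = conn_rtt;
           last_refresh_ = now;
       } else if (now - last_refresh_ > MS2TIME(1000)) {
           // Not refreshed for over a second: decrease the interval
           // to the recent maximum and start a new measurement cycle.
           interval_ = rtt_history_;
           last_refresh_ = now;
       }
   }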
   Non-data packets were not dropped, as these are typically small and
   sometimes critical control packets.  Since the qdisc is located on
   the server, there is no concern with misbehaving users as there
   would be on the public Internet.

   In several datacenter workload benchmarks, which are typically
   bursty, CoDel reduced the queueing latencies at the qdisc and
   thereby improved the mean and 99th-percentile latencies from
   several tens of milliseconds to less than one millisecond.  The
   minimum-tracking part of the CoDel framework proved useful in
   disambiguating "good" queue versus "bad" queue, which was
   particularly helpful in controlling qdisc buffers that are
   inherently bursty because of TCP Segmentation Offload (TSO).

7. Security Considerations

   This document describes an active queue management algorithm for
   implementation in networked devices.  There are no specific
   security exposures associated with CoDel.

8. IANA Considerations

   This document does not require actions by IANA.

9. Conclusions

   CoDel provides very general, efficient, parameterless building
   blocks for queue management that can be applied to single or
   multiple queues in a variety of data networking scenarios.  It is
   a critical tool in solving bufferbloat.  CoDel's settings MAY be
   modified for other special-purpose networking applications.  We
   encourage experimentation, improvement, and rigorous testing.

   Ongoing projects are creating a deployable CoDel in Linux routers
   and experimenting with applying CoDel to stochastic queueing with
   promising results.

10. Acknowledgments

   The authors wish to thank Jim Gettys for the constructive nagging
   that made us get the work "out there" before we thought it was
   ready.  We also want to thank Dave Taht, Eric Dumazet, and the
   open source community for showing the value of getting it "out
   there" and for making it real.  We also wish to thank Nandita
   Dukkipati for contributions to Section 6 and for comments that
   helped to substantially improve this draft.

11. References

11.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

11.2. Informative References

   [FQ-CODEL-ID]
              Hoeiland-Joergensen, T., McKenney, P., Taht, D.,
              Gettys, J., and E. Dumazet, "FlowQueue-Codel",
              draft-ietf-aqm-fq-codel-03 (work in progress),
              November 2015.

   [RFC2581]  Allman, M., Paxson, V., and W. Stevens, "TCP Congestion
              Control", RFC 2581, April 1999.

   [RFC0896]  Nagle, J., "Congestion Control in IP/TCP Internetworks",
              RFC 896, January 1984.

   [RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B.,
              Deering, S., Estrin, D., Floyd, S., Jacobson, V.,
              Minshall, G., Partridge, C., Peterson, L.,
              Ramakrishnan, K., Shenker, S., Wroclawski, J., and L.
              Zhang, "Recommendations on Queue Management and
              Congestion Avoidance in the Internet", RFC 2309,
              April 1998.

   [TSV2011]  Gettys, J., "Bufferbloat: Dark Buffers in the Internet",
              IETF 80 presentation to the Transport Area Open Meeting,
              March 2011.

   [BB2011]   Gettys, J. and K. Nichols, "Bufferbloat: Dark Buffers in
              the Internet", Communications of the ACM Vol. 55 No. 1,
              January 2012, pp. 57-65.

   [BMPFQ]    Suter, B., "Buffer Management Schemes for Supporting TCP
              in Gigabit Routers with Per-flow Queueing", IEEE Journal
              on Selected Areas in Communications Vol. 17 Issue 6,
              June 1999, pp. 1159-1169.
   [CMNTS]    Allman, M., "Comments on Bufferbloat", Computer
              Communication Review Vol. 43 No. 1, January 2013,
              pp. 31-37.

   [CODEL2012]
              Nichols, K. and V. Jacobson, "Controlling Queue Delay",
              Communications of the ACM Vol. 55 No. 7, July 2012,
              pp. 42-50.

   [VANQ2006] Jacobson, V., "A Rant on Queues", talk at MIT Lincoln
              Labs, Lexington, MA, July 2006.

   [REDL1998] Nichols, K., Jacobson, V., and K. Poduri, "RED in a
              Different Light", Tech report, September 1999.

   [NETAL2010]
              Kreibich, C., et al., "Netalyzr: Illuminating the Edge
              Network", Proceedings of the Internet Measurement
              Conference, Melbourne, Australia, 2010.

   [TSV84]    Jacobson, V., "CoDel talk at TSV meeting IETF 84".

   [CHARB2007]
              Dischinger, M., et al., "Characterizing Residential
              Broadband Networks", Proceedings of the Internet
              Measurement Conference, San Diego, CA, 2007.

   [MACTCP1997]
              Mathis, M., Semke, J., and J. Mahdavi, "The Macroscopic
              Behavior of the TCP Congestion Avoidance Algorithm", ACM
              SIGCOMM Computer Communication Review Vol. 27 No. 3,
              July 1997.

   [SFQ1990]  McKenney, P., "Stochastic Fairness Queuing", Proceedings
              of IEEE INFOCOM '90, San Francisco, 1990.

   [KLEIN81]  Kleinrock, L. and R. Gail, "An Invariant Property of
              Computer Network Power", International Conference on
              Communications, June 1981.

Authors' Addresses

   Kathleen Nichols
   Pollere, Inc.
   PO Box 370201
   Montara, CA  94037
   USA

   Email: nichols@pollere.com

   Van Jacobson
   Google

   Email: vanj@google.com

   Andrew McGregor
   Google

   Email: andrewmcgr@google.com

   Janardhan Iyengar
   Google

   Email: jri@google.com