AQM                                                          K. Nichols
Internet-Draft                                            Pollere, Inc.
Intended status: Informational                              V. Jacobson
Expires: December 4, 2016                              A. McGregor, ed.
                                                         J. Iyengar, ed.
                                                                  Google
                                                            June 2, 2016

              Controlled Delay Active Queue Management
                       draft-ietf-aqm-codel-04

Abstract

   This document describes CoDel (Controlled Delay), a general
   framework that controls bufferbloat-generated excess delay in
   modern networking environments.  CoDel consists of an estimator, a
   setpoint, and a control loop.  It requires no configuration in
   normal Internet deployments.  CoDel comprises some major technical
   innovations and has been made available as open source so that the
   framework can be applied by the community to a range of problems.
   It has been implemented in Linux (and is available in the Linux
   distribution) and deployed in some networks at the consumer edge.
   In addition, the framework has been successfully applied in other
   ways.

   Note: Code Components extracted from this document must include the
   license as included with the code in Section 5.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.
   The list of current Internet-Drafts is at
   http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on December 4, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Conventions used in this document
   3.  Building Blocks of Queue Management
       3.1.  Estimator
       3.2.  Setpoint
       3.3.  Control Loop
   4.  Putting it together: queue management for the network edge
       4.1.  Overview of CoDel AQM
       4.2.  Non-starvation
       4.3.  Using the interval
       4.4.  The target setpoint
       4.5.  Use with multiple queues
       4.6.  Use of stochastic bins or sub-queues to improve
             performance
       4.7.  Setting up CoDel AQM
   5.  Annotated Pseudo-code for CoDel AQM
       5.1.  Data Types
       5.2.  Per-queue state (codel_queue_t instance variables)
       5.3.  Constants
       5.4.  Enqueue routine
       5.5.  Dequeue routine
       5.6.  Helper routines
       5.7.  Implementation considerations
   6.  Adapting and applying CoDel's building blocks
       6.1.  Validations and available code
       6.2.  CoDel in the datacenter
   7.  Security Considerations
   8.  IANA Considerations
   9.  Conclusions
   10. Acknowledgments
   11. References
       11.1.  Normative References
       11.2.  Informative References
   Authors' Addresses

1.  Introduction

   The need for queue management has been evident for decades.
   The "persistently full buffer" problem has been discussed in the
   IETF community since the early 1980s [RFC0896].  The IRTF's End-to-
   End Working Group called for the deployment of active queue
   management (AQM) to solve the problem in 1998 [RFC2309].  Despite
   this awareness, the problem has only gotten worse as Moore's Law
   growth in memory density fueled an exponential increase in buffer
   pool size.  Efforts to deploy AQM have been frustrated by difficult
   configuration and negative impact on network utilization.  This
   problem, recently christened "bufferbloat" [TSV2011] [BB2011], has
   become increasingly important throughout the Internet but
   particularly at the consumer edge.  Queue management has become
   more critical recently due to increased consumer use of the
   Internet, mixing large video transactions with time-critical VoIP
   and gaming.  Gettys [TSV2011] [BB2011] has been instrumental in
   publicizing the problem, in the associated measurement work
   [CHARB2007] [NETAL2010], and in coining the term "bufferbloat".
   Large content distributors such as Google have observed that
   bufferbloat is ubiquitous and adversely affects performance for
   many users.  The solution is an effective AQM that remediates
   bufferbloat at a bottleneck while "doing no harm" at hops where
   buffers are not bloated.

   The development and deployment of effective active queue management
   has been hampered by persistent misconceptions about the cause and
   meaning of packet queues in network buffers.  Network buffers exist
   to absorb the packet bursts that occur naturally in statistically
   multiplexed networks.  Buffers helpfully absorb the queues created
   by such reasonable packet network behavior as short-term mismatches
   in traffic arrival and departure rates that arise from upstream
   resource contention, transport conversation startup transients,
   and/or changes in the number of conversations sharing a link.
   Unfortunately, other less useful network behaviors can cause queues
   to fill, and their effects are not nearly as benign.  Discussion of
   these issues and the reason why the solution is not simply smaller
   buffers can be found in [RFC2309], [VANQ2006], [REDL1998], and
   [CODEL2012].  To understand queue management, it is critical to
   understand the difference between the necessary, useful "good"
   queue and the counterproductive "bad" queue.

   Many approaches to active queue management (AQM) have been
   developed over the past two decades, but none has been widely
   deployed due to performance problems.  When designed with the wrong
   conceptual model for queues, AQMs have limited operational range,
   require a lot of configuration tweaking, and frequently impair
   rather than improve performance.  Today, the demands on an
   effective AQM are even greater: many network devices must work
   across a range of bandwidths, either due to link variations or due
   to the mobility of the device.
   The CoDel approach is designed to meet the following goals:

   o  parameterless for normal operation - has no knobs for operators,
      users, or implementers to adjust

   o  treat "good queue" and "bad queue" differently, that is, keep
      delay low while permitting necessary bursts of traffic

   o  control delay while remaining insensitive (or nearly so) to
      round-trip delays, link rates, and traffic loads; this goal is
      to "do no harm" to network traffic while controlling delay

   o  adapt to dynamically changing link rates with no negative impact
      on utilization

   o  simple and efficient implementation (can easily span the
      spectrum from low-end, Linux-based access points and home
      routers up to high-end commercial router silicon)

   CoDel has five major innovations that distinguish it from prior
   AQMs: use of the local queue minimum to track congestion ("bad
   queue"); use of an efficient single state variable to represent
   that tracked statistic; use of packet sojourn time as the observed
   datum, rather than packets, bytes, or rates; a setpoint derived
   mathematically by maximizing the network power metric; and use of a
   modern state-space controller.

   CoDel configures itself based on a round-trip time metric that can
   be set to 100ms for the normal, terrestrial Internet.  With no
   changes to parameters, we have found CoDel to work across a wide
   range of conditions, with varying links and the full range of
   terrestrial round-trip times.

   Since CoDel was first published [CODEL2012], a number of
   implementers have been using and adapting it with promising
   results.  Much of this work is collected at
   http://www.bufferbloat.net/projects/codel .  CoDel has been
   implemented in Linux very efficiently and should lend itself to
   silicon implementation.  CoDel is well adapted for use in multiple
   queued devices and has been used by Eric Dumazet with multiple
   queues in a sophisticated queue management approach, fq_codel
   [FQ-CODEL-ID].

2.  Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

   In this document, these words will appear with that interpretation
   only when in ALL CAPS.  Lowercase uses of these words are not to be
   interpreted as carrying [RFC2119] significance.

3.  Building Blocks of Queue Management

   Two decades of work on queue management failed to yield an approach
   that could be widely deployed in the Internet.  Careful tuning for
   particular usages has enabled queue management techniques to "kind
   of" work; that is, they have been able to decrease queueing delays,
   but only at the undue cost of link utilization and/or fairness.  At
   the heart of queue management is the notion of "good queue" and
   "bad queue" and the search for ways to get rid of the bad queue
   (which only adds delay) while preserving the good queue (which
   provides for good utilization).  This section explains queueing,
   both good and bad, and covers the innovative CoDel building blocks
   that can be used to manage packet buffers to keep their queues in
   the "good" range.

   Packet queues form in buffers facing bottleneck links, i.e., where
   the line rate goes from high to low or where many links converge.
   The well-known bandwidth-delay product (sometimes called "pipe
   size") is the bottleneck's bandwidth multiplied by the sender-
   receiver-sender round-trip delay; it is the amount of data that has
   to be in transit between two hosts in order to run the bottleneck
   link at 100% utilization.  To explore how queues can form, consider
   a long-lived TCP connection with a 25-packet window sending through
   a connection with a bandwidth-delay product of 20 packets.  After
   an initial burst of packets, the connection will settle into a
   five-packet (+/-1) standing queue; this standing queue size is
   determined by the mismatch between the window size and the pipe
   size and is unrelated to the connection's sending rate.  The
   connection has 25 packets in flight at all times, but only 20
   packets arrive at the destination over a round-trip time.  If the
   TCP connection has a 30-packet window, the queue will be ten
   packets with no change in sending rate.  Similarly, if the window
   is 20 packets, there will be no queue, but the sending rate is the
   same.  Nothing can be inferred about the sending rate from the
   queue size, and any queue other than transient bursts only creates
   delays in the network.  The sender needs to reduce the number of
   packets in flight rather than the sending rate.

   In the above example, the five-packet standing queue can be seen to
   contribute nothing but delay to the connection and thus is clearly
   "bad queue".  If, in our example, there is a single bottleneck link
   and it is much slower than the link that feeds it (say, a high-
   speed ethernet link into a limited DSL uplink), a 20-packet buffer
   at the bottleneck might be necessary to temporarily hold the 20
   packets in flight to keep the bottleneck link's utilization high.
   The burst of packets should drain completely (to 0 or 1 packets)
   within a round-trip time, and this transient queue is "good queue"
   because it allows the connection to keep the 20 packets in flight
   and the bottleneck link to be fully utilized.  In terms of the
   delay experienced, the "good queue" goes away in about a round-trip
   time, while "bad queue" hangs around for longer, causing delays.

   Effective queue management detects "bad queue" while ignoring "good
   queue" and takes action to get rid of the bad queue when it is
   detected.  The goal is a queue controller that accomplishes this
   objective.  To control a queue, we need three basic components
   (sketched in code after this list):

   o  Estimator - figure out what we've got

   o  Setpoint - know what we want

   o  Control loop - if what we've got isn't what we want, we need a
      way to move it there
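   As a mental model only, the three building blocks can be pictured
   as a small interface.  The sketch below is ours and hypothetical;
   the actual reference pseudo-code appears in Section 5.

   struct packet_t;               // supplied by the host system
   typedef long long time_t;      // see Section 5.1

   // Illustrative decomposition of an AQM into CoDel's three
   // building blocks; a sketch, not the reference implementation.
   struct aqm_t {
       // Estimator: reduce queue observations to a single congestion
       // statistic (for CoDel, the windowed minimum of per-packet
       // sojourn times).
       virtual time_t estimate() = 0;
       // Setpoint: the largest acceptable value of that statistic
       // (for CoDel, 5% of a worst-case round-trip time).
       virtual time_t setpoint() = 0;
       // Control loop: runs at dequeue and drops (or marks) packets
       // until estimate() falls back below setpoint().
       virtual packet_t* control_dequeue(time_t now) = 0;
       virtual ~aqm_t() {}
   };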
3.1.  Estimator

   The estimator both observes the queue and detects when good queue
   turns to bad queue and vice versa.  CoDel has two innovations in
   its estimator: what is observed as an indicator of queue and how
   the observations are used to detect good/bad queue.

   Queue length has been widely used as an observed indicator of
   congestion and is frequently conflated with sending rate.  Use of
   queue length as a metric is sensitive to how and when the length is
   observed.  A high-speed arrival link to a buffer serviced at a much
   lower rate can rapidly build up a queue that might disperse
   completely or down to a single packet before a round-trip time has
   elapsed.  If the queue length is monitored at packet arrival (as in
   original RED) or departure time, every packet will see a queue,
   with one possible exception.  If the queue length itself is time
   sampled (as recommended in [REDL1998]), a truer picture of the
   queue's occupancy can be gained, at the expense of considerable
   implementation complexity.

   The use of queue length is further complicated in networks that are
   subject to both short- and long-term changes in available link rate
   (as in WiFi).  Link rate drops can result in a spike in queue
   length that should be ignored unless it persists.  It is not the
   queue length that should be controlled but the amount of excess
   delay packets experience due to a persistent or standing queue,
   which means that the packet sojourn time in the buffer is exactly
   what we want to track.  Tracking the packet sojourn times in the
   buffer observes the actual delay experienced by each packet.
   Sojourn time allows queue management to be independent of link
   rate, gives superior performance to use of buffer size, and is
   directly related to user-visible performance.  It works regardless
   of line rate changes or link sharing by multiple queues (which the
   individual queues may experience as changing rates).

   Consider a link shared by two queues with different priorities.
   Packets that arrive at the high-priority queue are sent as soon as
   the link is available, while packets in the other queue have to
   wait until the high-priority queue is empty (i.e., a strict
   priority scheduler).  The number of packets in the high-priority
   queue might be large, but the queue is emptied quickly, and the
   amount of time each packet spends enqueued (the sojourn time) is
   not large.  The other queue might have a smaller number of packets,
   but packet sojourn times will include the waiting time for the
   high-priority packets to be sent.  This makes the sojourn time a
   good sample of the congestion that each separate queue is
   experiencing.  This example also shows how the metric of sojourn
   time is independent of the number of queues or the service
   discipline used and is instead indicative of the congestion seen by
   the individual queues.

   How can observed sojourn time be used to separate good queue from
   bad queue?  Although averages, especially of queue length, have
   previously been widely used as an indicator of bad queue, their
   efficacy is questionable.  Consider the burst that disperses every
   round-trip time.  The average queue will be one-half the burst
   size, though this might vary depending on when the average is
   computed and the timing of arrivals.  The average queue sojourn
   time would be one-half the time it takes to clear the burst.  The
   average would thus indicate a persistent queue where there is none.
   Instead of averages, we recommend tracking the minimum sojourn
   time: if even one packet has experienced a zero sojourn time, then
   there is no persistent queue.  The value of the minimum in
   detecting persistent queue is apparent when looking at graphs of
   queue delay.
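   The difference is easy to see on a toy example.  The sketch below
   is our illustration with made-up numbers, not part of the reference
   code: a transient 20-packet burst arrives at an idle bottleneck
   that sends one packet per time unit, so packet i waits i units and
   the burst drains within the "round-trip time" of 20 units.

   #include <cstdio>

   int main() {
       const int burst = 20;
       double sum = 0.0, min = 1e9;
       for (int i = 0; i < burst; i++) {
           double sojourn = i;          // waiting time of packet i
           sum += sojourn;
           if (sojourn < min) min = sojourn;
       }
       // Average ~ half the burst: falsely suggests a standing queue.
       // Minimum over the window = 0: correctly reports none.
       printf("avg = %.1f  min = %.1f\n", sum / burst, min);
       return 0;
   }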
   A persistent queue can be detected by tracking the (local) minimum
   queue delay packets experience.  To ensure that this minimum value
   does not become stale, it has to have been experienced recently,
   i.e., during an appropriate past time interval.  This interval is
   the maximum amount of time a minimum value is considered to be in
   effect and is related to the amount of time it takes for the
   largest expected burst to drain.  Conservatively, this interval
   should be at least a round-trip time to avoid falsely detecting a
   persistent queue and not a lot more than a round-trip time to avoid
   delay in detecting the persistent queue.  This suggests that the
   appropriate interval value is the maximum round-trip time of all
   the connections sharing the buffer.

   (The following key insight makes computation of the local minimum
   efficient: it is sufficient to keep a single state variable
   tracking how long the minimum has been above or below a target
   value, rather than retaining all the local values to compute the
   minimum, leading to both storage and computational savings.  We use
   this insight in the pseudo-code for CoDel later in this document.)

   These two innovations - use of sojourn time as the observed value
   and use of the local minimum as the statistic to monitor queue
   congestion - are key to CoDel's estimator building block.  The
   local minimum sojourn time provides an accurate and robust measure
   of standing queue and has an efficient implementation.  In
   addition, use of the minimum sojourn time has important advantages
   in implementation.  The minimum packet sojourn can only be
   decreased when a packet is dequeued, which means that all the work
   of CoDel can take place when packets are dequeued for transmission
   and that no locks are needed in the implementation.  The minimum is
   the only statistic with this property.

   A more detailed explanation with many pictures can be found in
   http://pollere.net/Pdfdocs/QrantJul06.pdf and
   http://www.ietf.org/proceedings/84/slides/slides-84-tsvarea-4.pdf .

3.2.  Setpoint

   Now that we have a robust way of detecting standing queue, we need
   a setpoint that tells us when to act.  If the controller is set to
   take action as soon as the estimator has a non-zero value, the
   average drop rate will be maximized, which minimizes TCP goodput
   [MACTCP1997].  Also, this policy results in no backlog over time
   (no persistent queue), which negates much of the value of having a
   buffer, since it maximizes the bottleneck link bandwidth lost to
   normal stochastic variation in packet interarrival time.  We want a
   setpoint that maximizes utilization while minimizing delay.  Early
   in the history of packet networking, Kleinrock developed the
   analytic machinery to do this using a quantity he called "power",
   which is the ratio of a normalized throughput to a normalized delay
   [KLEIN81].

   It is straightforward to derive an analytic expression for the
   average goodput of a TCP conversation at a given round-trip time r
   and setpoint f (where f is expressed as a fraction of r).  Reno
   TCP, for example, yields:

      goodput = r (3 + 6f - f^2) / (4 (1 + f))

   Since the peak queue delay is simply f r, power is solely a
   function of f since the r's in the numerator and denominator
   cancel:

      power is proportional to (1 + 2f - 1/3 f^2) / (1 + f)^2

   As Kleinrock observed, the best operating point, in terms of
   bandwidth / delay tradeoff, is the peak power point, since points
   off the peak represent a higher cost (in delay) per unit of
   bandwidth.  The power vs. f curve for any AIMD TCP is monotone
   decreasing.
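   The shape of this curve is easy to check numerically.  The short
   sketch below is our illustration; it simply evaluates the power
   expression above for a few candidate setpoints:

   #include <cstdio>

   int main() {
       const double fs[] = { 0.01, 0.05, 0.1, 0.2, 0.5, 1.0 };
       for (double f : fs) {
           // power ~ (1 + 2f - f^2/3) / (1 + f)^2
           double power = (1 + 2*f - f*f/3) / ((1 + f) * (1 + f));
           printf("f = %4.2f   power = %5.3f\n", f, power);
       }
       return 0;
   }

   The output (1.000, 0.997, 0.989, 0.963, 0.852, 0.667) makes the
   flat top and the subsequent falloff described next easy to see.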
   The curve is very flat for f < 0.1, followed by increasing
   curvature with a knee around f = 0.2 and then a steep, almost
   linear falloff [TSV84].  Since the previous equation showed that
   goodput is monotone increasing with f, the best operating point is
   near the right edge of the flat top, since that represents the
   highest goodput achievable for a negligible increase in delay.
   However, since the r in the model is a conservative upper bound, a
   target of 0.1r runs the risk of pushing shorter-RTT connections
   over the knee and giving them higher delay for no significant
   goodput increase.  Generally, a more conservative target of 0.05r
   offers a good utilization vs. delay tradeoff while giving enough
   headroom to work well with a large variation in real RTT.

   As the above analysis shows, a very small standing queue gives
   close to 100% utilization of the bottleneck link.  While this
   result was for Reno TCP, the derivation uses only properties that
   must hold for any "TCP friendly" transport.  We have verified by
   both analysis and simulation that this result holds for Reno,
   Cubic, and Westwood [TSV84].  This gives a particularly simple form
   for the setpoint: the ideal range for the permitted standing queue
   is between 5% and 10% of the TCP connection's RTT.  Thus, target is
   simply 5% of the interval of Section 3.1.

3.3.  Control Loop

   Section 3.1 describes a simple, reliable way to measure bad
   (persistent) queue.  Section 3.2 shows that TCP congestion control
   dynamics gives rise to a setpoint for this measure that's a
   provably good balance between enhancing throughput and minimizing
   delay, and that this setpoint is a constant fraction of the same
   "largest average RTT" interval used to distinguish persistent from
   transient queue.  The only remaining building block needed for a
   basic AQM is a "control loop" algorithm to effectively drive the
   queueing system from any "persistent queue above target" state to a
   state where the persistent queue is below target.

   Control theory provides a wealth of approaches to the design of
   control loops.  Most of classical control theory deals with the
   control of linear, time-invariant, single-input-single-output
   (SISO) systems.  Control loops for these systems generally come
   from a (well-understood) class known as Proportional-Integral-
   Derivative (PID) controllers.  Unfortunately, a queue is not a
   linear system, and an AQM operates at the point of maximum non-
   linearity (where the output link bandwidth saturates, so increased
   demand creates delay rather than higher utilization).  Output
   queues are also not time-invariant since traffic is generally a mix
   of connections that start and stop at arbitrary times and that can
   have radically different behaviors, ranging from "open-loop" UDP
   audio/video to "closed-loop" congestion-avoiding TCP.  Finally, the
   constantly changing mix of connections (which can't be converted to
   a single "lumped parameter" model because of their transfer
   function differences) makes the system multi-input-multi-output
   (MIMO), not SISO.

   Since queueing systems match none of the prerequisites for a
   classical controller, a modern state-space controller is a better
   approach, with states "no persistent queue" and "has persistent
   queue".
   Since Internet traffic mixtures change rapidly and unpredictably, a
   noise- and error-tolerant adaptation algorithm like stochastic
   gradient is a good choice.  Since there's essentially no
   information in the amount of persistent queue [TSV84], the
   adaptation should be driven by how long it has persisted.

   Consider the two extremes of traffic behavior, a single open-loop
   UDP video stream and a single, long-lived TCP bulk data transfer.
   If the average bandwidth of the UDP video stream is greater than
   the bottleneck link rate, the link's queue will grow, and the
   controller will eventually enter "has persistent queue" state and
   start dropping packets.  Since the video stream is open loop, its
   arrival rate is unaffected by drops, so the queue will persist
   until the average drop rate is greater than the output bandwidth
   deficit (= average arrival rate - average departure rate); the job
   of the adaptation algorithm is to discover this rate.  For this
   example, the adaptation could consist of simply estimating the
   arrival and departure rates and then dropping at a rate slightly
   greater than their difference.  But this class of algorithm won't
   work at all for the bulk data TCP stream.  TCP runs in closed-loop
   flow balance [TSV84], so its arrival rate is almost always exactly
   equal to the departure rate - the queue isn't the result of a rate
   imbalance but rather a mismatch between the TCP sender's window and
   the source-destination-source round-trip path capacity (i.e., the
   connection's bandwidth-delay product).  The sender's TCP congestion
   avoidance algorithm will slowly increase the send window (one
   packet per round-trip time) [RFC2581], which will eventually cause
   the bottleneck to enter "has persistent queue" state.  But, since
   the average input rate is the same as the average output rate, the
   rate deficit estimation that gave the correct drop rate for the
   video stream would compute a drop rate of zero for the TCP stream.
   However, if the output link drops one packet as it enters "has
   persistent queue" state, when the sender discovers this (via TCP's
   normal packet loss repair mechanisms), it will reduce its window by
   a factor of two [RFC2581]; so, one round-trip time after the drop,
   the persistent queue will go away.

   If there were N TCP conversations sharing the bottleneck, the
   controller would have to drop O(N) packets, one from each
   conversation, to make all the conversations reduce their window and
   get rid of the persistent queue.  If the traffic mix consists of
   short (<= bandwidth-delay product) conversations, the aggregate
   behavior becomes more like the open-loop video example, since each
   conversation is likely to have already sent all its packets by the
   time it learns about a drop, so each drop has negligible effect on
   subsequent traffic.

   The controller does not know the number, duration, or kind of
   conversations creating its queue, so it has to learn the
   appropriate response.  Since single drops can have a large effect
   if the degree of multiplexing (the number of active conversations)
   is small, dropping at too high a rate is likely to have a
   catastrophic effect on throughput.  Dropping at a low rate (< 1
   packet per round-trip time) and then increasing the drop rate
   slowly until the persistent queue goes below target is unlikely to
   overdrop and is guaranteed to eventually dissipate the persistent
   queue.
   This stochastic gradient learning procedure is the core of CoDel's
   control loop (the gradient exists because a drop always reduces the
   (instantaneous) queue, so an increasing drop rate always moves the
   system "down" toward no persistent queue, regardless of traffic
   mix).

   The "next drop time" is decreased in inverse proportion to the
   square root of the number of drops since the dropping state was
   entered, using the well-known nonlinear relationship of drop rate
   to throughput to get a linear change in throughput [REDL1998],
   [MACTCP1997].

   Since the best rate to start dropping is at slightly more than one
   packet per RTT, the controller's initial drop rate can be directly
   derived from the estimator's interval, defined in Section 3.1.
   When the minimum sojourn time first crosses the target and CoDel
   drops a packet, the earliest the controller could see the effect of
   the drop is the round-trip time (interval) + the local queue wait
   time (target).  If the next drop happens any earlier than this time
   (interval + target), CoDel will overdrop.  In practice, the local
   queue waiting time tends to vary, so making the initial drop
   spacing (i.e., the time to the second drop) be exactly the minimum
   possible also leads to overdropping.  Analysis of simulation and
   real-world measured data shows that the 75th percentile magnitude
   of this variation is less than the target, so the initial drop
   spacing SHOULD be set to the estimator's interval plus twice the
   target (i.e., initial drop spacing = 1.1 * interval) to ensure that
   the controller has accounted for acceptable congestion delays.

   Use of the minimum statistic lets the controller be placed in the
   dequeue routine with the estimator.  This means that the control
   signal (the drop) can be sent at the first sign of bad queue (as
   indicated by the sojourn time) and that the controller can stop
   acting as soon as the sojourn time falls below the setpoint.
   Dropping at dequeue has both implementation and control advantages.
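   To make the resulting schedule concrete, the sketch below (our
   illustration) prints the first few drop times implied by the
   square-root control law of Section 5.6 for interval = 100ms; the
   1.1 * interval initial-spacing refinement described above is
   omitted for simplicity:

   #include <cstdio>
   #include <cmath>

   int main() {
       const double interval = 100.0;   // ms
       double t = 0.0;                  // time of the first drop
       for (int count = 1; count <= 6; count++) {
           t += interval / sqrt(count); // spacing shrinks as 1/sqrt
           printf("drop %d at t = %6.1f ms\n", count + 1, t);
       }
       return 0;
   }

   The successive spacings (100, 70.7, 57.7, 50.0, 44.7, 40.8 ms) show
   the gradually increasing drop rate: gentle at first but guaranteed
   to grow until the persistent queue dissipates.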
4.  Putting it together: queue management for the network edge

   CoDel was initially designed as a bufferbloat solution for the
   consumer network edge.  The CoDel building blocks are able to adapt
   to different or time-varying link rates, to be easily used with
   multiple queues, to have excellent utilization with low delay, and
   to have a simple and efficient implementation.  The only setting
   CoDel requires is its interval value, and as 100ms satisfies that
   definition for normal Internet usage, CoDel can be parameter-free
   for consumer use.  CoDel was released to the open source community,
   where it has been widely promulgated and adapted to many problems.
   CoDel's efficient implementation and lack of configuration are
   unique features and make it suitable to manage modern packet
   buffers.  For more background and results on CoDel, see [CODEL2012]
   and http://pollere.net/CoDel.html .

4.1.  Overview of CoDel AQM

   To ensure that link utilization is not adversely affected, CoDel's
   estimator sets its target to the setpoint that optimizes power, and
   CoDel's controller does not drop packets when the drop would leave
   the queue empty or with fewer than a maximum transmission unit
   (MTU) worth of bytes in the buffer.  Section 3.2 showed that the
   ideal setpoint is 5-10% of the connection RTT.  In the open,
   terrestrial-based Internet, especially at the consumer edge, we
   expect most unbloated RTTs to have a ceiling of 100ms [CHARB2007].
   Using this RTT gives a minimum target of 5ms, and the interval for
   tracking the minimum is 100ms.  In practice, uncongested links will
   see sojourn times below target more often than once per RTT, so the
   estimator is not overly sensitive to the value of the interval.

   When the estimator finds a persistent delay above target, the
   controller enters the drop state, where a packet is dropped and the
   next drop time is set.  As discussed in Section 3.3, the initial
   next drop spacing is intended to be long enough to give the
   endpoints time to react to the single drop, so it SHOULD be set to
   a value of 1.1 times the interval.  If the estimator's output falls
   below target, the controller cancels the next drop and exits the
   drop state.  (The controller is more sensitive than the estimator
   to an overly short interval, since an unnecessary drop would occur
   and lower link utilization.)  If the next drop time is reached
   while the controller is still in drop state, the packet being
   dequeued is dropped, and the next drop time is recalculated.
   Additional logic prevents re-entering the dropping state too soon
   after exiting it and resumes the dropping state at a recent control
   level, if one exists.

   Note that CoDel AQM only enters its dropping state when the local
   minimum sojourn delay has exceeded target for a time interval long
   enough for normal bursts to dissipate, ensuring that a burst of
   packets that fits in the pipe will not be dropped.

4.2.  Non-starvation

   CoDel's goals are to control delay with little or no impact on link
   utilization and to be deployed on a wide range of link bandwidths,
   including variable-rate links, without reconfiguration.  To keep
   from making drops when it would starve the output link, CoDel makes
   another check before dropping to see if at least an MTU worth of
   bytes remains in the buffer.  If not, the packet SHOULD NOT be
   dropped, and, therefore, CoDel exits the drop state.  The MTU size
   can be set adaptively to the largest packet seen so far or can be
   read from the driver.

4.3.  Using the interval

   The interval is chosen to give endpoints time to react to a drop
   without being so long that response times suffer.  CoDel's
   estimator, setpoint, and control loop all use the interval.
   Understanding their derivation shows that CoDel is the most
   sensitive to the value of interval for single long-lived TCPs, with
   a decreased sensitivity for traffic mixes.  This is fortunate, as
   RTTs vary across connections and are not known a priori.  The best
   policy is to use an interval slightly larger than the RTT seen by
   most of the connections using a link, a value that can be
   determined as the largest RTT seen if the value is not an outlier
   (use of a 95th-99th percentile value should work).  In practice,
   this value is not known or measured (though see Section 6.2 for an
   application where the interval is measured).  An interval setting
   of 100ms works well across a range of RTTs from 10ms to 1 second
   (excellent performance is achieved in the range from 10ms to
   300ms).  For devices intended for the normal terrestrial Internet,
   interval SHOULD have a value of 100ms.
   This will only cause overdropping where a long-lived TCP has an RTT
   longer than 100ms and there is little or no mixing with other
   connections through the link.

   Some confusion concerns the roles of the target setpoint and the
   minimum-tracking interval.  In particular, some experimenters
   believe the value of target needs to be increased when the lower
   layers have a bursty nature, where packets are transmitted for
   short periods interspersed with idle periods where the link is
   waiting for permission to send.  CoDel's estimator will "see" the
   effective transmission rate over an interval, and increasing target
   will just lead to longer queue delays.  On the other hand, where a
   significant additional delay is added to the intrinsic round-trip
   time of most or all packets due to the waiting time for a
   transmission, it is necessary to increase interval by that extra
   delay.  That is, target SHOULD NOT be adjusted, but interval MAY
   need to be adjusted.  For more on this (and pictures), see
   http://pollere.net/Pdfdocs/noteburstymacs.pdf .

4.4.  The target setpoint

   The target is the maximum acceptable persistent queue delay, above
   which CoDel is dropping or preparing to drop and below which CoDel
   will not drop.  The calculations of Section 3.2 showed that the
   best setpoint is 5-10% of the RTT, with the low end of 5%
   preferred.  We used simulation to explore the impact when TCPs are
   mixed with other traffic and with connections of different RTTs.
   Accordingly, we experimented extensively with values in the 5-10%
   of RTT range and, overall, used target values between 1 and 20
   milliseconds for RTTs from 30 to 500ms and link bandwidths of
   64Kbps to 100Mbps to experimentally explore the setpoint that gives
   consistently high utilization while controlling delay across a
   range of bandwidths, RTTs, and traffic loads.  Our results were
   notably consistent with the mathematics of Section 3.2.  Below a
   target of 5ms, utilization suffers for some conditions and traffic
   loads; above 5ms, we saw very little or no improvement in
   utilization.  Thus, target SHOULD be set to 5ms for normal Internet
   traffic.

   A congested (but not overloaded) CoDel link with traffic composed
   solely or primarily of long-lived TCP flows will have a median
   delay through the link that tends to the target.  For bursty
   traffic loads and for overloaded conditions (where it is difficult
   or impossible for all the arriving flows to be accommodated), the
   median queues will be longer than target.

   The non-starvation drop inhibit feature dominates where the link
   rate becomes very small.  By inhibiting drops when there is less
   than an (outbound link) MTU worth of bytes in the buffer, CoDel
   adapts to very low bandwidth links.  This is shown in [CODEL2012],
   and interested parties should see the discussion of results there.
   Unpublished studies were carried out down to 64Kbps.  The drop
   inhibit condition can be expanded to include a test to retain
   sufficient bytes or packets to fill an allocation in a request-and-
   grant MAC.

   Sojourn times must remain above the target for an entire interval
   in order to enter the drop state.  Any packet with a sojourn time
   less than the target will reset the time that the queue was last
   below the target.
   Since Internet traffic has very dynamic
   characteristics, the actual sojourn delay experienced by packets
   varies greatly and is often less than the target unless the
   overload is excessive.  When a link is not overloaded, it is not a
   bottleneck, and packet sojourn times will be small or nonexistent.
   In the usual case, there are only one or two places along a path
   where packets will encounter a bottleneck (usually at the edge), so
   the total amount of queueing delay experienced by a packet should
   be less than 10ms even under extremely congested conditions.  This
   net delay is substantially lower than common current queueing
   delays on the Internet that grow to orders of seconds [NETAL2010]
   [CHARB2007].

4.5.  Use with multiple queues

   Unlike other AQMs, CoDel is easily adapted to multiple queue
   systems.  With other approaches, there is always a question of how
   to account for the fact that each queue receives less than the full
   link rate over time and usually sees a varying rate over time.
   This is exactly what CoDel excels at: using a packet's sojourn time
   in the buffer completely circumvents this problem.  In a multiple-
   queue setting, a separate CoDel algorithm runs on each queue, but
   each CoDel instance uses the packet sojourn time the same way a
   single-queue CoDel does.  Just as a single-queue CoDel adapts to
   changing link bandwidths [CODEL2012], so does a multiple-queue
   CoDel system.  As an optimization to avoid queueing more than
   necessary, when testing for queue occupancy before dropping, the
   total occupancy of all queues sharing the same output link should
   be used.  This property of CoDel has been exploited in fq_codel,
   briefly discussed in the next section and in more detail in
   [FQ-CODEL-ID].

4.6.  Use of stochastic bins or sub-queues to improve performance

   Shortly after the release of the CoDel pseudocode, Eric Dumazet
   created fq_codel, applying CoDel to each bin, or queue, used with
   stochastic fair queueing.  (To understand further, see [SFQ1990] or
   the Linux sfq documentation at http://linux.die.net/man/8/tc-sfq .)
   fq_codel hashes on the packet header fields to determine a specific
   bin, or sub-queue, for each five-tuple flow and runs CoDel on each
   bin or sub-queue, thus creating a well-mixed output flow and
   obviating issues of reverse path flows (including "ack
   compression").  Dumazet's code is part of the CeroWrt project code
   at the bufferbloat.net web site and is described in [FQ-CODEL-ID].
   Andrew McGregor has implemented a version of fq_codel for ns-3,
   which is being prepared for inclusion at
   http://code.nsnam.org/tomh/ns-3-dev-aqm/ .

   We have also experimented with a similar multi-queue approach by
   creating an ns-2 simulator code module, sfqcodel, which uses
   Stochastic Fair Queueing (SFQ) to isolate flows into bins, each
   running CoDel.  This setup has provided excellent results: median
   queues remain small across a range of traffic patterns that
   includes bidirectional file transfers (that is, the same traffic
   sent in both directions on a link), constant-bit-rate VoIP-like
   flows, and emulated web traffic, and utilizations are consistently
   better than with single-queue CoDel, generally very close to 100%.
   Our code, intended for simulation experiments, is available at
   http://pollere.net/CoDel.html and is being integrated into the ns-2
   distribution.
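   The binning step itself is simple.  The sketch below is ours, with
   a placeholder (FNV-1a) hash rather than the seeded hash a real
   implementation would typically use to resist deliberate collisions;
   each bin would hold its own CoDel instance (codel_queue_t,
   Section 5):

   #include <cstdint>
   #include <cstring>

   // Hypothetical five-tuple flow identifier.
   struct five_tuple_t {
       uint32_t src_ip, dst_ip;
       uint16_t src_port, dst_port;
       uint8_t  proto;
   };

   static uint32_t hash_tuple(const five_tuple_t& t) {
       uint8_t buf[13];
       memcpy(buf, &t.src_ip, 4);
       memcpy(buf + 4, &t.dst_ip, 4);
       memcpy(buf + 8, &t.src_port, 2);
       memcpy(buf + 10, &t.dst_port, 2);
       buf[12] = t.proto;
       uint32_t h = 2166136261u;            // FNV-1a, placeholder
       for (uint8_t b : buf) h = (h ^ b) * 16777619u;
       return h;
   }

   const unsigned NBINS = 16;               // see discussion below

   unsigned bin_of(const five_tuple_t& t) {
       return hash_tuple(t) % NBINS;        // sub-queue to use
   }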
   A number of open issues should be studied.  In particular, if the
   number of different queues or bins is too large, the scheduling
   will be the dominant factor, not the AQM; it is NOT the case that
   more bins are always better.  In our simulations, we have found
   good behavior across mixed traffic types with smaller numbers of
   queues, 8-16 for a 5Mbps link.  This configuration appears to give
   the best behavior for voice, web browsing, and file transfers,
   where increased numbers of bins seem to favor file transfers at the
   expense of the other traffic.  Our work has been very preliminary,
   and we encourage others to take this up and to explore analytic
   modeling.  It would be instructive to see the effects of different
   numbers of bins on a range of traffic models, something like an
   updated version of [BMPFQ].

   Implementers SHOULD use the fq_codel multiple-queue approach if
   possible, as it deals with many problems beyond the reach of an AQM
   on a single queue.

4.7.  Setting up CoDel AQM

   CoDel is set for use in devices in the open Internet.  An interval
   of 100ms is used, target is set to 5% of interval, and the initial
   drop spacing is also set to the interval.  These settings have been
   chosen so that a device, such as a small WiFi router, can be sold
   without the need for any values to be made adjustable, yielding a
   parameterless implementation.  In addition, CoDel is useful in
   environments with significantly different characteristics from the
   normal Internet, for example, in switches used as a cluster
   interconnect within a data center.  Since cluster traffic is
   entirely internal to the data center, round-trip latencies are low
   (typically <100us) but bandwidths are high (1-40Gbps), so it's
   relatively easy for the aggregation phase of a distributed
   computation (e.g., the Reduce part of a Map/Reduce) to persistently
   fill and then overflow the modest per-port buffering available in
   most high-speed switches.  A CoDel configured for this environment
   (target and interval in the microsecond rather than millisecond
   range) can minimize drops (or ECN marks) while keeping throughput
   high and latency low.

   Devices destined for these environments MAY use a different
   interval, where suitable.  If appropriate analysis indicates, the
   target MAY be set to some other value in the 5-10% of interval
   range, and the initial drop spacing MAY be set to a value of 1.0 to
   1.2 times the interval.  But these settings will cause problems
   such as overdropping and low throughput if used on the open
   Internet, so devices that allow CoDel to be configured SHOULD
   default to the Internet-appropriate values given in this document.

5.  Annotated Pseudo-code for CoDel AQM

   What follows is the CoDel algorithm in C++-like pseudo-code.  Since
   CoDel adds relatively little new code to a basic tail-drop FIFO
   queue, we have attempted to highlight just these additions by
   presenting CoDel as a sub-class of a basic FIFO-queue base class.
   The reference code is included to aid implementers who wish to
   apply CoDel to queue management as described here or to adapt its
   principles to other applications.

   Implementers are strongly encouraged to also look at Eric Dumazet's
   Linux kernel version of CoDel - a well-written, well-tested, real-
   world, C-based implementation.  As of this writing, it is at
   https://github.com/torvalds/linux/blob/master/net/sched/sch_codel.c .
   The following pseudo-code is open-source with a dual BSD/GPL
   license:

   Codel - The Controlled-Delay Active Queue Management algorithm.
   Copyright (C) 2011-2014 Kathleen Nichols <nichols@pollere.com>.
   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions
   are met:

   o  Redistributions of source code must retain the above copyright
      notice, this list of conditions, and the following disclaimer,
      without modification.

   o  Redistributions in binary form must reproduce the above
      copyright notice, this list of conditions and the following
      disclaimer in the documentation and/or other materials provided
      with the distribution.

   o  The names of the authors may not be used to endorse or promote
      products derived from this software without specific prior
      written permission.

   Alternatively, provided that this notice is retained in full, this
   software may be distributed under the terms of the GNU General
   Public License ("GPL") version 2, in which case the provisions of
   the GPL apply INSTEAD OF those given above.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
   FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE
   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
   OF THE POSSIBILITY OF SUCH DAMAGE.

5.1.  Data Types

   time_t is an integer time value in units convenient for the system.
   Resolution to at least a millisecond is required, and better
   resolution is useful up to the minimum possible packet time on the
   output link; 64- or 32-bit widths are acceptable, but with 32 bits
   the resolution should be no finer than 2^-16 to leave enough
   dynamic range to represent a wide range of queue waiting times.
   Narrower widths also have implementation issues due to overflow
   (wrapping) and underflow (limit cycles because of truncation to
   zero) that are not addressed in this pseudocode.  The code
   presented here uses 0 as a flag value to indicate "no time set."

   packet_t* is a pointer to a packet descriptor.  We assume it has a
   tstamp field capable of holding a time_t and that the field is
   available for use by CoDel (it will be set by the enqueue routine
   and used by the dequeue routine).

   queue_t is a base class for queue objects (the parent class for
   codel_queue_t objects).  We assume it has enqueue() and dequeue()
   methods that can be implemented in child classes.  We assume it has
   a bytes() method that returns the current queue size in bytes.
   This can be an approximate value.  The method is invoked in the
   dequeue() method but shouldn't require a lock with the enqueue()
   method.

   flag_t is a Boolean.
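   As one concrete possibility for the time_t type above (our
   assumption, not a requirement of the algorithm), time_t can be a
   64-bit count of nanoseconds, which makes the MS2TIME() helper used
   in Section 5.3 a simple multiply:

   #include <cstdint>

   typedef int64_t time_t;                 // nanoseconds since boot
   #define MS2TIME(ms) ((time_t)(ms) * 1000000)  // milliseconds->ns

   // With a 32-bit time_t, a coarser unit such as 2^-16 second
   // (~15 us) preserves enough dynamic range, as noted above.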
5.2.  Per-queue state (codel_queue_t instance variables)

   time_t first_above_time_ = 0; // Time to declare sojourn time above
                                 // target
   time_t drop_next_ = 0;        // Time to drop next packet
   uint32_t count_ = 0;          // Packets dropped in dropping state
   uint32_t lastcount_ = 0;      // Count from previous iteration
   flag_t dropping_ = false;     // Set to true if in drop state

5.3.  Constants

   time_t target = MS2TIME(5);     // 5ms target queue delay
   time_t interval = MS2TIME(100); // 100ms sliding-minimum window
   u_int maxpacket = 512;          // Maximum packet size in bytes
                                   // (should use interface MTU)

5.4.  Enqueue routine

   All the work of CoDel is done in the dequeue routine.  The only
   CoDel addition to enqueue is putting the current time in the
   packet's tstamp field so that the dequeue routine can compute the
   packet's sojourn time.

   void codel_queue_t::enqueue(packet_t* pkt)
   {
       pkt->tstamp = clock();
       queue_t::enqueue(pkt);
   }

5.5.  Dequeue routine

   This is the heart of CoDel.  There are two branches based on
   whether the controller is in dropping state: (i) if the controller
   is in dropping state (that is, the minimum packet sojourn time is
   greater than target), then the controller checks if it is time to
   leave dropping state or schedules the next drop(s); or (ii) if the
   controller is not in dropping state, it determines if it should
   enter dropping state and do the initial drop.

   packet_t* codel_queue_t::dequeue()
   {
       time_t now = clock();
       dodequeue_result r = dodequeue(now);
       uint32_t delta;

       if (dropping_) {
           if (! r.ok_to_drop) {
               // sojourn time below target - leave dropping state
               dropping_ = false;
           }
           // Time for the next drop.  Drop current packet and dequeue
           // next.  If the dequeue doesn't take us out of dropping
           // state, schedule the next drop.  A large backlog might
           // result in drop rates so high that the next drop should
           // happen now, hence the 'while' loop.
           while (now >= drop_next_ && dropping_) {
               drop(r.p);
               ++count_;
               r = dodequeue(now);
               if (! r.ok_to_drop) {
                   // leave dropping state
                   dropping_ = false;
               } else {
                   // schedule the next drop.
                   drop_next_ = control_law(drop_next_, count_);
               }
           }
       // If we get here we're not in dropping state.  The 'ok_to_drop'
       // return from dodequeue means that the sojourn time has been
       // above 'target' for 'interval', so enter dropping state.
       } else if (r.ok_to_drop) {
           drop(r.p);
           r = dodequeue(now);
           dropping_ = true;

           // If min went above target close to when it last went
           // below, assume that the drop rate that controlled the
           // queue on the last cycle is a good starting point to
           // control it now.  ('drop_next_' will be at most 'interval'
           // later than the time of the last drop, so 'now -
           // drop_next_' is a good approximation of the time from the
           // last drop until now.)  Implementations vary slightly
           // here; this is the Linux version, which is more widely
           // deployed and tested.
           delta = count_ - lastcount_;
           count_ = (delta > 1 && now - drop_next_ < 16*interval)?
                       delta : 1;
           drop_next_ = control_law(now, count_);
           lastcount_ = count_;
       }
       return (r.p);
   }
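   For orientation, here is a hypothetical harness (ours, not part of
   the reference code) showing how the two entry points above are
   meant to be driven by the surrounding system; send_on_wire() is an
   assumed driver function:

   codel_queue_t q;

   void on_packet_arrival(packet_t* pkt)
   {
       q.enqueue(pkt);               // stamps pkt->tstamp internally
   }

   void on_link_ready_to_send()
   {
       packet_t* pkt = q.dequeue();  // may drop packets internally
       if (pkt != NULL)
           send_on_wire(pkt);        // hypothetical driver call
   }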
5.6.  Helper routines

   Since the degree of multiplexing and the nature of the traffic
   sources are unknown, CoDel acts as a closed-loop servo system that
   gradually increases the frequency of dropping until the queue is
   controlled (sojourn time goes below target).  This is the control
   law that governs the servo.  It has this form because of the
   sqrt(p) dependence of TCP throughput on drop probability.  Note
   that for embedded systems or kernel implementations, the inverse
   sqrt can be computed efficiently using only integer multiplication
   (see the sketch after this section's code).

   time_t codel_queue_t::control_law(time_t t, uint32_t count)
   {
       return t + interval / sqrt(count);
   }

   Next is a helper routine that does the actual packet dequeue and
   tracks whether the sojourn time is above or below target and, if
   above, whether it has remained above continuously for at least
   interval.  It returns two values: a Boolean indicating if it is OK
   to drop (sojourn time above target for at least interval) and the
   packet dequeued.

   typedef struct {
       packet_t* p;
       flag_t ok_to_drop;
   } dodequeue_result;

   dodequeue_result codel_queue_t::dodequeue(time_t now)
   {
       dodequeue_result r = { queue_t::dequeue(), false };
       if (r.p == NULL) {
           // queue is empty - we can't be above target
           first_above_time_ = 0;
           return r;
       }

       // To span a large range of bandwidths, CoDel runs two
       // different AQMs in parallel.  One is sojourn-time-based
       // and takes effect when the time to send an MTU-sized
       // packet is less than target.  The 1st term of the "if"
       // below does this.  The other is backlog-based and takes
       // effect when the time to send an MTU-sized packet is >=
       // target.  The goal here is to keep the output link
       // utilization high by never allowing the queue to get
       // smaller than the amount that arrives in a typical
       // interarrival time (MTU-sized packets arriving spaced
       // by the amount of time it takes to send such a packet on
       // the bottleneck).  The 2nd term of the "if" does this.
       time_t sojourn_time = now - r.p->tstamp;
       if (sojourn_time < target || bytes() <= maxpacket) {
           // went below - stay below for at least interval
           first_above_time_ = 0;
       } else {
           if (first_above_time_ == 0) {
               // just went above from below.  if still above at
               // first_above_time_, will say it's ok to drop.
               first_above_time_ = now + interval;
           } else if (now >= first_above_time_) {
               r.ok_to_drop = true;
           }
       }
       return r;
   }
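   The integer-only inverse square root mentioned above can be
   computed with Newton's method.  The sketch below is ours, loosely
   modeled on the approach taken in the Linux implementation; the Q16
   fixed-point format and names are our assumptions:

   #include <stdint.h>

   // One Newton iteration for v = 1/sqrt(count) in Q16 fixed point:
   //     v' = v * (3 - count * v^2) / 2
   // Caching v and refining it by one step per drop converges well
   // because count changes by only 1 between calls.  A production
   // version should guard the subtraction against overshoot.
   static uint32_t invsqrt_newton_step(uint32_t v, uint32_t count)
   {
       uint64_t v2   = ((uint64_t)v * v) >> 16;        // v^2, Q16
       uint64_t diff = (3ULL << 16) - (uint64_t)count * v2;
       return (uint32_t)(((uint64_t)v * diff) >> 17);  // v*(...)/2
   }

   // control_law() then reduces to a multiply and a shift:
   //     return t + (time_t)(((uint64_t)interval * v) >> 16);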
6. Adapting and applying CoDel's building blocks

   CoDel has been implemented and tested in a range of environments.

6.1. Validations and available code

   An experiment by Stanford graduate students successfully used Linux
   CoDel to duplicate our published simulation work on CoDel's ability
   to adapt to drastic link rate changes.  This experiment can be
   found at http://reproducingnetworkresearch.wordpress.com/2012/06/06/
   solving-bufferbloat-the-codel-way/ .

   Our ns-2 simulations are available at http://pollere.net/CoDel.html .
   CableLabs has funded some additions to the simulator's sfqcodel
   code, which have been made public.  The basic algorithm of CoDel
   remains unchanged, but we continue to experiment with the drop
   interval setting when resuming drop state, with inhibiting or
   canceling drop state when few bytes are in the queue, and with
   other details.  Our approach to changes is to make them only if we
   are convinced they do more good than harm, both operationally and
   in the implementation.  With this in mind, some of these issues may
   continue to evolve as we get more deployment experience and as the
   building blocks are applied to a wider range of problems.

   CoDel is available in ns-2 version 2.35 and later.

   Andrew McGregor has an ns-3 implementation of both CoDel and
   fq_codel.  CoDel is available in ns-3 version 3.21 and later at
   https://www.nsnam.org/ .  At the time of this writing, the ns-3
   implementation of fq_codel is available at
   https://www.nsnam.org/wiki/GSOC2014Bufferbloat .

   CoDel is available in Linux: Eric Dumazet implemented CoDel in the
   Linux kernel.

   Dave Taht has been instrumental in the integration and distribution
   of CoDel as a bufferbloat solution
   (http://www.bufferbloat.net/projects/codel ).
6.2. CoDel in the datacenter

   Nandita Dukkipati's team at Google was quick to realize that the
   CoDel building blocks could be applied to bufferbloat problems in
   datacenter servers, not just to Internet routers.  The Linux CoDel
   queueing discipline (qdisc) was adapted in three ways to tackle
   this bufferbloat problem.

   1.  The default CoDel action was modified to be a direct feedback
       from the qdisc to the TCP layer at dequeue.  The direct
       feedback simply reduces TCP's congestion window, just as
       congestion control would do in the event of a drop.  The scheme
       falls back to ECN marking or packet drop if the TCP socket lock
       cannot be acquired at dequeue.

   2.  Being located in the server makes it possible to monitor the
       actual RTT to use as CoDel's interval rather than making a
       "best guess" of RTT.  The CoDel interval is dynamically
       adjusted using the maximum TCP round-trip time (RTT) of the
       connections sharing the same qdisc/bucket.  In particular,
       there is a history entry of the maximum RTT experienced over
       the last second.  As a packet is dequeued, the RTT estimate is
       accessed from its TCP socket.  If the estimate is larger than
       the current CoDel interval, the CoDel interval is immediately
       refreshed to the new value.  If the CoDel interval is not
       refreshed for over a second, it is decreased to the history
       entry and the process is repeated.  The use of the dynamic TCP
       RTT estimate lets the interval adapt to the actual maximum
       value currently seen and thus lets the controller space its
       drop intervals appropriately.  (A sketch of this interval
       tracking follows the list below.)

   3.  Since the mathematics of computing the setpoint are invariant,
       a target of 5% of the RTT or CoDel interval was used here.

   Non-data packets were not dropped, as these are typically small and
   sometimes critical control packets.  Being located on the server,
   there is no concern with misbehaving users as there would be on the
   public Internet.

   In several datacenter workload benchmarks, which are typically
   bursty, CoDel reduced the queueing latencies at the qdisc and
   thereby improved the mean and 99th-percentile latencies from
   several tens of milliseconds to less than one millisecond.  The
   minimum-tracking part of the CoDel framework proved useful in
   disambiguating "good" queue from "bad" queue, and was particularly
   helpful in controlling qdisc buffers that are inherently bursty
   because of TCP Segmentation Offload (TSO).
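   As a concrete illustration of the interval tracking in item 2, here
   is a minimal sketch.  The names, the microsecond units, and the
   one-second window constant are illustrative assumptions; this is
   not the implementation described above.

   // Sketch only: grow CoDel's interval immediately when a larger
   // TCP RTT is seen; if no growth happens for about a second, decay
   // to the maximum RTT observed in the current window.
   #include <algorithm>
   #include <cstdint>

   struct rtt_interval_tracker {
       uint64_t interval_usec = 100 * 1000; // current CoDel interval
       uint64_t window_max    = 0;          // max RTT in this window
       uint64_t last_refresh  = 0;          // last time interval grew
       static constexpr uint64_t kWindowUsec = 1000000;

       // Called with the RTT estimate from the TCP socket of each
       // dequeued packet.
       void on_dequeue_rtt(uint64_t rtt_usec, uint64_t now_usec) {
           window_max = std::max(window_max, rtt_usec);
           if (rtt_usec > interval_usec) {
               // Refresh immediately so drop spacing covers the RTT.
               interval_usec = rtt_usec;
               last_refresh = now_usec;
           } else if (now_usec - last_refresh > kWindowUsec) {
               // Not refreshed for over a second: decrease to the
               // recent maximum and start a fresh window.
               interval_usec = window_max;
               last_refresh = now_usec;
               window_max = 0;
           }
       }
   };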
7. Security Considerations

   This document describes an active queue management algorithm for
   implementation in networked devices.  There are no specific
   security exposures associated with CoDel.

8. IANA Considerations

   This document does not require actions by IANA.

9. Conclusions

   CoDel provides very general, efficient, parameterless building
   blocks for queue management that can be applied to single or
   multiple queues in a variety of data networking scenarios.  It is a
   critical tool in solving bufferbloat.  CoDel's settings MAY be
   modified for other special-purpose networking applications.  We
   encourage experimentation, improvement, and rigorous testing.

   Ongoing projects are creating a deployable CoDel in Linux routers
   and experimenting with applying CoDel to stochastic queueing with
   promising results.

10. Acknowledgments

   The authors wish to thank Jim Gettys for the constructive nagging
   that made us get the work "out there" before we thought it was
   ready.  We also want to thank Dave Taht, Eric Dumazet, and the open
   source community for showing the value of getting it "out there"
   and for making it real.  We also wish to thank Nandita Dukkipati
   for contributions to Section 6 and for comments that helped to
   substantially improve this draft.

11. References

11.1. Normative References

   [RFC2119]  Bradner, S., "Key Words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

11.2. Informative References

   [FQ-CODEL-ID]
              Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys,
              J., and E. Dumazet, "FlowQueue-Codel",
              draft-ietf-aqm-fq-codel-03 (work in progress),
              November 2015.

   [RFC2581]  Allman, M., Paxson, V., and W. Stevens, "TCP Congestion
              Control", RFC 2581, April 1999.

   [RFC0896]  Nagle, J., "Congestion Control in IP/TCP Internetworks",
              RFC 896, January 1984.

   [RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B.,
              Deering, S., Estrin, D., Floyd, S., Jacobson, V.,
              Minshall, G., Partridge, C., Peterson, L.,
              Ramakrishnan, K., Shenker, S., Wroclawski, J., and
              L. Zhang, "Recommendations on Queue Management and
              Congestion Avoidance in the Internet", RFC 2309,
              April 1998.

   [TSV2011]  Gettys, J., "Bufferbloat: Dark Buffers in the Internet",
              IETF 80 presentation to the Transport Area Open Meeting,
              March 2011.

   [BB2011]   Gettys, J. and K. Nichols, "Bufferbloat: Dark Buffers in
              the Internet", Communications of the ACM Vol. 55 No. 1,
              January 2012, pp. 57-65.

   [BMPFQ]    Suter, B., "Buffer Management Schemes for Supporting TCP
              in Gigabit Routers with Per-flow Queueing", IEEE Journal
              on Selected Areas in Communications Vol. 17 Issue 6,
              June 1999, pp. 1159-1169.

   [CMNTS]    Allman, M., "Comments on Bufferbloat", Computer
              Communication Review Vol. 43 No. 1, January 2013,
              pp. 31-37.

   [CODEL2012]
              Nichols, K. and V. Jacobson, "Controlling Queue Delay",
              Communications of the ACM Vol. 55 No. 7, July 2012,
              pp. 42-50.

   [VANQ2006] Jacobson, V., "A Rant on Queues", talk at MIT Lincoln
              Labs, Lexington, MA, July 2006.

   [REDL1998] Nichols, K., Jacobson, V., and K. Poduri, "RED in a
              Different Light", Tech report, September 1999.

   [NETAL2010]
              Kreibich, C., et al., "Netalyzr: Illuminating the Edge
              Network", Proceedings of the Internet Measurement
              Conference, Melbourne, Australia, 2010.

   [TSV84]    Jacobson, V., "CoDel talk at TSV meeting, IETF 84".

   [CHARB2007]
              Dischinger, M., et al., "Characterizing Residential
              Broadband Networks", Proceedings of the Internet
              Measurement Conference, San Diego, CA, 2007.

   [MACTCP1997]
              Mathis, M., Semke, J., and J. Mahdavi, "The Macroscopic
              Behavior of the TCP Congestion Avoidance Algorithm", ACM
              SIGCOMM Computer Communication Review Vol. 27 No. 3,
              July 1997.

   [SFQ1990]  McKenney, P., "Stochastic Fairness Queuing", Proceedings
              of IEEE INFOCOM '90, San Francisco, 1990.

   [KLEIN81]  Kleinrock, L. and R. Gail, "An Invariant Property of
              Computer Network Power", International Conference on
              Communications, June 1981.

Authors' Addresses

   Kathleen Nichols
   Pollere, Inc.
   PO Box 370201
   Montara, CA 94037
   USA

   Email: nichols@pollere.com

   Van Jacobson
   Google

   Email: vanj@google.com

   Andrew McGregor
   Google

   Email: andrewmcgr@google.com

   Janardhan Iyengar
   Google

   Email: jri@google.com