Network Working Group                                             R. Pan
Internet Draft                      P. Natarajan, C. Piglione, M. Prabhu
Intended Status: Informational     V. Subramanian, F. Baker, B. V. Steeg
                                                           Cisco Systems
Expires: June 2, 2013                                  December 10, 2012


             PIE: A Lightweight Control Scheme To Address the
                            Bufferbloat Problem

                          draft-pan-tsvwg-pie-00

Abstract

   Bufferbloat is a phenomenon where excess buffers in the network
   cause high latency and jitter. As more and more interactive
   applications (e.g., voice over IP, real-time video streaming and
   financial transactions) run in the Internet, high latency and
   jitter degrade application performance. There is a pressing need to
   design intelligent queue management schemes that can control
   latency and jitter, and hence provide desirable quality of service
   to users.

   We present here a lightweight design, PIE (Proportional Integral
   controller Enhanced), that can effectively control the average
   queueing latency to a target value. Simulation results, theoretical
   analysis and Linux testbed results have shown that PIE can ensure
   low latency and achieve high link utilization under various
   congestion situations. The design does not require per-packet
   timestamps, so it incurs very small overhead and is simple enough
   to implement in both hardware and software.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.
   Note that other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time. It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

Copyright and License Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Terminology
   3. Design Goals
   4. The PIE Scheme
      4.1 Random Dropping
      4.2 Drop Probability Calculation
      4.3 Departure Rate Estimation
      4.4 Handling Bursts
   5. Comments and Discussions
   6. Incremental Deployment
   7. IANA Considerations
   8. References
      8.1 Normative References
      8.2 Informative References
      8.3 Other References
   Authors' Addresses

1. Introduction

   The explosion of smart phones, tablets and video traffic in the
   Internet brings about a unique set of challenges for congestion
   control. To avoid packet drops, many service providers or data
   center operators require vendors to put in as much buffer as
   possible. With the rapid decrease in memory chip prices, these
   requests are easily accommodated to keep customers happy. However,
   this solution of large buffers fails to take into account the
   nature of TCP, the dominant transport protocol running in the
   Internet. TCP continuously increases its sending rate and causes
   network buffers to fill up. TCP cuts its rate only when it receives
   a packet drop or mark that is interpreted as a congestion signal.
   However, drops and marks usually occur when network buffers are
   full or almost full. As a result, excess buffers, initially
   designed to avoid packet drops, lead to highly elevated queueing
   latency and jitter.
   It is a delicate balancing act to design a queue management scheme
   that not only allows short-term bursts to pass smoothly, but also
   controls the average latency when long-term congestion persists.

   Active queue management (AQM) schemes, such as Random Early Discard
   (RED), have been around for well over a decade. AQM schemes could
   potentially solve the aforementioned problem. RFC 2309 [RFC2309]
   strongly recommends the adoption of AQM schemes in the network to
   improve the performance of the Internet. RED is implemented in a
   wide variety of network devices, both in hardware and software.
   Unfortunately, because RED needs careful tuning of its parameters
   for various network conditions, most network operators don't turn
   RED on. In addition, RED is designed to control the queue length,
   which affects delay only implicitly; it does not control latency
   directly. Hence, the Internet today still lacks an effective design
   that can control buffer latency to improve the quality of
   experience for latency-sensitive applications.

   Recently, a new AQM scheme, CoDel [CoDel], was proposed to control
   latency directly to address the bufferbloat problem. CoDel requires
   per-packet timestamps. Also, packets are dropped at the dequeue
   function after they have been enqueued for a while. Both of these
   requirements consume excessive processing and infrastructure
   resources. This consumption makes CoDel expensive to implement and
   operate, especially in hardware.

   PIE aims to combine the benefits of both RED and CoDel: it is easy
   to implement like RED and directly controls latency like CoDel.
   Similar to RED, PIE randomly drops a packet at the onset of
   congestion. The congestion detection, however, is based on the
   queueing latency like CoDel instead of the queue length like RED.
   Furthermore, PIE also uses the latency moving trend, i.e., whether
   the latency is increasing or decreasing, to help determine
   congestion levels. The design parameters of PIE are chosen via
   stability analysis. While these parameters can be fixed to work in
   various traffic conditions, they could be made self-tuning to
   optimize system performance.

   In addition, we assume any delay-based AQM scheme would be applied
   over a Fair Queueing (FQ) structure or its approximate design,
   Class Based Queueing (CBQ). FQ is one of the most studied
   scheduling algorithms since it was first proposed in 1985 [RFC970].
   CBQ has been a standard feature in most network devices today
   [CBQ]. These designs help flows/classes achieve max-min fairness
   and help mitigate bias against long flows with long round trip
   times (RTT). Any AQM scheme that is built on top of FQ or CBQ could
   benefit from these advantages. Furthermore, we believe that these
   advantages, such as per-flow/class fairness, are orthogonal to the
   AQM design, whose primary goal is to control latency for a given
   queue. For flows that are classified into the same class and put
   into the same queue, we need to ensure that their latency is better
   controlled and their fairness is not worse than under the standard
   DropTail or RED design.

   This draft describes the overall design goals, system elements and
   implementation details of PIE. We also discuss various design
   considerations, including how auto-tuning can be done.

2. Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

3. Design Goals

   We explore a queue management framework where we aim to improve the
   performance of interactive and delay-sensitive applications. The
   design of our scheme follows a few basic criteria.

   * First, we directly control queueing latency instead of
     controlling queue length. Queue sizes change with queue draining
     rates and various flows' round trip times. Delay bloat is the
     real issue that we need to address, as it impairs real-time
     applications. If latency can be controlled, bufferbloat is not an
     issue. As a matter of fact, we would allow more buffers for
     sporadic bursts as long as the latency is under control.

   * Secondly, we aim to attain high link utilization. The goal of low
     latency shall be achieved without suffering link under-
     utilization or losing network efficiency. An early congestion
     signal could cause TCP to back off and avoid queue buildup. On
     the other hand, however, TCP's rate reduction could result in
     link under-utilization. There is a delicate balance between
     achieving high link utilization and low latency.

   * Furthermore, the scheme should be simple to implement and easily
     scalable in both hardware and software. The wide adoption of RED
     over a variety of network devices is a testament to the power of
     simple random early dropping/marking. We strive to maintain
     similar design simplicity.

   * Finally, the scheme should ensure system stability for various
     network topologies and scale well with an arbitrary number of
     streams. Design parameters shall be set automatically. Users only
     need to set performance-related parameters such as the target
     queue delay, not design parameters.

   In the following, we will elaborate on the design of PIE and its
   operation.

4. The PIE Scheme

   As illustrated in Fig. 1, our scheme comprises three simple
   components: a) random dropping at enqueueing; b) periodic drop
   probability update; c) dequeuing rate estimation.

   The following sections describe these components in further detail,
   and explain how they interact with each other. At the end of this
   section, we will discuss how the scheme can be easily augmented to
   precisely control bursts.

4.1 Random Dropping

   Like any state-of-the-art AQM scheme, PIE drops packets randomly
   according to a drop probability, p, that is obtained from the
   drop-probability-calculation component (a sketch of this logic in C
   follows Figure 1):

   * upon a packet arrival

       randomly drop the packet with probability p.

                Random Drop
                 /              --------------
         -------/  ------------>| | | | | |  |-------------->
                /|\             | | | | | |  |
                 |              --------------
                 |              Queue Buffer   |
                 |                    |        | Departure bytes
                 |                    |queue   |
                 |                    |length  |
                 |                    |        |
                 |                   \|/      \|/
                 |          -----------------  -------------------
                 |          |     Drop      |  |                 |
                 -----<-----|  Probability  |<-|  Departure Rate |
                            |  Calculation  |  |   Estimation    |
                            -----------------  -------------------

                      Figure 1. The PIE Structure

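   To make the enqueue-side behavior concrete, the C sketch below
   shows one way the random drop decision could look. It is
   illustrative only: the function name pie_admit and the use of
   rand() are our assumptions, not part of this draft.

   /* Minimal sketch of PIE's enqueue-side random drop.  drop_prob
    * is the current value of p, maintained by the periodic update
    * described in Section 4.2. */

   #include <stdbool.h>
   #include <stdlib.h>

   /* Decide the fate of an arriving packet: returns true to
    * enqueue, false to drop. */
   bool pie_admit(double drop_prob)
   {
       /* Draw a uniform random number in [0, 1) and compare it
        * against the drop probability p. */
       double u = (double)rand() / ((double)RAND_MAX + 1.0);
       return u >= drop_prob;
   }
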
4.2 Drop Probability Calculation

   The PIE algorithm periodically updates the drop probability as
   follows:

   * estimate the current queueing delay using Little's law:

       est_del = qlen/depart_rate;

   * calculate drop probability p as:

       p = p + alpha*(est_del-target_del) + beta*(est_del-est_del_old);

       est_del_old = est_del.

   Here, the current queue length is denoted by qlen. The draining
   rate of the queue, depart_rate, is obtained from the
   departure-rate-estimation block. The variables est_del and
   est_del_old represent the current and previous estimates of the
   queueing delay. The target latency value is expressed in
   target_del. The update interval is denoted as Tupdate.

   Note that the calculation of the drop probability is based not only
   on the current estimate of the queueing delay, but also on the
   direction in which the delay is moving, i.e., whether the delay is
   getting longer or shorter. This direction can simply be measured as
   the difference between est_del and est_del_old. This is the classic
   Proportional Integral controller design, adopted here for
   controlling queueing latency. The controller parameters, in units
   of Hz, are designed using feedback loop analysis, where TCP's
   behavior is modeled using results from well-studied prior art
   [TCP-Models].

   We would like to point out that this type of controller has been
   studied before for controlling the queue length [PI, QCN]. PIE
   adopts the Proportional Integral controller for controlling delay
   and makes the scheme auto-tuning. The theoretical analysis of PIE
   is under paper submission and its reference will be included in
   this draft once it becomes available. Nonetheless, we will discuss
   the intuition behind these parameters in Section 5.

4.3 Departure Rate Estimation

   The draining rate of a queue in the network often varies, either
   because other queues are sharing the same link or because the link
   capacity fluctuates. Rate fluctuation is particularly common in
   wireless networks. Hence, we decide to measure the departure rate
   directly, as follows:

   * we are in a measurement cycle if we have enough data in the
     queue:

       qlen > deq_threshold

   * if in a measurement cycle, upon a packet departure:

       dq_count = dq_count + deque_pkt_size;

   * if dq_count > deq_threshold then

       depart_rate = dq_count/(now-start);

       dq_count = 0;

       start = now;

   We only measure the departure rate when there are sufficient data
   in the buffer, i.e., when the queue length is over a certain
   threshold, deq_threshold. Short, non-persistent bursts of packets
   result in empty queues from time to time, which would make the
   measurement less accurate. The variable dq_count represents the
   number of bytes departed since the last measurement. Once dq_count
   exceeds the threshold deq_threshold, we obtain a measurement
   sample. The threshold is recommended to be set to 10KB, assuming a
   typical packet size of around 1KB or 1.5KB. This threshold gives a
   period long enough to obtain an average draining rate, yet short
   enough to reflect sudden changes in the draining rate. Note that
   this threshold is not crucial for the system's stability.

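   The update rule of Section 4.2 and the rate measurement above
   translate almost directly into code. The following C sketch is
   illustrative only: the names (pie_state, pie_update,
   pie_rate_sample), the clamping of p to [0,1], and the guard
   against a missing rate sample are our assumptions, not normative
   parts of the scheme.

   #include <stdint.h>

   #define DEQ_THRESHOLD 10240u        /* 10KB, per Section 4.3     */

   struct pie_state {
       double p;                       /* drop probability          */
       double est_del, est_del_old;    /* delay estimates, seconds  */
       double depart_rate;             /* drain rate, bytes/second  */
       uint64_t dq_count;              /* bytes departed this cycle */
       double start;                   /* cycle start time, seconds */
   };

   /* Called once every Tupdate seconds. */
   void pie_update(struct pie_state *s, uint64_t qlen,
                   double alpha, double beta, double target_del)
   {
       if (s->depart_rate <= 0.0)      /* no rate sample yet        */
           return;

       /* Little's law: delay = backlog / drain rate. */
       s->est_del = (double)qlen / s->depart_rate;

       /* PI controller: alpha weighs the offset from the target,
        * beta weighs the delay trend (increasing or decreasing). */
       s->p += alpha * (s->est_del - target_del)
             + beta  * (s->est_del - s->est_del_old);
       if (s->p < 0.0) s->p = 0.0;
       if (s->p > 1.0) s->p = 1.0;

       s->est_del_old = s->est_del;
   }

   /* Called on each packet departure while qlen > DEQ_THRESHOLD;
    * "now" is the current time in seconds. */
   void pie_rate_sample(struct pie_state *s, uint32_t pkt_bytes,
                        double now)
   {
       s->dq_count += pkt_bytes;
       if (s->dq_count > DEQ_THRESHOLD) {
           s->depart_rate = (double)s->dq_count / (now - s->start);
           s->dq_count = 0;
           s->start = now;
       }
   }
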
4.4 Handling Bursts

   The above three components form the basis of the PIE algorithm.
   Although we aim to control the average latency of a congested
   queue, the scheme should allow short-term bursts to pass through
   the system without hurting them. We discuss how PIE manages bursts
   in this section.

   Bursts are well tolerated in the basic scheme for the following
   reasons: first, the drop probability is updated periodically. Any
   short-term burst that occurs within this period can pass through
   without incurring extra drops, as it does not trigger a new drop
   probability calculation. Secondly, PIE's drop probability
   calculation is done incrementally. A single update only leads to a
   small incremental change in the probability. So if a burst does
   occur at the exact instant that the probability is being
   calculated, the incremental nature of the calculation ensures that
   its impact is kept small.

   Nonetheless, we would like to give users precise control over
   bursts. We introduce a parameter, max_burst, that is similar to the
   burst tolerance in the token bucket design. By default, the
   parameter is set to 100ms. Users can certainly modify it according
   to their application scenarios. The burst allowance is added into
   the basic PIE design as follows:

   * if p == 0 and est_del < target_del and est_del_old < target_del

       burst_allowance = max_burst;

   * upon packet arrival

       if burst_allowance > 0 enqueue packet;

   * upon probability update

       burst_allowance = burst_allowance - Tupdate;

   The burst allowance, denoted by burst_allowance, is initialized to
   max_burst. As long as burst_allowance is above zero, an incoming
   packet is enqueued, bypassing the random drop process. During each
   update instance, the value of burst_allowance is decremented by the
   update period, Tupdate. When the congestion goes away, defined by
   us as p equal to 0 and both the current and previous samples of
   estimated delay less than target_del, we reset burst_allowance to
   max_burst.

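   The burst logic above needs only a few lines of extra state. The C
   sketch below is again illustrative: the names (pie_burst,
   pie_burst_update, pie_burst_bypass), keeping all times in seconds,
   and the 100ms default are assumptions for this example.

   /* Burst-allowance handling (Section 4.4). */

   struct pie_burst {
       double burst_allowance;   /* remaining burst credit, seconds */
       double max_burst;         /* default 0.1 (i.e., 100ms)       */
   };

   /* Run once per Tupdate, alongside the drop probability update.
    * p, est_del and est_del_old come from that update. */
   void pie_burst_update(struct pie_burst *b, double p,
                         double est_del, double est_del_old,
                         double target_del, double tupdate)
   {
       if (b->burst_allowance > 0.0)
           b->burst_allowance -= tupdate;   /* consume allowance    */

       /* Congestion has cleared: p == 0 and the last two delay
        * samples are below target, so re-arm the allowance. */
       if (p == 0.0 && est_del < target_del
                    && est_del_old < target_del)
           b->burst_allowance = b->max_burst;
   }

   /* On packet arrival: a positive allowance bypasses the random
    * drop process and the packet is enqueued directly. */
   int pie_burst_bypass(const struct pie_burst *b)
   {
       return b->burst_allowance > 0.0;
   }
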
5. Comments and Discussions

   While the formal analysis will be included later, we would like to
   discuss the intuition behind how the key parameters are determined.
   Although the PIE algorithm sets them automatically, they are not
   meant to be magic numbers. We hope to give enough explanation here
   to demystify them so that users can experiment and explore on their
   own.

   As is obvious from the above, the crucial equation in the PIE
   algorithm is

       p = p + alpha*(est_del-target_del) + beta*(est_del-est_del_old).

   The value of alpha determines how the deviation of the current
   latency from the target value affects the drop probability. The
   beta term exerts additional adjustment depending on whether the
   latency is trending up or down. Note that the drop probability is
   reached incrementally, not through a single step. To avoid big
   swings in adjustment, which often lead to instability, we would
   like to tune p in small increments. Suppose that p is in the range
   of 1%; then we would want a single step to adjust p by a small
   amount, say 0.1%. If p is in a higher range, say above 10%, then
   the situation warrants a larger single-step adjustment, for example
   1%. Finally, the drop probability only stabilizes when the latency
   is stable, i.e., est_del equals est_del_old, and the value of the
   latency is equal to target_del. The relative weight between alpha
   and beta determines the final balance between latency offset and
   latency jitter.

   The update interval, Tupdate, also plays a key role in stability.
   Given the same alpha and beta values, the faster the update, the
   higher the loop gain. Since Tupdate does not appear explicitly in
   the above equation, it can easily be overlooked. Notice also that
   alpha and beta have units of Hz.

   As a further extension, we could introduce weights for flows that
   are classified into the same queue to achieve differential
   dropping. For example, the dropping probability for flow i could be
   p(i) = p/weight(i). Flows with higher weights would receive
   proportionally fewer drops, and vice versa. Adding FQ on top,
   FQ_PIE, is another alternative.

   Also, we have discussed congestion notification in the form of
   packet drops. The algorithm can easily be applied to network nodes
   where Explicit Congestion Notification (ECN) is enabled. The drop
   probability, p, above would then become the marking probability.

6. Incremental Deployment

   One nice property of the AQM design is that it can be independently
   designed and operated without any requirement of interoperability.

   Although all network nodes cannot be changed at once to adopt
   latency-based AQM schemes, we envision a gradual adoption that
   would eventually lead to end-to-end low latency service for real
   time applications.

7. IANA Considerations

   There are no actions for IANA.

8. References

8.1 Normative References

   [RFC2119]    Bradner, S., "Key words for use in RFCs to Indicate
                Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2 Informative References

   [RFC970]     Nagle, J., "On Packet Switches With Infinite Storage",
                RFC 970, December 1985.

8.3 Other References

   [CoDel]      Nichols, K. and Jacobson, V., "Controlling Queue
                Delay", ACM Queue, ACM Publishing,
                doi:10.1145/2209249.2209264.

   [CBQ]        Cisco White Paper, "Class-Based Weighted Fair
                Queueing", http://www.cisco.com/en/US/docs/12_0t
                /12_0tfeature/guide/cbwfq.html.

   [TCP-Models] Misra, V., Gong, W., and Towsley, D., "Fluid-based
                Analysis of a Network of AQM Routers Supporting TCP
                Flows with an Application to RED", SIGCOMM 2000.

   [PI]         Hollot, C.V., Misra, V., Towsley, D., and Gong, W.,
                "On Designing Improved Controllers for AQM Routers
                Supporting TCP Flows", Infocom 2001.

   [QCN]        "Data Center Bridging - Congestion Notification",
                http://www.ieee802.org/1/pages/802.1au.html.

Authors' Addresses

   Rong Pan
   Cisco Systems
   510 McCarthy Blvd,
   Milpitas, CA 95134, USA
   Email: ropan@cisco.com

   Preethi Natarajan
   Cisco Systems
   510 McCarthy Blvd,
   Milpitas, CA 95134, USA
   Email: prenatar@cisco.com

   Chiara Piglione
   Cisco Systems
   510 McCarthy Blvd,
   Milpitas, CA 95134, USA
   Email: cpiglion@cisco.com

   Mythili Prabhu
   Cisco Systems
   510 McCarthy Blvd,
   Milpitas, CA 95134, USA
   Email: mysuryan@cisco.com

   Vijay Subramanian
   Cisco Systems
   510 McCarthy Blvd,
   Milpitas, CA 95134, USA
   Email: vijaynsu@cisco.com

   Fred Baker
   Cisco Systems
   510 McCarthy Blvd,
   Milpitas, CA 95134, USA
   Email: fred@cisco.com

   Bill Ver Steeg
   Cisco Systems
   5030 Sugarloaf Parkway
   Lawrenceville, GA, 30044, USA
   Email: versteb@cisco.com