Internet Draft                                                    R. Pan
Active Queue Management                                     P. Natarajan
Working Group                                                   F. Baker
Intended Status: Experimental                              Cisco Systems
                                                                G. White
                                                               CableLabs
Expires: March 30, 2017                               September 26, 2016

             PIE: A Lightweight Control Scheme To Address the
                           Bufferbloat Problem

                         draft-ietf-aqm-pie-10

Abstract

   Bufferbloat is a phenomenon in which excess buffers in the network
   cause high latency and latency variation. As more and more
   interactive applications (e.g., voice over IP, real-time video
   streaming, and financial transactions) run in the Internet, high
   latency and latency variation degrade application performance. There
   is a pressing need to design intelligent queue management schemes
   that can control latency and latency variation, and hence provide
   desirable quality of service to users.

   This document presents a lightweight active queue management design,
   called PIE (Proportional Integral controller Enhanced), that can
   effectively control the average queueing latency to a target value.
   Simulation results, theoretical analysis, and Linux testbed results
   have shown that PIE can ensure low latency and achieve high link
   utilization under various congestion situations. The design does not
   require per-packet timestamps, so it incurs very little overhead and
   is simple enough to implement in both hardware and software.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as
   Internet-Drafts.
   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

Copyright and License Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document. Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Terminology
   3. Design Goals
   4. The Basic PIE Scheme
      4.1 Random Dropping
      4.2 Drop Probability Calculation
      4.3 Latency Calculation
      4.4 Burst Tolerance
   5. Optional Design Elements of PIE
      5.1 ECN Support
      5.2 Dequeue Rate Estimation
      5.3 Setting PIE active and inactive
      5.4 De-randomization
      5.5 Cap Drop Adjustment
   6. Implementation Cost
   7. Scope of Experimentation
   8. Incremental Deployment
   9. Security Considerations
   10. IANA Considerations
   11. References
      11.1 Normative References
      11.2 Informative References
      11.3 Other References
   12. The Basic PIE Pseudo Code
   13. Pseudo Code for PIE with Optional Enhancement

1. Introduction

   The explosion of smartphones, tablets, and video traffic in the
   Internet brings about a unique set of challenges for congestion
   control. To avoid packet drops, many service providers or data center
   operators require vendors to provide as much buffering as possible.
   Because of the rapid decrease in memory chip prices, these requests
   are easily accommodated to keep customers happy. While this solution
   succeeds in assuring low packet loss and high TCP throughput, it
   suffers from a major downside.
   The TCP protocol continuously increases its sending rate and causes
   network buffers to fill up. TCP cuts its rate only when it receives a
   packet drop or mark that is interpreted as a congestion signal.
   However, drops and marks usually occur when network buffers are full
   or almost full. As a result, excess buffers, initially designed to
   avoid packet drops, lead to highly elevated queueing latency and
   latency variation. Designing a queue management scheme is a delicate
   balancing act: it should not only allow short-term bursts to pass
   smoothly but also control the average latency in the presence of
   long-running greedy flows.

   AQM schemes could potentially solve the aforementioned problem.
   Active queue management (AQM) schemes, such as Random Early Detection
   (RED [RED], as suggested in RFC 2309 [RFC2309], now obsoleted by RFC
   7567 [RFC7567]), have been around for well over a decade. RED is
   implemented in a wide variety of network devices, both in hardware
   and software. Unfortunately, because RED needs careful tuning of its
   parameters for various network conditions, most network operators do
   not turn RED on. In addition, RED is designed to control the queue
   length, which affects latency only implicitly; it does not control
   latency directly. Hence, the Internet today still lacks an effective
   design that can control buffer latency to improve the quality of
   experience for latency-sensitive applications. The more recent RFC
   7567 calls for new methods of controlling network latency.

   New algorithms are beginning to emerge to control queueing latency
   directly to address the bufferbloat problem [CoDel]. Along these
   lines, PIE also aims to keep the benefits of RED, including easy
   implementation and scalability to high speeds. Similar to RED, PIE
   randomly drops an incoming packet at the onset of congestion. The
   congestion detection, however, is based on the queueing latency
   rather than the queue length as in RED. Furthermore, PIE also uses
   the derivative (rate of change) of the queueing latency to help
   determine congestion levels and an appropriate response. The design
   parameters of PIE are chosen via control theory stability analysis.
   While these parameters can be fixed to work in various traffic
   conditions, they could be made self-tuning to optimize system
   performance.

   Separately, it is assumed that any latency-based AQM scheme would be
   applied over a Fair Queueing (FQ) structure or one of its approximate
   designs, Flow Queueing or Class Based Queueing (CBQ). FQ is one of
   the most studied scheduling algorithms since it was first proposed in
   1985 [RFC970], and CBQ has been a standard feature in most network
   devices today [CBQ]. Any AQM scheme that is built on top of FQ or CBQ
   could benefit from their advantages, such as per-flow or per-class
   fairness, which are orthogonal to the AQM design, whose primary goal
   is to control latency for a given queue. For flows that are
   classified into the same class and put into the same queue, one needs
   to ensure that their latency is better controlled and that their
   fairness is no worse than under the standard DropTail or RED design.
   More details about the relationship between FQ and AQM can be found
   in [FQ-Implement].
   In October 2013, CableLabs' DOCSIS 3.1 specification [DOCSIS_3.1]
   mandated that cable modems implement a specific variant of the PIE
   design as the active queue management algorithm. In addition to
   cable-specific improvements, the PIE design in DOCSIS 3.1
   [DOCSIS-PIE] has improved the original design in several areas,
   including de-randomization of coin tosses and enhanced burst
   protection.

   This draft describes the design of PIE and separates it into basic
   elements and optional components that may be implemented to enhance
   the performance of PIE.

2. Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

3. Design Goals

   A queue management framework is designed to improve the performance
   of interactive and latency-sensitive applications. It should follow
   the general guidelines set by the AQM working group document
   "Recommendations Regarding Active Queue Management" [RFC7567]. More
   specifically, the PIE design has the following basic criteria.

   * First, queueing latency, instead of queue length, is controlled.
     Queue sizes change with queue draining rates and various flows'
     round trip times. Latency bloat is the real issue that needs to be
     addressed, as it impairs real-time applications. If latency can be
     controlled, bufferbloat is not an issue. In fact, once latency is
     under control, buffers are freed up for sporadic bursts.

   * Secondly, PIE aims to attain high link utilization. The goal of
     low latency shall be achieved without suffering link
     under-utilization or losing network efficiency. An early
     congestion signal could cause TCP to back off and avoid the queue
     building up. On the other hand, however, TCP's rate reduction
     could result in link under-utilization. There is a delicate
     balance between achieving high link utilization and low latency.

   * Furthermore, the scheme should be simple to implement and easily
     scalable in both hardware and software. PIE strives to maintain
     similar design simplicity to RED, which has been implemented in a
     wide variety of network devices.

   * Finally, the scheme should ensure system stability for various
     network topologies and scale well across an arbitrary number of
     streams. Design parameters shall be set automatically. Users only
     need to set performance-related parameters such as target queue
     latency, not design parameters.

   In the following, the design of PIE and its operation are described
   in detail.

4. The Basic PIE Scheme

   As illustrated in Figure 1, PIE is comprised of three simple basic
   components: a) random dropping at enqueueing; b) periodic drop
   probability update; and c) latency calculation. When a packet
   arrives, a random decision is made regarding whether to drop the
   packet. The drop probability is updated periodically based on how far
   the current latency is away from the target and on whether the
   queueing latency is currently trending up or down. The queueing
   latency can be obtained using direct measurements or using
   estimations calculated from the queue length and the dequeue rate.

   The detailed definition of parameters can be found in the pseudo code
   sections of this document (Sections 12 and 13). Any state variables
   that PIE maintains are noted using "PIE->".
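   As an illustration only (this is not part of the specification, and
   the C names below are purely illustrative), the state variables and
   the three basic components might be organized as follows:

      /* Illustrative C sketch of PIE's per-queue state and entry
       * points; the normative definitions are the pseudo code of
       * Sections 12 and 13. */
      struct pie_state {
          double drop_prob;       /* PIE->drop_prob_                 */
          double qdelay_old;      /* PIE->qdelay_old_                */
          double burst_allowance; /* PIE->burst_allowance_           */
      };

      /* a) random dropping at enqueueing: 1 = drop, 0 = enqueue */
      int pie_should_drop(const struct pie_state *pie, double rand01)
      {
          return rand01 < pie->drop_prob;
      }

      /* b) periodic drop probability update, run every T_UPDATE;
       *    the control law is given in Section 4.2 */
      void pie_update_drop_prob(struct pie_state *pie,
                                double current_qdelay);

      /* c) latency calculation from queue length and dequeue rate
       *    (Little's law, Section 4.3) */
      double pie_estimate_qdelay(double queue_bytes, double dequeue_rate)
      {
          return (dequeue_rate > 0.0) ? queue_bytes / dequeue_rate : 0.0;
      }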
   For a full description of the algorithm, one can refer to the full
   paper [HPSR-PIE].

              Random Drop
                  /                --------------
          -------/  -------------> | | | | | | | -------------->
                 /|\               | | | | | | |
                  |                --------------
                  |                 Queue Buffer   \
                  |                      |          \
                  |                      |queue       \
                  |                      |length        \
                  |                      |                \
                  |                     \|/                \/
                  |           -----------------    -------------------
                  |           |     Drop      |    |                 |
                  -----<------|  Probability  |<---|    Latency      |
                              |  Calculation  |    |   Calculation   |
                              -----------------    -------------------

                      Figure 1. The PIE Structure

4.1 Random Dropping

   PIE randomly drops a packet upon its arrival to a queue according to
   a drop probability, PIE->drop_prob_, that is obtained from the
   drop-probability-calculation component. The random drop is triggered
   by a packet arrival before enqueueing into a queue.

   * Upon a packet enqueue:

      randomly drop the packet with a probability PIE->drop_prob_.

   To ensure that PIE is work conserving, the random drop is bypassed if
   the latency sample, PIE->qdelay_old_, is smaller than half of the
   target latency value (QDELAY_REF) while the drop probability is not
   too high (PIE->drop_prob_ < 0.2), or if the queue holds no more than
   a couple of packets.

   * Upon a packet enqueue, PIE:

      //Safeguard PIE to be work conserving
      if ( (PIE->qdelay_old_ < QDELAY_REF/2 && PIE->drop_prob_ < 0.2)
           || (queue_.byte_length() <= 2 * MEAN_PKTSIZE) ) {
         return ENQUE;
      } else {
         randomly drop the packet with a probability PIE->drop_prob_.
      }

   PIE optionally supports ECN; see Section 5.1.

4.2 Drop Probability Calculation

   The PIE algorithm periodically updates the drop probability based on
   the latency samples: not only the current latency sample but also the
   trend of the latency, i.e., whether it is going up or down. This is
   the classical Proportional Integral (PI) controller method, which is
   known for eliminating steady-state errors. This type of controller
   has been studied before for controlling the queue length [PI, QCN].
   PIE adopts the Proportional Integral controller for controlling
   latency. The algorithm also auto-adjusts the control parameters based
   on how heavy the congestion is, which is reflected in the current
   drop probability. Note that the current drop probability is a direct
   measure of the current congestion level, so there is no need to
   measure the mismatch between the arrival and dequeue rates.

   When a congestion period goes away, we might be left with a high drop
   probability with light packet arrivals. Hence, the PIE algorithm
   includes a mechanism by which the drop probability decays
   exponentially (rather than linearly) when the system is not
   congested. This helps the drop probability converge to 0 faster,
   while the PI controller ensures that it eventually reaches zero. The
   decay parameter of 2% gives us a time constant around 50*T_UPDATE.
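   As a rough illustration of this choice (an observation, not a
   normative statement): with a decay factor of 0.98 per update, after N
   consecutive updates with an empty queue the drop probability is
   scaled by 0.98^N. Since 0.98^35 is roughly 0.5 and 0.98^50 is roughly
   0.36 (about 1/e), the probability decays with a time constant of
   approximately 50*T_UPDATE, i.e., about 750 ms with the default
   T_UPDATE of 15ms.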
   Specifically, the PIE algorithm periodically adjusts the drop
   probability every T_UPDATE interval:

   * calculate drop probability PIE->drop_prob_ and auto-tune it as:

      p = alpha*(current_qdelay - QDELAY_REF) +
          beta*(current_qdelay - PIE->qdelay_old_);

      if (PIE->drop_prob_ < 0.000001) {
         p /= 2048;
      } else if (PIE->drop_prob_ < 0.00001) {
         p /= 512;
      } else if (PIE->drop_prob_ < 0.0001) {
         p /= 128;
      } else if (PIE->drop_prob_ < 0.001) {
         p /= 32;
      } else if (PIE->drop_prob_ < 0.01) {
         p /= 8;
      } else if (PIE->drop_prob_ < 0.1) {
         p /= 2;
      } else {
         p = p;
      }
      PIE->drop_prob_ += p;

   * decay the drop probability exponentially:

      if (current_qdelay == 0 && PIE->qdelay_old_ == 0) {
         PIE->drop_prob_ = PIE->drop_prob_*0.98;  //1 - 1/64
                                                  //is sufficient
      }

   * bound the drop probability:

      if (PIE->drop_prob_ < 0)
         PIE->drop_prob_ = 0.0;
      if (PIE->drop_prob_ > 1)
         PIE->drop_prob_ = 1.0;

   * store the current latency value:

      PIE->qdelay_old_ = current_qdelay.

   The update interval, T_UPDATE, defaults to 15ms. It MAY be reduced on
   high-speed links in order to provide a smoother response. The target
   latency value, QDELAY_REF, SHOULD be set to 15ms. The variables
   current_qdelay and PIE->qdelay_old_ represent the current and
   previous samples of the queueing latency, which are calculated by the
   "Latency Calculation" component (see Section 4.3). The variable
   current_qdelay is actually a temporary variable, while
   PIE->qdelay_old_ is a state variable that PIE keeps. The drop
   probability is a value between 0 and 1. However, implementations can
   certainly use integers.

   The controller parameters alpha and beta (in units of Hz) are
   designed using feedback loop analysis, where TCP's behaviors are
   modeled using the results from well-studied prior art [TCP-Models].
   Note that the above adjustment of 'p' effectively scales the alpha
   and beta parameters based on the current congestion level indicated
   by the drop probability.

   The theoretical analysis of PIE can be found in [HPSR-PIE]. As a rule
   of thumb, to keep the same feedback loop dynamics, if we cut T_UPDATE
   in half, we should also cut alpha by half and increase beta by
   alpha/4. If the target latency is reduced, e.g., for data center use,
   the values of alpha and beta should be increased by the same order of
   magnitude by which the target latency is reduced. For example, if
   QDELAY_REF is reduced from 15ms to 150us, a reduction of two orders
   of magnitude, then the alpha and beta values should be increased to
   alpha*100 and beta*100.

4.3 Latency Calculation

   The PIE algorithm uses latency to calculate the drop probability.

   * It estimates the current queueing latency using Little's law:

      current_qdelay = queue_.byte_length()/dequeue_rate;

     Details can be found in Section 5.2.

   * Or it may use other techniques for calculating the queueing
     latency, e.g., timestamp packets at enqueue and use the timestamps
     to calculate latency during dequeue.

4.4 Burst Tolerance

   PIE does not penalize short-term packet bursts, as suggested in RFC
   7567 [RFC7567]. PIE allows bursts of traffic that create
   finite-duration events in which the current queueing latency exceeds
   QDELAY_REF without triggering packet drops. A parameter, MAX_BURST,
   defines the burst duration that will be protected. By default, the
   parameter SHOULD be set to 150ms.
   For simplicity, the PIE algorithm MAY effectively round MAX_BURST up
   to an integer multiple of T_UPDATE.

   To implement the burst tolerance function, two basic components of
   PIE are involved: "random dropping" and "drop probability
   calculation". The PIE algorithm does the following:

   * In the "Random Dropping" block, upon a packet arrival, PIE checks:

      Upon a packet enqueue:

         if PIE->burst_allowance_ > 0 enqueue packet;
         else randomly drop a packet with a probability PIE->drop_prob_.

         if (PIE->drop_prob_ == 0 and current_qdelay < QDELAY_REF/2 and
             PIE->qdelay_old_ < QDELAY_REF/2)
            PIE->burst_allowance_ = MAX_BURST;

   * In the "Drop Probability Calculation" block, PIE additionally
     calculates:

      PIE->burst_allowance_ = max(0, PIE->burst_allowance_ - T_UPDATE);

   The burst allowance, denoted by PIE->burst_allowance_, is initialized
   to MAX_BURST. As long as PIE->burst_allowance_ is above zero, an
   incoming packet will be enqueued, bypassing the random drop process.
   During each update instance, the value of PIE->burst_allowance_ is
   decremented by the update period, T_UPDATE, and is floored at 0. When
   congestion goes away, defined here as PIE->drop_prob_ being equal to
   0 and both the current and previous samples of estimated latency
   being less than half of QDELAY_REF, PIE->burst_allowance_ is reset to
   MAX_BURST.

5. Optional Design Elements of PIE

   The above describes the basic elements of the PIE algorithm. There
   are several enhancements that can be added to further augment the
   performance of the basic algorithm. For clarity, they are described
   in this section.

5.1 ECN Support

   PIE MAY support ECN by marking (rather than dropping) ECN-capable
   packets [IETF-ECN]. As a safeguard, an additional threshold,
   mark_ecnth, is introduced. If the calculated drop probability exceeds
   mark_ecnth, PIE reverts to packet drops for ECN-capable packets. The
   variable mark_ecnth SHOULD be set to 0.1 (10%).

   * To support ECN, the "random drop with a probability
     PIE->drop_prob_" function in the "Random Dropping" block is changed
     to the following:

   * Upon a packet enqueue:

      if rand() < PIE->drop_prob_:

         if PIE->drop_prob_ < mark_ecnth && ecn_capable_packet == TRUE:

            mark packet;

         else:

            drop packet;

5.2 Dequeue Rate Estimation

   When using timestamps, a latency sample can only be obtained when a
   packet reaches the head of a queue. When a quick response time is
   desired or a direct latency sample is not available, one may obtain
   the latency by measuring the dequeue rate. The draining rate of a
   queue in the network often varies, either because other queues share
   the same link or because the link capacity fluctuates. Rate
   fluctuation is particularly common in wireless networks. The rate can
   be measured directly at the dequeue operation. However, short,
   non-persistent bursts of packets result in empty queues from time to
   time, which would make the measurement less accurate. PIE therefore
   measures only when there is sufficient data in the buffer, i.e., when
   the queue length is over a certain threshold (DQ_THRESHOLD), and it
   measures how long it takes to drain DQ_THRESHOLD bytes.
   More specifically, the rate estimation can be implemented as follows:

      current_qdelay = queue_.byte_length() *
                       PIE->avg_dq_time_/DQ_THRESHOLD;

   * Upon a packet dequeue:

      if PIE->in_measurement_ == FALSE and queue_.byte_length() >=
         DQ_THRESHOLD:
         PIE->in_measurement_ = TRUE;
         PIE->measurement_start_ = now;
         PIE->dq_count_ = 0;

      if PIE->in_measurement_ == TRUE:
         PIE->dq_count_ = PIE->dq_count_ + deque_pkt_size;
         if PIE->dq_count_ >= DQ_THRESHOLD then
            weight = DQ_THRESHOLD/2^16;
            PIE->avg_dq_time_ = (now - PIE->measurement_start_)*weight
                                + PIE->avg_dq_time_*(1 - weight);
            PIE->dq_count_ = 0;
            PIE->measurement_start_ = now;
         else
            PIE->in_measurement_ = FALSE;

   The counter PIE->dq_count_ represents the number of bytes departed
   since the last measurement. Once PIE->dq_count_ is over DQ_THRESHOLD,
   a measurement sample is obtained. The threshold is recommended to be
   set to 16KB, assuming a typical packet size of around 1KB or 1.5KB.
   This threshold provides enough data to obtain an average draining
   rate but is also small enough (< 64KB) to reflect sudden changes in
   the draining rate quickly. If DQ_THRESHOLD is smaller than 64KB, a
   small weight is used to smooth out the dequeue time and obtain
   PIE->avg_dq_time_. The dequeue rate is simply DQ_THRESHOLD divided by
   PIE->avg_dq_time_. This threshold is not crucial for the system's
   stability. Please note that the update interval for calculating the
   drop probability is different from the rate measurement cycle. The
   drop probability calculation is done periodically per Section 4.2,
   and it is done even when the algorithm is not in a measurement cycle;
   in that case, the previously latched value of PIE->avg_dq_time_ is
   used.

              Random Drop
                  /                     --------------
          -------/  ------------------> | | | | | | | -------------->
                 /|\                     | | | | | | |
                  |                     --------------
                  |                      Queue Buffer
                  |                           |
                  |                           |queue
                  |                           |length
                  |                           |
                  |                          \|/
                  |           ------------------------------
                  |           |        Dequeue Rate        |
                  -----<------|    & Drop Probability      |
                              |        Calculation         |
                              ------------------------------

               Figure 2. The Enqueue-based PIE Structure

   On some platforms, the enqueueing and dequeueing functions belong to
   different modules that are independent of each other. In such
   situations, a pure enqueue-based design can be used; Figure 2 depicts
   such a design. The dequeue rate is deduced from the number of packets
   enqueued and the queue length. The design is based on the following
   key observation: over a certain time interval, the number of dequeued
   packets = the number of enqueued packets - the number of remaining
   packets in the queue. In this design, everything can be triggered by
   a packet arrival, including the background update process. The design
   complexity here is similar to the original design.
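   The following C fragment sketches one way this observation might be
   used; it is an illustration only, and the variable names and the
   periodic-update structure are assumptions of this sketch rather than
   part of the specification:

      /* Illustrative sketch: deduce the dequeue rate from enqueue-side
       * information only. Over one update interval of length
       * interval_sec:
       *   bytes dequeued = bytes enqueued + previous queue length
       *                    - current queue length                   */
      double enqueue_based_qdelay(double enqueued_bytes,
                                  double prev_queue_bytes,
                                  double cur_queue_bytes,
                                  double interval_sec)
      {
          double dequeued = enqueued_bytes + prev_queue_bytes
                            - cur_queue_bytes;
          double dequeue_rate = dequeued / interval_sec; /* bytes/sec */

          if (dequeue_rate <= 0.0)
              return 0.0;             /* queue idle or no progress   */
          /* Little's law, as in Section 4.3 */
          return cur_queue_bytes / dequeue_rate;
      }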
5.3 Setting PIE active and inactive

   Traffic naturally fluctuates in a network. It would be preferable not
   to unnecessarily drop packets due to a spurious uptick in queueing
   latency. PIE has an optional feature of automatically becoming
   active/inactive. To implement this feature, PIE may choose to only
   become active (from inactive) when the buffer occupancy is over a
   certain threshold, which may be set to 1/3 of the tail drop
   threshold. PIE becomes inactive when congestion is over, i.e., when
   the drop probability reaches 0 and both the current and previous
   latency samples are below half of QDELAY_REF.

   Ideally, PIE should become active/inactive based on the latency.
   However, calculating latency when PIE is inactive would introduce
   unnecessary packet processing overhead. Weighing the trade-offs, the
   buffer occupancy is simply compared against the tail drop threshold
   to keep things simple.

   When PIE optionally becomes active/inactive, the burst protection
   logic in Section 4.4 is modified as follows:

   * In the "Random Dropping" block, PIE adds:

      Upon packet arrival:

         if PIE->active_ == FALSE && queue_length >= TAIL_DROP/3:
            PIE->active_ = TRUE;
            PIE->burst_allowance_ = MAX_BURST;

         if PIE->burst_allowance_ > 0 enqueue packet;
         else randomly drop a packet with a probability PIE->drop_prob_.

         if (PIE->drop_prob_ == 0 and current_qdelay < QDELAY_REF/2 and
             PIE->qdelay_old_ < QDELAY_REF/2)
            PIE->active_ = FALSE;
            PIE->burst_allowance_ = MAX_BURST;

   * In the "Drop Probability Calculation" block, PIE does the
     following:

      if PIE->active_ == TRUE:
         PIE->burst_allowance_ = max(0, PIE->burst_allowance_ - T_UPDATE);

5.4 De-randomization

   Although PIE adopts random dropping to achieve latency control,
   independent coin tosses could introduce outlier situations where
   packets are dropped too close to each other or too far from each
   other. This would cause the real drop percentage to temporarily
   deviate from the intended value, PIE->drop_prob_. In certain
   scenarios, such as a small number of simultaneous TCP flows, these
   deviations can cause significant fluctuations in link utilization and
   queueing latency. PIE may use a de-randomization mechanism to avoid
   such situations. A variable, PIE->accu_prob_, is reset to 0 after a
   drop. Upon a packet arrival, PIE->accu_prob_ is incremented by the
   amount of the drop probability, PIE->drop_prob_. If PIE->accu_prob_
   is less than a low threshold, e.g., 0.85, the arriving packet is
   enqueued; on the other hand, if PIE->accu_prob_ is more than a high
   threshold, e.g., 8.5, and the queue is congested, the arriving packet
   is forced to be dropped. A packet is only randomly dropped if
   PIE->accu_prob_ falls between the two thresholds. Since
   PIE->accu_prob_ is reset to 0 after a drop, another drop will not
   happen until 0.85/PIE->drop_prob_ packets later. This avoids packets
   being dropped too close to each other. In the other extreme case,
   where 8.5/PIE->drop_prob_ packets have been enqueued without
   incurring a drop, PIE would force a drop in order to prevent the
   drops from being spaced too far apart. Further analysis can be found
   in [DOCSIS-PIE].
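   As an informal illustration of this mechanism (the thresholds 0.85
   and 8.5 are the example values given above; the complete and
   authoritative logic appears in the drop_early() pseudo code of
   Section 13):

      /* Illustrative sketch of the de-randomization check applied on
       * each packet arrival; returns 1 to drop, 0 to enqueue.        */
      int derandomized_drop(double *accu_prob, double drop_prob,
                            double rand01, int queue_congested)
      {
          *accu_prob += drop_prob;
          if (*accu_prob < 0.85)        /* too soon after the last drop */
              return 0;
          if (*accu_prob >= 8.5 && queue_congested) {
              *accu_prob = 0.0;         /* drops spaced too far apart   */
              return 1;
          }
          if (rand01 < drop_prob) {     /* ordinary random drop         */
              *accu_prob = 0.0;
              return 1;
          }
          return 0;
      }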
5.5 Cap Drop Adjustment

   When a single TCP flow in its slow start phase occupies the queue,
   the queue can build up quickly and demand a high drop probability. In
   some environments, such as a cable modem speed test, one cannot
   afford to trigger a TCP timeout and lose throughput, as the measured
   throughput is shown to customers who are testing their connection
   speed. PIE could cap the maximum drop probability increase in each
   step.

   * In the "Drop Probability Calculation" block, PIE adds:

      if (PIE->drop_prob_ >= 0.1 && p > 0.02) {
         p = 0.02;
      }

6. Implementation Cost

   PIE can be applied to existing hardware or software solutions. There
   are three steps involved in PIE, as discussed in Section 4. Their
   complexities are examined below.

   Upon packet arrival, the algorithm simply drops a packet randomly
   based on the drop probability. This step is straightforward and
   requires no packet header examination or manipulation. If the
   implementation does not rely on packet timestamps for calculating
   latency, PIE does not require extra memory. Furthermore, the input
   side of a queue is typically under software control while the output
   side of a queue is hardware based. Hence, a drop at enqueueing can be
   readily retrofitted into existing hardware or software
   implementations.

   The drop probability calculation is done in the background, and it
   occurs every T_UPDATE interval. Given modern high-speed links, this
   period translates into once every tens, hundreds, or even thousands
   of packets. Hence, the calculation occurs on a much slower time scale
   than packet processing time, at least an order of magnitude slower.
   The calculation of drop probability involves multiplications using
   alpha and beta. Since PIE's control law is robust to minor changes in
   the alpha and beta values, an implementation MAY choose these values
   to be the closest multiples of 2 or 1/2 (e.g., alpha = 1/8,
   beta = 1 + 1/4) such that the multiplications can be done using
   simple adds and shifts. As no complicated functions are required, PIE
   can be easily implemented in both hardware and software. The state
   requirement is only one variable per queue: PIE->qdelay_old_. Hence,
   the memory overhead is small.

   If one chooses to implement the departure rate estimation, PIE uses a
   counter to keep track of the number of bytes departed for the current
   interval. This counter is incremented per packet departure. Every
   T_UPDATE, PIE calculates the latency using the departure rate, which
   can be implemented using a multiplication. Note that many network
   devices keep track of an interface's departure rate. In this case,
   PIE might be able to reuse this information and simply skip the third
   step of the algorithm, hence incurring no extra cost. If a platform
   already leverages packet timestamps for other purposes, PIE can make
   use of these packet timestamps for latency calculation instead of
   estimating the departure rate.

   Flow queueing can also be combined with PIE to provide isolation
   between flows. In this case, it is preferable to have an independent
   value of drop probability per queue. This allows each flow to receive
   the most appropriate level of congestion signal and ensures that
   sparse flows are protected from experiencing packet drops. However,
   running the entire PIE algorithm independently on each queue in order
   to calculate the drop probability may be overkill. Furthermore, in
   the case that departure rate estimation is used to predict queueing
   latency, it is not possible to calculate an accurate per-queue
   departure rate upon which to implement the PIE drop probability
   calculation. Instead, it has been proposed ([DOCSIS_AQM]) that a
   single implementation of the PIE drop probability calculation based
   on the overall latency estimate be used, followed by a per-queue
   scaling of the drop probability based on the ratio of queue depth
   between the queue in question and the current largest queue. This
   scaling is reasonably simple and has a couple of nice properties.
   One, if a packet arrives at an empty queue, it is given immunity from
   packet drops altogether, regardless of the state of the other queues.
   Two, in the situation where only a single queue is in use, the
   algorithm behaves exactly like the single-queue PIE algorithm.
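   A minimal sketch of such a per-queue scaling is shown below; it is an
   illustration of the idea only, and the exact scaling function used in
   [DOCSIS_AQM] may differ:

      /* Illustrative sketch: scale the common drop probability for one
       * flow queue by the ratio of its depth to the largest queue's
       * depth.                                                       */
      double per_queue_drop_prob(double common_drop_prob,
                                 double queue_bytes,
                                 double largest_queue_bytes)
      {
          if (queue_bytes <= 0.0 || largest_queue_bytes <= 0.0)
              return 0.0;   /* packets arriving to an empty queue
                               are never dropped                      */
          return common_drop_prob * (queue_bytes / largest_queue_bytes);
      }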
   In summary, PIE is simple enough to be implemented in both software
   and hardware.

7. Scope of Experimentation

   The design of the PIE algorithm is presented in this document. It
   effectively controls the average queueing latency to a target value.
   The following areas can be further studied and experimented with:

   * Autotuning of the target latency without losing utilization;

   * Autotuning for the average RTT of traffic;

   * The proper threshold to transition smoothly between ECN marking
     and dropping;

   * The enhancements in Section 5 can be experimented with to see
     whether they would bring more value in the real world. If so, they
     will be incorporated into the basic PIE algorithm;

   * The PIE design is separated into the data path and the control
     path, and the control path can be implemented in software. Other
     control laws can be experimented with in field tests to further
     improve PIE's performance.

   Although not all network nodes can be changed at once to adopt
   latency-based AQM schemes such as PIE, a gradual adoption would
   eventually lead to end-to-end low latency service for all
   applications.

8. Incremental Deployment

   In testbed experiments and large-scale simulations of PIE so far, PIE
   has been shown to be effective across a diverse range of network
   scenarios. There is no indication that PIE would be harmful to
   deploy.

   The PIE scheme can be independently deployed and managed without a
   need for interoperability between different network devices. In
   addition, any individual buffer queue can be incrementally upgraded
   to PIE, as it can co-exist with existing AQM schemes such as WRED.

   PIE is intended to be self-configuring. Users should not need to
   configure any design parameters. Upon installation, the two
   user-configurable parameters, QDELAY_REF and MAX_BURST, will default
   to 15ms and 150ms for non-datacenter network devices and to 15us and
   150us for datacenter switches, respectively.

   Since the data path of the algorithm needs only a simple coin toss
   and the control path calculation happens on a much slower time scale,
   we do not foresee any scaling issues associated with the algorithm as
   the link speed scales up.

9. Security Considerations

   This document describes an active queue management algorithm based on
   implementations in different products. This algorithm introduces no
   specific security exposures.

10. IANA Considerations

   There are no actions for IANA.

11. References

11.1 Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

11.2 Informative References

   [RFC970]  Nagle, J., "On Packet Switches With Infinite Storage",
             RFC 970, December 1985.

   [RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
             S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
             Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, S.,
             Wroclawski, J., and Zhang, L., "Recommendations on Queue
             Management and Congestion Avoidance in the Internet",
             RFC 2309, April 1998.

   [RFC7567] Baker, F. and Fairhurst, G., "Recommendations Regarding
             Active Queue Management", RFC 7567, July 2015.

   [CBQ]     Cisco White Paper,
             http://www.cisco.com/en/US/docs/12_0t/12_0tfeature/guide/cbwfq.html.
   [CoDel]   Nichols, K. and Jacobson, V., "Controlling Queue Delay",
             ACM Queue, ACM Publishing, doi:10.1145/2209249.2209264.

   [DOCSIS_3.1]
             http://www.cablelabs.com/wp-content/uploads/specdocs/CM-SP-MULPIv3.1-I01-131029.pdf.

   [DOCSIS-PIE] White, G. and Pan, R., "A PIE-Based AQM for DOCSIS
             Cable Modems", IETF draft-white-aqm-docsis-pie-02.

   [FQ-Implement] Baker, F. and Pan, R., "On Queueing, Marking and
             Dropping", IETF draft-ietf-aqm-fq-implementation.

   [HPSR-PIE] Pan, R., Natarajan, P., Piglione, C., Prabhu, M.S.,
             Subramanian, V., Baker, F., and Ver Steeg, B., "PIE: A
             Lightweight Control Scheme to Address the Bufferbloat
             Problem", IEEE HPSR 2013.
             https://www.researchgate.net/publication/261134127_PIE_A_lightweight_control_scheme_to_address_the_bufferbloat_problem?origin=mail.

   [IETF-ECN] Briscoe, B., Kaippallimalil, J., and Thaler, P.,
             "Guidelines for Adding Congestion Notification to Protocols
             that Encapsulate IP", draft-ietf-tsvwg-ecn-encap-guidelines.

11.3 Other References

   [PI]      Hollot, C.V., Misra, V., Towsley, D., and Gong, W., "On
             Designing Improved Controllers for AQM Routers Supporting
             TCP Flows", Infocom 2001.

   [QCN]     "Data Center Bridging - Congestion Notification",
             http://www.ieee802.org/1/pages/802.1au.html.

   [RED]     Floyd, S. and Jacobson, V., "Random Early Detection (RED)
             Gateways for Congestion Avoidance", IEEE/ACM Transactions
             on Networking, August 1993.

   [TCP-Models] Misra, V., Gong, W., and Towsley, D., "Fluid-based
             Analysis of a Network of AQM Routers Supporting TCP Flows
             with an Application to RED", SIGCOMM 2000.

Authors' Addresses

   Rong Pan
   Cisco Systems
   3625 Cisco Way,
   San Jose, CA 95134, USA
   Email: ropan@cisco.com

   Preethi Natarajan,
   Cisco Systems
   725 Alder Drive,
   Milpitas, CA 95035, USA
   Email: prenatar@cisco.com

   Fred Baker
   Cisco Systems
   725 Alder Drive,
   Milpitas, CA 95035, USA
   Email: fred@cisco.com

   Greg White
   CableLabs
   858 Coal Creek Circle
   Louisville, CO 80027, USA
   Email: g.white@cablelabs.com

Other Contributors' Addresses

   Bill Ver Steeg
   Comcast Cable
   Email: William_VerSteeg@comcast.com

   Mythili Prabhu*
   Akamai Technologies
   3355 Scott Blvd
   Santa Clara, CA - 95054
   Email: mythili@akamai.com

   Chiara Piglione*
   Broadcom Corporation
   3151 Zanker Road
   San Jose, CA 95134
   Email: chiara@broadcom.com

   Vijay Subramanian*
   PLUMgrid, Inc.
   350 Oakmead Parkway,
   Suite 250
   Sunnyvale, CA 94085
   Email: vns@plumgrid.com

   * Formerly at Cisco Systems

12. The Basic PIE Pseudo Code

   Configurable Parameters:
      - QDELAY_REF. AQM Latency Target (default: 15ms)
      - MAX_BURST. AQM Max Burst Allowance (default: 150ms)

   Internal Parameters:
      - Weights in the drop probability calculation (1/s):
        alpha (default: 1/8), beta (default: 1 + 1/4)
      - T_UPDATE: a period to calculate drop probability (default: 15ms)

   Table which stores status variables (ending with "_"):
      - burst_allowance_: current burst allowance
      - drop_prob_: the current packet drop probability; reset to 0
      - qdelay_old_: the previous queue delay; reset to 0

   Public/system functions:
      - queue_.               Holds the pending packets.
      - drop(packet).         Drops/discards a packet
      - now().                Returns the current time
      - random().             Returns a uniform r.v. in the range 0 ~ 1
      - queue_.byte_length(). Returns current queue_ length in bytes
      - queue_.enque(packet). Adds packet to tail of queue_
      - queue_.deque().       Returns the packet from the head of queue_
      - packet.size().        Returns size of packet
      - packet.timestamp_delay(). Returns timestamped packet latency

   ============================

   //called on each packet arrival
   enque(Packet packet) {
      if (PIE->drop_prob_ == 0 && current_qdelay < QDELAY_REF/2
          && PIE->qdelay_old_ < QDELAY_REF/2) {
         PIE->burst_allowance_ = MAX_BURST;
      }
      if (PIE->burst_allowance_ == 0 && drop_early() == DROP) {
         drop(packet);
      } else {
         queue_.enque(packet);
      }
   }

   ===========================

   drop_early() {

      //Safeguard PIE to be work conserving
      if ( (PIE->qdelay_old_ < QDELAY_REF/2 && PIE->drop_prob_ < 0.2)
           || (queue_.byte_length() <= 2 * MEAN_PKTSIZE) ) {
         return ENQUE;
      }

      double u = random();
      if (u < PIE->drop_prob_) {
         return DROP;
      } else {
         return ENQUE;
      }
   }

   ===========================
   //we choose the timestamp option of obtaining latency for clarity
   //rate estimation method can be found in the extended PIE pseudo code

   deque(Packet packet) {
      current_qdelay = packet.timestamp_delay();
   }

   ============================
   //update periodically, T_UPDATE = 15ms

   calculate_drop_prob() {

      //can be implemented using integer multiply
      p = alpha*(current_qdelay - QDELAY_REF) + \
          beta*(current_qdelay - PIE->qdelay_old_);

      if (PIE->drop_prob_ < 0.000001) {
         p /= 2048;
      } else if (PIE->drop_prob_ < 0.00001) {
         p /= 512;
      } else if (PIE->drop_prob_ < 0.0001) {
         p /= 128;
      } else if (PIE->drop_prob_ < 0.001) {
         p /= 32;
      } else if (PIE->drop_prob_ < 0.01) {
         p /= 8;
      } else if (PIE->drop_prob_ < 0.1) {
         p /= 2;
      } else {
         p = p;
      }

      PIE->drop_prob_ += p;

      //Exponentially decay drop prob when congestion goes away
      if (current_qdelay == 0 && PIE->qdelay_old_ == 0) {
         PIE->drop_prob_ *= 0.98;    //1 - 1/64 is sufficient
      }

      //bound drop probability
      if (PIE->drop_prob_ < 0)
         PIE->drop_prob_ = 0.0;
      if (PIE->drop_prob_ > 1)
         PIE->drop_prob_ = 1.0;

      PIE->qdelay_old_ = current_qdelay;

      PIE->burst_allowance_ = max(0, PIE->burst_allowance_ - T_UPDATE);
   }

13. Pseudo Code for PIE with Optional Enhancement

   Configurable Parameters:
      - QDELAY_REF. AQM Latency Target (default: 15ms)
      - MAX_BURST. AQM Max Burst Allowance (default: 150ms)
      - MAX_ECNTH. AQM Max ECN Marking Threshold (default: 10%)

   Internal Parameters:
      - Weights in the drop probability calculation (1/s):
        alpha (default: 1/8), beta (default: 1 + 1/4)
      - DQ_THRESHOLD: in bytes (default: 2^14, a power of 2)
      - T_UPDATE: a period to calculate drop probability (default: 15ms)
      - TAIL_DROP: each queue has a tail drop threshold; pass it to PIE

   Table which stores status variables (ending with "_"):
      - active_: INACTIVE/ACTIVE
      - burst_allowance_: current burst allowance
      - drop_prob_: the current packet drop probability; reset to 0
      - accu_prob_: accumulated drop probability; reset to 0
      - qdelay_old_: the previous queue delay estimate; reset to 0
      - last_timestamp_: timestamp of previous status update
      - dq_count_, measurement_start_, in_measurement_,
        avg_dq_time_: variables for measuring the average dequeue rate

   Public/system functions:
      - queue_.               Holds the pending packets.
      - drop(packet).         Drops/discards a packet
      - mark(packet).         Marks ECN for a packet
      - now().                Returns the current time
      - random().             Returns a uniform r.v. in the range 0 ~ 1
      - queue_.byte_length(). Returns current queue_ length in bytes
      - queue_.enque(packet). Adds packet to tail of queue_
      - queue_.deque().       Returns the packet from the head of queue_
      - packet.size().        Returns size of packet
      - packet.ecn().         Returns whether packet is ECN capable or not

   ============================
   //called on each packet arrival
   enque(Packet packet) {
      if (queue_.byte_length() + packet.size() > TAIL_DROP) {
         drop(packet);
         PIE->accu_prob_ = 0;
      } else if (PIE->active_ == TRUE && drop_early() == DROP
                 && PIE->burst_allowance_ == 0) {
         if (PIE->drop_prob_ < MAX_ECNTH && packet.ecn() == TRUE)
            mark(packet);
         else
            drop(packet);
         PIE->accu_prob_ = 0;
      } else {
         queue_.enque(packet);
      }

      //If the queue is over a certain threshold, turn on PIE
      if (PIE->active_ == INACTIVE
          && queue_.byte_length() >= TAIL_DROP/3) {
         PIE->active_ = ACTIVE;
         PIE->qdelay_old_ = 0;
         PIE->drop_prob_ = 0;
         PIE->in_measurement_ = TRUE;
         PIE->dq_count_ = 0;
         PIE->avg_dq_time_ = 0;
         PIE->last_timestamp_ = now;
         PIE->burst_allowance_ = MAX_BURST;
         PIE->accu_prob_ = 0;
         PIE->measurement_start_ = now;
      }

      //If the queue has been idle for a while, turn off PIE
      //reset counters when accessing the queue after some idle
      //period if PIE was active before
      if ( PIE->drop_prob_ == 0 && PIE->qdelay_old_ == 0
           && current_qdelay == 0) {
         PIE->active_ = INACTIVE;
         PIE->in_measurement_ = FALSE;
      }
   }

   ===========================

   drop_early() {

      //PIE is active but the queue is not congested: return ENQUE
      if ( (PIE->qdelay_old_ < QDELAY_REF/2 && PIE->drop_prob_ < 0.2)
           || (queue_.byte_length() <= 2 * MEAN_PKTSIZE) ) {
         return ENQUE;
      }

      if (PIE->drop_prob_ == 0) {
         PIE->accu_prob_ = 0;
      }

      //For practical reasons, drop probability can be further scaled
      //according to packet size, but a bound needs to be set
      //to avoid unnecessary bias

      //Random drop
      PIE->accu_prob_ += PIE->drop_prob_;
      if (PIE->accu_prob_ < 0.85)
         return ENQUE;
      if (PIE->accu_prob_ >= 8.5)
         return DROP;
      double u = random();
      if (u < PIE->drop_prob_) {
         PIE->accu_prob_ = 0;
         return DROP;
      } else {
         return ENQUE;
      }
   }

   ============================
   //update periodically, T_UPDATE = 15ms
   calculate_drop_prob() {
      if ( (now - PIE->last_timestamp_) >= T_UPDATE &&
           PIE->active_ == ACTIVE) {

         //can be implemented using integer multiply;
         //DQ_THRESHOLD is a power of 2 value
         current_qdelay = queue_.byte_length() *
                          PIE->avg_dq_time_/DQ_THRESHOLD;

         p = alpha*(current_qdelay - QDELAY_REF) + \
             beta*(current_qdelay - PIE->qdelay_old_);

         if (PIE->drop_prob_ < 0.000001) {
            p /= 2048;
         } else if (PIE->drop_prob_ < 0.00001) {
            p /= 512;
         } else if (PIE->drop_prob_ < 0.0001) {
            p /= 128;
         } else if (PIE->drop_prob_ < 0.001) {
            p /= 32;
         } else if (PIE->drop_prob_ < 0.01) {
            p /= 8;
         } else if (PIE->drop_prob_ < 0.1) {
            p /= 2;
         } else {
            p = p;
         }

         if (PIE->drop_prob_ >= 0.1 && p > 0.02) {
            p = 0.02;
         }
         PIE->drop_prob_ += p;

         //Exponentially decay drop prob when congestion goes away
         if (current_qdelay < QDELAY_REF/2 && PIE->qdelay_old_ <
             QDELAY_REF/2) {
            PIE->drop_prob_ *= 0.98;    //1 - 1/64 is sufficient
         }

         //bound drop probability
         if (PIE->drop_prob_ < 0)
            PIE->drop_prob_ = 0;
         if (PIE->drop_prob_ > 1)
            PIE->drop_prob_ = 1;

         PIE->qdelay_old_ = current_qdelay;
         PIE->last_timestamp_ = now;
         PIE->burst_allowance_ = max(0, PIE->burst_allowance_ - T_UPDATE);
      }
   }

   ==========================
   //called on each packet departure
   deque(Packet packet) {

      //dequeue rate estimation
      if (PIE->in_measurement_ == TRUE) {
         PIE->dq_count_ = packet.size() + PIE->dq_count_;
         //update the average dequeue time once enough data has departed
         if (PIE->dq_count_ >= DQ_THRESHOLD) {
            dq_time = now - PIE->measurement_start_;
            if (PIE->avg_dq_time_ == 0) {
               PIE->avg_dq_time_ = dq_time;
            } else {
               weight = DQ_THRESHOLD/2^16;
               PIE->avg_dq_time_ = dq_time*weight +
                                   PIE->avg_dq_time_*(1 - weight);
            }
            PIE->in_measurement_ = FALSE;
         }
      }

      //start a measurement if we have enough data in the queue:
      if (queue_.byte_length() >= DQ_THRESHOLD &&
          PIE->in_measurement_ == FALSE) {
         PIE->in_measurement_ = TRUE;
         PIE->measurement_start_ = now;
         PIE->dq_count_ = 0;
      }
   }