2 Active Queue Management F. Baker 3 Internet-Draft R. Pan 4 Intended status: Informational Cisco Systems 5 Expires: December 15, 2014 June 13, 2014 7 On Queuing, Marking, and Dropping 8 draft-baker-aqm-sfq-implementation-00 10 Abstract 12 This note discusses implementation strategies for coupled queuing and 13 mark/drop algorithms. 15 Status of This Memo 17 This Internet-Draft is submitted in full conformance with the 18 provisions of BCP 78 and BCP 79. 20 Internet-Drafts are working documents of the Internet Engineering 21 Task Force (IETF).
Note that other groups may also distribute 22 working documents as Internet-Drafts. The list of current Internet- 23 Drafts is at http://datatracker.ietf.org/drafts/current/. 25 Internet-Drafts are draft documents valid for a maximum of six months 26 and may be updated, replaced, or obsoleted by other documents at any 27 time. It is inappropriate to use Internet-Drafts as reference 28 material or to cite them other than as "work in progress." 30 This Internet-Draft will expire on December 15, 2014. 32 Copyright Notice 34 Copyright (c) 2014 IETF Trust and the persons identified as the 35 document authors. All rights reserved. 37 This document is subject to BCP 78 and the IETF Trust's Legal 38 Provisions Relating to IETF Documents 39 (http://trustee.ietf.org/license-info) in effect on the date of 40 publication of this document. Please review these documents 41 carefully, as they describe your rights and restrictions with respect 42 to this document. Code Components extracted from this document must 43 include Simplified BSD License text as described in Section 4.e of 44 the Trust Legal Provisions and are provided without warranty as 45 described in the Simplified BSD License. 47 Table of Contents 49 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 50 2. Fair Queuing: Algorithms and History . . . . . . . . . . . . 2 51 2.1. Generalized Processor Sharing . . . . . . . . . . . . . . 3 52 2.1.1. GPS Comparisons: transmission quanta . . . . . . . . 3 53 2.1.2. GPS Comparisons: flow definition . . . . . . . . . . 4 54 2.1.3. GPS Comparisons: unit of measurement . . . . . . . . 5 55 2.2. GPS Approximations . . . . . . . . . . . . . . . . . . . 5 56 2.2.1. Definition of a queuing algorithm . . . . . . . . . . 5 57 2.2.2. Round Robin Models . . . . . . . . . . . . . . . . . 6 58 2.2.3. Calendar Queue Models . . . . . . . . . . . . . . . . 7 59 2.2.4. Work Conserving Models and Stochastic Fairness 60 Queuing . . . . . . . . . . . . . . . . . . . . . . . 
8 61 2.2.5. Non Work Conserving Models and Virtual Clock . . . . 9 62 3. Queuing, Marking, and Dropping . . . . . . . . . . . . . . . 10 63 3.1. Queuing with Tail Mark/Drop . . . . . . . . . . . . . . . 10 64 3.2. Queuing with CoDel Mark/Drop . . . . . . . . . . . . . . 10 65 3.3. Queuing with PIE Mark/Drop . . . . . . . . . . . . . . . 11 66 4. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 12 67 5. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 12 68 6. Security Considerations . . . . . . . . . . . . . . . . . . . 12 69 7. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 12 70 8. References . . . . . . . . . . . . . . . . . . . . . . . . . 13 71 8.1. Normative References . . . . . . . . . . . . . . . . . . 13 72 8.2. Informative References . . . . . . . . . . . . . . . . . 13 73 Appendix A. Change Log . . . . . . . . . . . . . . . . . . . . . 14 74 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 14 76 1. Introduction 78 In the discussion of Active Queue Management, there has been 79 discussion of the coupling of queue management algorithms such as 80 Stochastic Fairness Queuing [SFQ] with mark/drop algorithms such as 81 CoDel [I-D.nichols-tsvwg-codel] or PIE [I-D.pan-tsvwg-pie]. In the 82 interest of clarifying the discussion, we document possible 83 implementation approaches to that, and analyze the possible effects 84 and side-effects. The language and model derive from the 85 Architecture for Differentiated Services [RFC2475]. 87 2. Fair Queuing: Algorithms and History 89 There is extensive history in the set of algorithms collectively 90 referred to as "Fair Queuing". The model was initially discussed in 91 [RFC0970], which proposed it hypothetically as a solution to the TCP 92 Silly Window Syndrome issue in BSD 4.1. 
The problem was that, due to 93 a TCP implementation bug, some senders would settle into sending a 94 long stream of very short segments, which unnecessarily consumed 95 bandwidth in TCP and IP headers and occupied short packet buffers, 96 thereby disrupting competing sessions. Nagle suggested that if 97 packet streams were sorted by their source address and the sources 98 treated in a round robin fashion, a sender's effect on end-to-end 99 latency and loss rate would primarily fall on itself. 100 This touched off perhaps a decade of work by various researchers on 101 what was and is termed "Fair Queuing," philosophical discussions of 102 the meaning of the word "fair," operational reasons that one might 103 want a "weighted" or "predictably unfair" queuing algorithm, and so 104 on. 106 2.1. Generalized Processor Sharing 108 Conceptually, any Fair Queuing algorithm attempts to implement some 109 approximation to the Generalized Processor Sharing [GPS] model. 111 The GPS model, in its essence, presumes that a set of identified data 112 streams, called "flows", pass through an interface. Each flow has a 113 rate when measured over a period of time; a voice session might, for 114 example, require 64 kbps plus whatever overhead is necessary to 115 deliver it, and a TCP session might have variable throughput 116 depending on where it is in its evolution. The premise of 117 Generalized Processor Sharing is that on all time scales, the flow 118 occupies a predictable bit rate, so that if there is enough bandwidth 119 for the flow in the long term, it also lacks nothing in the short 120 term. "All time scales" is obviously untenable in a packet network - 121 and even in a traditional TDM circuit switch network - because a 122 timescale shorter than the duration of a packet will only see one 123 packet at a time. But it provides an ideal for other models to be 124 compared against.
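The GPS ideal can be stated numerically. The sketch below is not from this note; the capacity, weights, and flow names are illustrative assumptions. It computes the instantaneous rate each continuously backlogged flow would receive under pure GPS:

```python
# Sketch of the GPS ideal: on every time scale, each backlogged flow i
# receives capacity * w_i / sum(w).  Weights and capacity here are
# illustrative assumptions, not values from this note.

def gps_shares(capacity_bps, weights):
    """Instantaneous GPS rate, in bits per second, for each backlogged flow."""
    total = sum(weights.values())
    return {flow: capacity_bps * w / total for flow, w in weights.items()}

# One voice flow and two TCP flows sharing a 1 Mbps link:
shares = gps_shares(1_000_000, {"voice": 1, "tcp-a": 2, "tcp-b": 2})
print(shares)  # voice: 200000.0, tcp-a: 400000.0, tcp-b: 400000.0
```

Real schedulers can only approximate this split, packet by packet, which is the subject of the comparisons that follow.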
126 There are a number of attributes of approximations to the GPS model 127 that bear operational consideration, including at least the 128 transmission quanta, the definition of a "flow", and the unit of 129 measurement. Implementation algorithms have different practical 130 impacts as well. 132 2.1.1. GPS Comparisons: transmission quanta 134 The most obvious comparison between the GPS model and common 135 approximations to it is that real-world data is not delivered 136 uniformly, but in some quantum. The smallest quantum, in a packet 137 network, is a packet. But quanta can be larger; for example, in 138 video applications it is common to describe data flow in frames per 139 second, where a frame describes a picture on a screen or the changes 140 made from a previous one. A single video frame is commonly on the 141 order of tens of packets. If a codec is delivering thirty frames per 142 second, it is conceivable that the packets comprising a frame might 143 be sent as thirty bursts per second, with each burst sent at the 144 interface rate of the camera or other sender. Similarly, TCP 145 exchanges have an initial window, which might be any number of 146 packets; common values are 1, 2, 3, 4, and 10, and there are also 147 reports of bursts of 65K bytes at the relevant MSS, which is to say 148 about 45 packets in one burst, presumably coming from TCP offload 149 engines. After that initial burst, TCP senders commonly send pairs 150 of packets, but may send either smaller or larger bursts, and the 151 rate at which they send is governed by the arrival rate of 152 acknowledgements from the receiver. 154 2.1.2. GPS Comparisons: flow definition 156 An important engineering trade-off relevant to GPS is the definition 157 of a "flow". A flow is, by definition, a defined data stream.
158 Common definitions include: 160 o Packets in a single transport layer session ("microflow"), 161 identified by a five-tuple [RFC2990], 163 o Packets between a single pair of addresses, identified by a source 164 and destination address or prefix, 166 o Packets from a single source address or prefix [RFC0970], 168 o Packets to a single destination address or prefix, 170 o Packets to or from a single subscriber, customer, or peer 171 [RFC6057]. In Service Provider operations, this might be a 172 neighboring Autonomous System; in broadband, a residential 173 customer. 175 The difference should be apparent. Consider a comparison between 176 sorting by source address or destination address, to pick two 177 examples, in the case that a given router interface has N application 178 sessions going through it between N/2 local destinations and N remote 179 sources. Sorting by source, or in this case by source/destination 180 pair, would give each remote peer an upper bound guarantee of 1/N of 181 the available capacity, which might be distributed very unevenly 182 among the local destinations. Sorting by destination would give each 183 local destination an upper bound guarantee of 2/N of the available 184 capacity, which might be distributed very unevenly among the remote 185 systems and correlated sessions. To whom is one fair? Each approach 186 delivers equal service by its own definition, but that might not be 187 someone else's definition. 189 2.1.3. GPS Comparisons: unit of measurement 191 Finally, there is the question of what is measured for rate. If 192 the sole objective is to force packet streams to not dominate each 193 other, it is sufficient to count packets. However, if the issue is 194 the bit rate of an SLA, one must consider the sizes of the packets 195 (the aggregate throughput of a flow, measured in bits or bytes). If 196 predictable unfairness is a consideration, the value must be 197 weighted accordingly. 199 2.2.
GPS Approximations 201 Carrying the matter further, a queuing algorithm may also be termed 202 "Work Conserving" or "Non Work Conserving". A "work conserving" 203 queue, by definition, is either empty, in which case no attempt 204 is being made to dequeue data from it, or contains something, in 205 which case the algorithm continuously tries to empty the queue. A work 206 conserving queue that contains queued data, at an interface with a 207 given rate, will deliver data at that rate until it empties. A non- 208 work-conserving queue might stop delivering even though it still 209 contains data. A common reason for doing this is to impose an 210 artificial upper bound on a class of traffic that is lower than the 211 rate of the underlying physical interface. 213 2.2.1. Definition of a queuing algorithm 215 In the discussion following, we assume a basic definition of a 216 queuing algorithm. A queuing algorithm has, at minimum: 218 o Some form of internal storage for the elements kept in the queue, 220 o If it has multiple internal classifications, 222 * a method for classifying elements, 224 * additional storage for the classifier and implied classes, 226 o a method for creating the queue, 228 o a method for destroying the queue, 230 o a method, called "enqueue", for placing packets into the queue or 231 queuing system, 233 o a method, called "dequeue", for removing packets from the queue or 234 queuing system. 236 There may also be other information or methods, such as the ability 237 to inspect the queue. It also often has inspectable external 238 attributes, such as the total volume of packets or bytes in queue, 239 and may have limit thresholds, such as a maximum number of packets or 240 bytes the queue might hold. 242 For example, a simple FIFO queue has a linear data structure, 243 enqueues packets at the tail, and dequeues packets from the head. It 244 might have a maximum queue depth and a current queue depth, 245 maintained in packets or bytes. 247 2.2.2.
Round Robin Models 249 One class of implementation approaches, generically referred to as 250 "Weighted Round Robin", implements the structure of the queue as an 251 array or ring of queues associated with flows, for whatever 252 definition of a flow is important. 254 On enqueue, the enqueue function classifies a packet and places it 255 into a simple FIFO sub-queue. 257 On dequeue, the sub-queues are searched in round-robin order, and 258 when a sub-queue that contains data is found, the dequeue function removes a 259 specified quantum of data from it. That quantum is at minimum a 260 packet, but it may be more. If the system is intended to maintain a 261 byte rate, the excess previously dequeued is remembered 262 between searches. 264 +-+ 265 +>|1| 266 | +-+ 267 | | 268 | +-+ +-+ 269 | |1| +>|3| 270 | +-+ | +-+ 271 | | | | 272 | +-+ +-+ | +-+ 273 | |1| +>|2| | |3| 274 | +-+ | +-+ | +-+ 275 | A | A | A 276 | | | | | | 277 ++--++ ++--++ ++--++ 278 +->| Q |-->| Q |-->| Q |--+ 279 | +----+ +----+ +----+ | 280 +----------------------------+ 282 Figure 1: Round Robin Queues 284 If a hash is used as a classifier, the modulus of the hash might be 285 used as an array index, selecting the sub-queue that the packet will 286 go into. One can imagine other classifiers, such as using a DSCP 287 value as an index into an array containing the queue number for a 288 flow, or more complex access list implementations. 290 In any event, a sub-queue contains the traffic for a flow, and data 291 is sent from each sub-queue in succession. 293 2.2.3.
Calendar Queue Models 295 Another class of implementation approaches, generically referred to 296 as "Weighted Fair Queues" or "Calendar Queue Implementations", 297 implements the structure of the queue as an array or ring of queues 298 (often called "buckets") associated with time or sequence; each 299 bucket contains the set of packets, which may be null, intended to be 300 sent at a certain time or following the emptying of the previous 301 bucket. The queue structure includes a look-aside table that 302 indicates the current depth (which is to say, the next bucket) of any 303 given class of traffic, which might similarly be identified using a 304 hash, a DSCP, an access list, or any other classifier. Conceptually, 305 the queues each contain zero or more packets from each class of 306 traffic. One is the queue being emptied "now"; the rest are 307 associated with some time or sequence in the future. 309 On enqueue, the enqueue function classifies a packet and determines 310 the current depth of that class, with a view to scheduling it for 311 transmission at some time or sequence in the future. If the unit of 312 scheduling is a packet, the queuing quantum is one packet per sub- 313 queue, a burst of packets arrives in a given flow, and at the start 314 the flow has no queued data, then the first packet goes into the "next" 315 queue, the second into its successor, and so on; if there was already some 316 data queued in the class, the first packet in the burst would go into the 317 bucket pointed to by the look-aside table. If the unit of scheduling 318 is time, the explanation in Section 2.2.5 might be simplest to 319 follow, but the bucket selected will be the bucket corresponding to a 320 given transmission time in the future. A necessary side-effect, 321 memory being finite, is that there exist a finite number of "future" 322 buckets. If enough traffic arrives to cause a class to wrap, one is 323 forced to drop something (tail-drop).
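That enqueue step, including the forced tail-drop when a class wraps, can be sketched as follows. This is a sketch under stated assumptions, not a definitive implementation: the ring size is hypothetical, the quantum is one packet per sub-queue, and classes are simple keys rather than real classifier output.

```python
from collections import deque

class CalendarQueue:
    """Ring of buckets plus a look-aside table of per-class depth."""

    def __init__(self, n_buckets=8):
        self.buckets = [deque() for _ in range(n_buckets)]
        self.now = 0        # index of the bucket being emptied "now"
        self.depth = {}     # class -> next bucket number to schedule into

    def enqueue(self, pkt, klass):
        # A class with no queued data starts in the "next" bucket;
        # otherwise the look-aside table points at its current depth.
        nxt = self.depth.get(klass, self.now + 1)
        if nxt - self.now >= len(self.buckets):
            return False    # finite "future": forced tail-drop on wrap
        self.buckets[nxt % len(self.buckets)].append(pkt)
        self.depth[klass] = nxt + 1
        return True
```

With four buckets, a burst in one class fills the three "future" buckets and the fourth packet is tail-dropped, illustrating the wrap condition described above.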
325 On dequeue, the buckets are searched at their stated times or in 326 their stated sequence, and when a bucket that contains 327 data is found, a specified quantum of data is removed from it and, by extension, 328 from the associated traffic classes. A single bucket might contain 329 data from a number of classes simultaneously. 331 +-+ 332 +>|1| 333 | +-+ 334 | | 335 | +-+ +-+ 336 | |2| +>|2| 337 | +-+ | +-+ 338 | | | | 339 | +-+ | +-+ +-+ 340 | |3| | |1| +>|1| 341 | +-+ | +-+ | +-+ 342 | A | A | A 343 | | | | | | 344 ++--++ ++--++ ++--++ 345 "now"+->| Q |-->| Q |-->| Q |-->... 346 +----+ +----+ +----+ 347 A A A 348 |3 |2 |1 349 +++++++++++++++++++++++ 350 |||| Flow |||| 351 +++++++++++++++++++++++ 353 Figure 2: Calendar Queue 355 In any event, a sub-queue contains the traffic for a point in time or 356 a point in sequence, and data is sent from each sub-queue in 357 succession. If sub-queues are associated with time, an interesting 358 end case develops: If the system is draining a given sub-queue, and 359 the time of the next sub-queue arrives, what should the system do? 360 One potentially valid line of reasoning would have it continue 361 delivering the data in the present queue, on the assumption that it 362 will likely trade off for time in the next. Another potentially 363 valid line of reasoning would have it discard any waiting data in the 364 present queue and move to the next. 366 2.2.4. Work Conserving Models and Stochastic Fairness Queuing 368 McKenney's Stochastic Fairness Queuing [SFQ] is an example of a work 369 conserving algorithm. This algorithm measures packets, and considers 370 a "flow" to be an equivalence class of traffic defined by a hashing 371 algorithm over the source and destination IPv4 addresses. As packets 372 arrive, the enqueue function performs the indicated hash and places 373 the packet into the indicated sub-queue.
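The hash-based enqueue, together with the round-robin dequeue the text goes on to describe, can be sketched as follows. The sub-queue count is an illustrative assumption, and Python's built-in hash stands in for the hash function a real implementation would use (and might perturb over time).

```python
from collections import deque

N_SUBQUEUES = 8                  # illustrative; not a value from [SFQ]
subqueues = [deque() for _ in range(N_SUBQUEUES)]
_cursor = 0                      # round-robin position for dequeue

def enqueue(pkt, src_addr, dst_addr):
    """Hash the IPv4 source/destination pair to pick the sub-queue."""
    idx = hash((src_addr, dst_addr)) % N_SUBQUEUES
    subqueues[idx].append(pkt)

def dequeue():
    """Inspect sub-queues in round-robin sequence; serve one packet."""
    global _cursor
    for _ in range(N_SUBQUEUES):
        q = subqueues[_cursor]
        _cursor = (_cursor + 1) % N_SUBQUEUES
        if q:
            return q.popleft()
    return None                  # queuing system is empty
```

The linear scan in dequeue is the WRR search-time weakness mentioned later in this section: when the system is nearly empty, most of the inspected sub-queues contain nothing.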
The dequeue function 374 operates as described in Section 2.2.2; sub-queues are inspected in 375 round-robin sequence, and if they contain one or more packets, a 376 packet is removed. 378 Shreedhar's Deficit Round Robin [DRR] model changes the quantum to 379 bytes, and deals with variable-length packets. A sub-queue 380 descriptor contains a waiting quantum (the amount intended to be 381 dequeued on the previous dequeue attempt that was not satisfied), a 382 per-round quantum (the sub-queue is intended to dequeue a certain 383 number of bytes each round), and a maximum to permit (some multiple 384 of the MTU). In each dequeue attempt, the dequeue method sets the 385 waiting quantum to the smaller of the maximum quantum and the sum of 386 the waiting and per-round quanta. It then dequeues up to the 387 waiting quantum, in bytes, of packets in the queue, and reduces the 388 waiting quantum by the number of bytes dequeued. Since packets will 389 not normally be exactly the size of the quantum, some dequeue 390 attempts will dequeue more than others, but they will over time 391 average the per-round quantum if there is data present. 393 McKenney's or Shreedhar's models could be implemented as described in 394 Section 2.2.3. The weakness of a WRR approach is the search time 395 expended when the queuing system is relatively empty, which the 396 calendar queue model obviates. 398 2.2.5. Non Work Conserving Models and Virtual Clock 400 Zhang's Virtual Clock [VirtualClock] is an example of a non-work- 401 conserving algorithm. It is trivially implemented as described in 402 Section 2.2.3. It associates buckets with intervals in time, with 403 durations on the order of microseconds to tens of milliseconds. Each 404 flow is assigned a rate in bytes per interval. The flow entry 405 maintains the point in time at which the "next" packet in the flow should be 406 scheduled.
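That per-flow entry, with the "next schedule" update the following paragraph describes, can be sketched as follows. The field names, and the use of seconds rather than intervals, are assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    rate: float         # assigned rate, bytes per second (assumed unit)
    next_sched: float   # time the "next" packet in the flow should go

    def schedule(self, pkt_len, now):
        """Pick this packet's time and advance the flow's virtual clock."""
        when = self.next_sched if self.next_sched > now else now
        nxt = self.next_sched + pkt_len / self.rate
        self.next_sched = nxt if nxt > now else now
        return when
```

A flow that has been idle resumes at "now" rather than catching up on unused past credit, which is what keeps the rate bound in force.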
408 On enqueue, the method determines whether the "next schedule" time is 409 "in the past"; if so, the packet is scheduled "now", and if not, the 410 packet is scheduled at that time. It then calculates the new "next 411 schedule" time, as the current "next schedule" time plus the length 412 of the packet divided by the rate; if the resulting time is also in 413 the past, the "next schedule" time is set to "now", and otherwise to 414 the calculated time. As noted in Section 2.2.3, there is an 415 interesting point regarding "too much time in the future"; if a 416 packet is scheduled too far into the future, it may be marked or 417 dropped in the AQM procedure, and if it runs beyond the end of the 418 queuing system, may be defensively tail dropped. 420 On dequeue, the bucket associated with the time "now" is inspected. 421 If it contains a packet, the packet is dequeued and transmitted. If 422 the bucket is empty and the time for the next bucket has not arrived, 423 the system waits, even if there is a packet in the next bucket. As 424 noted in Section 2.2.3, there is an interesting point regarding the 425 queue associated with "now". If a subsequent bucket, even if it is 426 actually empty, would be delayed by the transmission of a packet, one 427 could imagine marking the packet ECN CE [RFC3168] [RFC6679] or 428 dropping the packet. 430 3. Queuing, Marking, and Dropping 432 Queuing, marking, and dropping are integrated in any system that has 433 a queue. If nothing else, as memory is finite, a system has to drop 434 as discussed in Section 2.2.3 and Section 2.2.5 in order to protect 435 itself. However, host transports interpret drops as signals, so AQM 436 algorithms use that as a mechanism to signal. 438 It is useful to think of the effects of queuing as a signal as well. 
439 In TCP, SCTP, and protocols like them, delay experienced by a packet 440 can be used to guess the rate available at a given time on a path 441 even though the characteristics of the path and competing traffic 442 remain unknown [PacketPair]. The mathematical side of that is that 443 if two packets were sent at the same time, the size of 444 the second packet divided by the difference in arrival times of the 445 two packets cannot exceed the capacity of the link (although it may 446 well be lower). From an engineering perspective, the receiver sends 447 acknowledgements as data is received, so the arrival of 448 acknowledgements at the sender paces the sender at approximately the 449 average rate it is able to achieve through the network. This is true 450 even if the sender keeps an arbitrarily large amount of data stored 451 in network queues, and is the basis for delay-based congestion 452 control algorithms. So, delaying a packet momentarily in order to 453 permit another session to improve its operation has the effect of 454 signaling a slightly lower capacity to the sender. 456 3.1. Queuing with Tail Mark/Drop 458 In the default case, in which a FIFO queue is used with defensive 459 tail-drop only, the effect is therefore to signal to the sender in 460 two ways: 462 o Ack Clocking, pacing the sender to send at approximately the rate 463 it can deliver data to the receiver, and 465 o Defensive loss, when a sender sends faster than available capacity 466 (such as by probing network capacity when fully utilizing that 467 capacity) and overburdens a queue. 469 3.2. Queuing with CoDel Mark/Drop 471 In any case wherein a queuing algorithm is used along with CoDel 472 [I-D.nichols-tsvwg-codel], the sequence of events is that a packet is 473 time-stamped, enqueued, dequeued, compared to a subsequent reading of 474 the clock, and then acted on, whether by dropping it, marking and 475 forwarding it, or simply forwarding it.
This is to say that the only 476 drop algorithm inherent in queuing is the defensive drop when the 477 queue's resources are overrun. However, the intention of marking or 478 dropping is to signal to the sender much earlier, when a certain 479 amount of delay has been observed. The CoDel algorithm is completely 480 separate from the queuing algorithm. Hence, in a FIFO+CoDel, 481 SFQ+CoDel, or Virtual Clock+CoDel implementation, the queuing 482 algorithm is completely separate from the AQM algorithm. Using them 483 in series results in four signals to the sender: 485 o Ack Clocking, pacing the sender to send at approximately the rate 486 it can deliver data to the receiver through a queue, 488 o Lossless signaling that a certain delay threshold has been 489 reached, if ECN [RFC3168][RFC6679] is in use, 491 o Intentional signaling via loss that a certain delay threshold has 492 been reached, if ECN is not in use, and 494 o Defensive loss, when a sender sends faster than available capacity 495 (such as by probing network capacity when fully utilizing that 496 capacity) and overburdens a queue. 498 3.3. Queuing with PIE Mark/Drop 500 In any case wherein a queuing algorithm is used along with PIE 501 [I-D.pan-tsvwg-pie], RED, or other such algorithms, the sequence of 502 events is that a queue is inspected, a packet is dropped, marked, or 503 left unchanged, enqueued, dequeued, compared to a subsequent reading 504 of the clock, and then forwarded on. This is to say that the AQM 505 Mark/Drop Algorithm precedes enqueue; if it has not been effective 506 and as a result the queue is out of resources anyway, the defensive 507 drop algorithm steps in, and failing that, the queue operates in 508 whatever way it does. Hence, in a FIFO+PIE, SFQ+PIE, or Virtual 509 Clock+PIE implementation, the queuing algorithm is again completely 510 separate from the AQM algorithm.
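The ordering contrast between Section 3.2 and Section 3.3 can be made concrete. The helper names below are hypothetical, and the sketch shows only where the AQM decision sits relative to the queue, not CoDel's or PIE's actual control laws: a CoDel-style AQM acts after dequeue on the measured sojourn time, while a PIE- or RED-style AQM acts before enqueue.

```python
import time
from collections import deque

def codel_style_enqueue(queue, pkt):
    queue.append((time.monotonic(), pkt))   # timestamp, then enqueue

def codel_style_dequeue(queue, aqm_decide):
    ts, pkt = queue.popleft()
    sojourn = time.monotonic() - ts         # AQM decision AFTER dequeue,
    return aqm_decide(pkt, sojourn)         # based on time spent in queue

def pie_style_enqueue(queue, pkt, aqm_decide):
    pkt = aqm_decide(pkt)                   # AQM decision BEFORE enqueue
    if pkt is not None:                     # None stands for "dropped"
        queue.append(pkt)
```

In both arrangements the queue itself is a plain FIFO (or any of the schedulers of Section 2.2); only the point at which the mark/drop decision is consulted differs.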
Using them in series results in 511 four signals to the sender: 513 o Ack Clocking, pacing the sender to send at approximately the rate 514 it can deliver data to the receiver through a queue, 516 o Lossless signaling that a queue depth that corresponds to a 517 certain delay threshold has been reached, if ECN is in use, 519 o Intentional signaling via loss that a queue depth that corresponds 520 to a certain delay threshold has been reached, if ECN is not in 521 use, and 523 o Defensive loss, when a sender sends faster than available capacity 524 (such as by probing network capacity when fully utilizing that 525 capacity) and overburdens a queue. 527 4. Conclusion 529 To summarize, in Section 2, implementation approaches for several 530 classes of queuing algorithms were explored. Queuing algorithms 531 such as SFQ, Virtual Clock, and FlowQueue-Codel 532 [I-D.hoeiland-joergensen-aqm-fq-codel] have value in the network, in 533 that they delay packets to enforce a rate upper bound or to permit 534 competing flows to compete more effectively. ECN marking and loss 535 are also useful signals if used in a manner that enhances TCP/SCTP 536 operation or restrains unmanaged UDP data flows. 538 It is, however, incorrect to discuss a scheduler and a mark/drop 539 algorithm working together as a single algorithm, even if they are 540 coded that way and even if there might be optimizations that can be 541 done between the two. Conceptually, they operate in series, as 542 discussed in Section 3. The observed effects also differ; while 543 defensive loss protects the intermediate system and provides a 544 signal, AQM mark/drop works to reduce mean latency, and the 545 scheduling of flows works to modify flow interleave and 546 acknowledgement pacing. Certain features, like flow isolation, are 547 provided by fair-queuing-related designs, not by the effect of the 548 mark/drop algorithm. 550 5. IANA Considerations 552 This memo asks the IANA for no new parameters. 554 6.
Security Considerations 556 This memo adds no new security issues; it comments on implementation 557 strategies for Diffserv implementations. 559 7. Acknowledgements 561 This note grew out of, and is in response to, mailing list 562 discussions in AQM, in which some have promoted an algorithm that they 563 compare to AQM marking and dropping algorithms, but which includes 564 SFQ. The authors think highly of queuing algorithms that can ensure 565 certain behaviors, but in this context believe that coupling queuing 566 and marking or dropping is unwarranted and masks issues with the 567 mark/drop algorithm in question. 569 8. References 571 8.1. Normative References 573 [RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., 574 and W. Weiss, "An Architecture for Differentiated 575 Services", RFC 2475, December 1998. 577 8.2. Informative References 579 [DRR] Microsoft Corporation and Washington University in St. 580 Louis, "Efficient fair queueing using deficit round 581 robin", ACM SIGCOMM 1995, October 1995, 582 . 585 [GPS] Xerox PARC, University of California, Berkeley, and Xerox 586 PARC, "Analysis and simulation of a fair queueing 587 algorithm", ACM SIGCOMM 1989, September 1989, 588 . 591 [I-D.hoeiland-joergensen-aqm-fq-codel] 592 Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys, 593 J., and E. Dumazet, "FlowQueue-Codel", draft-hoeiland- 594 joergensen-aqm-fq-codel-00 (work in progress), March 2014. 596 [I-D.nichols-tsvwg-codel] 597 Nichols, K. and V. Jacobson, "Controlled Delay Active 598 Queue Management", draft-nichols-tsvwg-codel-01 (work in 599 progress), February 2013. 601 [I-D.pan-tsvwg-pie] 602 Pan, R., Natarajan, P., Piglione, C., and M. Prabhu, "PIE: 603 A Lightweight Control Scheme To Address the Bufferbloat 604 Problem", draft-pan-tsvwg-pie-00 (work in progress), 605 December 2012. 607 [PacketPair] 608 University of California Berkeley, "Congestion Control in 609 Computer Networks", UC Berkeley TR-654 1991, September 610 1991, .
613 [RFC0970] Nagle, J., "On packet switches with infinite storage", RFC 614 970, December 1985. 616 [RFC2990] Huston, G., "Next Steps for the IP QoS Architecture", RFC 617 2990, November 2000. 619 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition 620 of Explicit Congestion Notification (ECN) to IP", RFC 621 3168, September 2001. 623 [RFC6057] Bastian, C., Klieber, T., Livingood, J., Mills, J., and R. 624 Woundy, "Comcast's Protocol-Agnostic Congestion Management 625 System", RFC 6057, December 2010. 627 [RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P., 628 and K. Carlberg, "Explicit Congestion Notification (ECN) 629 for RTP over UDP", RFC 6679, August 2012. 631 [SFQ] SRI International, "Stochastic Fairness Queuing", IEEE 632 Infocom 1990, June 1990, . 635 [VirtualClock] 636 Xerox PARC, "Virtual Clock", ACM SIGCOMM 1990, September 637 1990, 638 . 640 Appendix A. Change Log 642 Initial Version: June 2014 644 Authors' Addresses 646 Fred Baker 647 Cisco Systems 648 Santa Barbara, California 93117 649 USA 651 Email: fred@cisco.com 653 Rong Pan 654 Cisco Systems 655 Milpitas, California 95035 656 USA 658 Email: ropan@cisco.com