Transport Area Working Group                                 B. Briscoe
Internet-Draft                                                       BT
Updates: 2309 (if approved)                                   J. Manner
Intended status: BCP                                   Aalto University
Expires: May 11, 2014                                 November 07, 2013

                Byte and Packet Congestion Notification
                  draft-ietf-tsvwg-byte-pkt-congest-12

Abstract

   This document provides recommendations of best current practice for dropping or marking packets using any active queue management (AQM) algorithm, including random early detection (RED), BLUE, pre-congestion notification (PCN) and newer schemes such as CoDel (Controlled Delay) and PIE (Proportional Integral controller Enhanced). We give three strong recommendations: (1) packet size should be taken into account when transports detect and respond to congestion indications, (2) packet size should not be taken into account when network equipment creates congestion signals (marking, dropping), and therefore (3) in the specific case of RED, the byte-mode packet drop variant that drops fewer small packets should not be used. This memo updates RFC 2309 to deprecate deliberate preferential treatment of small packets in AQM algorithms.
Status of This Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 11, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Terminology and Scoping
      1.2. Example Comparing Packet-Mode Drop and Byte-Mode Drop
   2. Recommendations
      2.1. Recommendation on Queue Measurement
      2.2. Recommendation on Encoding Congestion Notification
      2.3. Recommendation on Responding to Congestion
      2.4. Recommendation on Handling Congestion Indications when Splitting or Merging Packets
   3. Motivating Arguments
      3.1. Avoiding Perverse Incentives to (Ab)use Smaller Packets
      3.2. Small != Control
      3.3. Transport-Independent Network
      3.4. Partial Deployment of AQM
      3.5. Implementation Efficiency
   4. A Survey and Critique of Past Advice
      4.1. Congestion Measurement Advice
           4.1.1. Fixed Size Packet Buffers
           4.1.2. Congestion Measurement without a Queue
      4.2. Congestion Notification Advice
           4.2.1. Network Bias when Encoding
           4.2.2. Transport Bias when Decoding
           4.2.3. Making Transports Robust against Control Packet Losses
           4.2.4. Congestion Notification: Summary of Conflicting Advice
   5. Outstanding Issues and Next Steps
      5.1. Bit-congestible Network
      5.2. Bit- & Packet-congestible Network
   6. Security Considerations
   7. IANA Considerations
   8. Conclusions
   9. Acknowledgements
   10. Comments Solicited
   11. References
       11.1. Normative References
       11.2. Informative References
   Appendix A. Survey of RED Implementation Status
   Appendix B. Sufficiency of Packet-Mode Drop
      B.1. Packet-Size (In)Dependence in Transports
      B.2. Bit-Congestible and Packet-Congestible Indications
   Appendix C. Byte-mode Drop Complicates Policing Congestion Response
   Appendix D. Changes from Previous Versions

1. Introduction

   This document provides recommendations of best current practice for how we should correctly scale congestion control functions with respect to packet size for the long term. It also recognises that expediency may be necessary to deal with existing widely deployed protocols that don't live up to the long-term goal.

   When signalling congestion, the problem of how (and whether) to take packet sizes into account has exercised the minds of researchers and practitioners for as long as active queue management (AQM) has been discussed. Indeed, one reason AQM was originally introduced was to reduce the lock-out effects that small packets can have on large packets in drop-tail queues. This memo aims to state the principles we should be using and to outline how these principles will affect future protocol design, taking into account the existing deployments we have already.

   The question of whether to take packet size into account arises at three stages in the congestion notification process:

   Measuring congestion:  When a congested resource measures locally how congested it is, should it measure its queue length in time, bytes or packets?

   Encoding congestion notification into the wire protocol:  When a congested network resource signals its level of congestion, should it drop / mark each packet dependent on the size of the particular packet in question?

   Decoding congestion notification from the wire protocol:  When a transport interprets the notification in order to decide how much to respond to congestion, should it take into account the size of each missing or marked packet?

   Consensus has emerged over the years concerning the first stage, which Section 2.1 records in the RFC Series. In summary: if possible it is best to measure congestion by time in the queue, but otherwise the choice between bytes and packets solely depends on whether the resource is congested by bytes or packets.

   The controversy is mainly around the last two stages: whether to allow for the size of the specific packet notifying congestion i) when the network encodes or ii) when the transport decodes the congestion notification.

   Currently, the RFC series is silent on this matter other than a paper trail of advice referenced from [RFC2309], which conditionally recommends byte-mode (packet-size dependent) drop [pktByteEmail].
   Reducing drop of small packets certainly has some tempting advantages: i) it drops fewer control packets, which tend to be small, and ii) it makes TCP's bit-rate less dependent on packet size. However, there are ways of addressing these issues at the transport layer, rather than reverse engineering network forwarding to fix the problems.

   This memo updates [RFC2309] to deprecate deliberate preferential treatment of packets in AQM algorithms solely because of their size. It recommends that (1) packet size should be taken into account when transports detect and respond to congestion indications, and (2) packet size should not be taken into account when network equipment creates them. This memo also adds to the congestion control principles enumerated in BCP 41 [RFC2914].

   In the particular case of Random Early Detection (RED), this means that the byte-mode packet drop variant should not be used to drop fewer small packets, because that creates a perverse incentive for transports to use tiny segments, consequently also opening up a DoS vulnerability. Fortunately, none of the RED implementers who responded to our admittedly limited survey (Section 4.2.4) has followed the earlier advice to use byte-mode drop, so the position this memo argues for already seems to exist in implementations.

   However, at the transport layer, TCP congestion control is a widely deployed protocol that doesn't scale with packet size (i.e. its reduction in rate does not take into account the size of a lost packet). To date this hasn't been a significant problem because most TCP implementations have been used with similar packet sizes. But, as we design new congestion control mechanisms, this memo recommends that we should build in scaling with packet size rather than assuming we should follow TCP's example.

   This memo continues as follows. First it discusses terminology and scoping. Section 2 gives the concrete formal recommendations, followed by motivating arguments in Section 3. We then critically survey the advice given previously in the RFC series and the research literature (Section 4), referring to an assessment of whether or not this advice has been followed in production networks (Appendix A). To wrap up, outstanding issues are discussed that will need resolution both to inform future protocol designs and to handle legacy (Section 5). Then security issues are collected together in Section 6 before conclusions are drawn in Section 8. The interested reader can find discussion of more detailed issues on the theme of byte vs. packet in the appendices.

   This memo intentionally includes a non-negligible amount of material on the subject. For the busy reader, Section 2 summarises the recommendations for the Internet community.

1.1. Terminology and Scoping

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

   This memo applies to the design of all AQM algorithms, for example, Random Early Detection (RED) [RFC2309], BLUE [BLUE02], Pre-Congestion Notification (PCN) [RFC5670], Controlled Delay (CoDel) [I-D.nichols-tsvwg-codel] and the Proportional Integral controller Enhanced (PIE) [I-D.pan-tsvwg-pie]. Throughout, RED is used as a concrete example because it is a widely known and deployed AQM algorithm.
   There is no intention to imply that the advice is any less applicable to the other algorithms, nor that RED is preferred.

   Congestion Notification:  Congestion notification is a changing signal that aims to communicate the probability that the network resource(s) will not be able to forward the level of traffic load offered (or that there is an impending risk that they will not be able to).

      The `impending risk' qualifier is added because AQM systems set a virtual limit smaller than the actual limit to the resource, then notify when this virtual limit is exceeded in order to avoid uncontrolled congestion of the actual capacity.

      Congestion notification communicates a real number bounded by the range [ 0 , 1 ]. This ties in with the most well-understood measure of congestion notification: drop probability.

   Explicit and Implicit Notification:  The byte vs. packet dilemma concerns congestion notification irrespective of whether it is signalled implicitly by drop or using Explicit Congestion Notification (ECN [RFC3168] or PCN [RFC5670]). Throughout this document, unless clear from the context, the term marking will be used to mean notifying congestion explicitly, while congestion notification will be used to mean notifying congestion either implicitly by drop or explicitly by marking.

   Bit-congestible vs. Packet-congestible:  If the load on a resource depends on the rate at which packets arrive, it is called packet-congestible. If the load depends on the rate at which bits arrive, it is called bit-congestible.

      Examples of packet-congestible resources are route look-up engines and firewalls, because load depends on how many packet headers they have to process. Examples of bit-congestible resources are transmission links, radio power and most buffer memory, because the load depends on how many bits they have to transmit or store. Some machine architectures use fixed size packet buffers, so buffer memory in these cases is packet-congestible (see Section 4.1.1).

      The path through a machine will typically encounter both packet-congestible and bit-congestible resources. However, currently, a design goal of network processing equipment such as routers and firewalls is to size the packet-processing engine(s) relative to the lines in order to keep packet processing uncongested even under worst case packet rates with runs of minimum size packets. Therefore, packet congestion is currently rare [RFC6077; S.3.3], but there is no guarantee that it will not become more common in future.

      Note that information is generally processed or transmitted with a minimum granularity greater than a bit (e.g. octets). The appropriate granularity for the resource in question should be used, but for the sake of brevity we will talk in terms of bytes in this memo.

   Coarser Granularity:  Resources may be congestible at higher levels of granularity than bits or packets; for instance, stateful firewalls are flow-congestible and call-servers are session-congestible. This memo focuses on congestion of connectionless resources, but the same principles may be applicable for congestion notification protocols controlling per-flow and per-session processing or state.
   RED Terminology:  In RED, whether to use packets or bytes when measuring queues is called respectively "packet-mode queue measurement" or "byte-mode queue measurement". And whether the probability of dropping a particular packet is independent of or dependent on its size is called respectively "packet-mode drop" or "byte-mode drop". The terms byte-mode and packet-mode should not be used without specifying whether they apply to queue measurement or to drop.

1.2. Example Comparing Packet-Mode Drop and Byte-Mode Drop

   Taking RED as a well-known example algorithm, a central question addressed by this document is whether to recommend RED's packet-mode drop variant and to deprecate byte-mode drop. Table 1 compares how packet-mode and byte-mode drop affect two flows of different size packets. For each it gives the expected number of packets and of bits dropped in one second. Each example flow runs at the same bit-rate of 48Mb/s, but one is broken up into small 60 byte packets and the other into large 1500 byte packets.

   To keep up the same bit-rate, in one second there are about 25 times more small packets because they are 25 times smaller. As can be seen from the table, the packet rate is 100,000 small packets versus 4,000 large packets per second (pps).

   Parameter              Formula         Small packets  Large packets
   --------------------   --------------  -------------  -------------
   Packet size            s/8             60B            1,500B
   Packet size            s               480b           12,000b
   Bit-rate               x               48Mbps         48Mbps
   Packet-rate            u = x/s         100kpps        4kpps

   Packet-mode Drop
   Pkt loss probability   p               0.1%           0.1%
   Pkt loss-rate          p*u             100pps         4pps
   Bit loss-rate          p*u*s           48kbps         48kbps

   Byte-mode Drop         MTU, M=12,000b
   Pkt loss probability   b = p*s/M       0.004%         0.1%
   Pkt loss-rate          b*u             4pps           4pps
   Bit loss-rate          b*u*s           1.92kbps       48kbps

       Table 1: Example Comparing Packet-mode and Byte-mode Drop

   For packet-mode drop, we illustrate the effect of a drop probability of 0.1%, which the algorithm applies to all packets irrespective of size. Because there are 25 times more small packets in one second, it naturally drops 25 times more small packets, that is 100 small packets but only 4 large packets. But if we count how many bits it drops, there are 48,000 bits in 100 small packets and 48,000 bits in 4 large packets--the same number of bits of small packets as large.

   The packet-mode drop algorithm drops any bit with the same probability whether the bit is in a small or a large packet.

   For byte-mode drop, again we use an example drop probability of 0.1%, but only for maximum size packets (assuming the link maximum transmission unit (MTU) is 1,500B or 12,000b). The byte-mode algorithm reduces the drop probability of smaller packets proportional to their size, making the probability that it drops a small packet 25 times smaller at 0.004%. But there are 25 times more small packets, so dropping them with 25 times lower probability results in dropping the same number of packets: 4 drops in both cases. The 4 small dropped packets contain 25 times fewer bits than the 4 large dropped packets: 1,920 compared to 48,000.

   The byte-mode drop algorithm drops any bit with a probability proportionate to the size of the packet it is in.
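   The arithmetic behind Table 1 can be checked with a short script. The following Python sketch is purely illustrative (the variable names are ours, and it reproduces only the loss-rate rows of the table, not any real AQM implementation):

   <CODE BEGINS>
   # Reproduce the loss-rate rows of Table 1 for both drop variants.
   X = 48e6    # flow bit-rate, x (bps)
   M = 12000   # link MTU in bits (1,500B)
   P = 0.001   # drop probability applied to maximum-size packets (0.1%)

   for s in (480, 12000):   # packet size, s, in bits (60B and 1,500B)
       u = X / s            # packet rate, u (pps)

       # Packet-mode drop: probability independent of packet size.
       p = P
       print(f"packet-mode s={s:>5}b: {p*u:6.0f} pps, {p*u*s/1e3:5.2f} kbps lost")

       # Byte-mode drop: probability scaled down for smaller packets.
       b = P * s / M
       print(f"byte-mode   s={s:>5}b: {b*u:6.0f} pps, {b*u*s/1e3:5.2f} kbps lost")
   <CODE ENDS>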
2. Recommendations

   Sections 2.1 and 2.2 give recommendations related to network equipment, while Sections 2.3 and 2.4 discuss the implications for transport protocols.

2.1. Recommendation on Queue Measurement

   Ideally, an AQM would measure the service time of the queue to measure congestion of a resource. However, service time can only be measured as packets leave the queue, where it is not always expedient to implement a full AQM algorithm. To predict the service time as packets join the queue, an AQM algorithm needs to measure the length of the queue.

   In this case, if the resource is bit-congestible, the AQM implementation SHOULD measure the length of the queue in bytes and, if the resource is packet-congestible, the implementation SHOULD measure the length of the queue in packets. Subject to the exceptions below, no other choice makes sense, because the number of packets waiting in the queue isn't relevant if the resource gets congested by bytes and vice versa. For example, the length of the queue into a transmission line would be measured in bytes, while the length of the queue into a firewall would be measured in packets.

   To avoid the pathological effects of drop tail, the AQM can then transform this service time or queue length into the probability of dropping or marking a packet (e.g. RED's piecewise linear function between thresholds).

   What this advice means for RED as a specific example:

   1. A RED implementation SHOULD use byte-mode queue measurement for measuring the congestion of bit-congestible resources and packet-mode queue measurement for packet-congestible resources.

   2. An implementation SHOULD NOT make it possible to configure the way a queue measures itself, because whether a queue is bit-congestible or packet-congestible is an inherent property of the queue.

   Exceptions to these recommendations might be necessary, for instance where a packet-congestible resource has to be configured as a proxy bottleneck for a bit-congestible resource in an adjacent box that does not support AQM.

   The recommended approach in less straightforward scenarios, such as fixed size packet buffers, resources without a queue and buffers comprising a mix of packet and bit-congestible resources, is discussed in Section 4.1. For instance, Section 4.1.1 explains that the queue into a line should be measured in bytes even if the queue consists of fixed-size packet-buffers, because the root cause of any congestion is bytes arriving too fast for the line--packets filling buffers are merely a symptom of the underlying congestion of the line.
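   To illustrate the distinction, the following Python sketch measures a toy FIFO both ways and selects the metric appropriate to the resource it feeds. It is a minimal sketch under our own naming assumptions, not any vendor's API:

   <CODE BEGINS>
   from collections import deque

   class ToyQueue:
       """Toy FIFO illustrating the two queue-measurement modes."""

       def __init__(self, bit_congestible):
           # An inherent property of the queue, deliberately not an
           # operator-configurable knob (see recommendation 2 above).
           self.bit_congestible = bit_congestible
           self.pkts = deque()   # each entry is one packet's size in bytes

       def enqueue(self, size_bytes):
           self.pkts.append(size_bytes)

       def length(self):
           if self.bit_congestible:
               return sum(self.pkts)   # byte-mode measurement (e.g. a line)
           return len(self.pkts)       # packet-mode measurement (e.g. a firewall)

   line_q = ToyQueue(bit_congestible=True)
   fw_q = ToyQueue(bit_congestible=False)
   for size in (60, 1500, 1500):
       line_q.enqueue(size)
       fw_q.enqueue(size)
   print(line_q.length(), fw_q.length())   # 3060 (bytes) vs. 3 (packets)
   <CODE ENDS>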
2.2. Recommendation on Encoding Congestion Notification

   When encoding congestion notification (e.g. by drop, ECN or PCN), the probability that network equipment drops or marks a particular packet to notify congestion SHOULD NOT depend on the size of the packet in question. As the example in Section 1.2 illustrates, to drop any bit with probability 0.1% it is only necessary to drop every packet with probability 0.1% without regard to the size of each packet.

   This approach ensures the network layer offers sufficient congestion information for all known and future transport protocols and also ensures no perverse incentives are created that would encourage transports to use inappropriately small packet sizes.

   What this advice means for RED as a specific example:

   1. The RED AQM algorithm SHOULD NOT use byte-mode drop, i.e. it ought to use packet-mode drop. Byte-mode drop is more complex, it creates the perverse incentive to fragment segments into tiny pieces and it is vulnerable to floods of small packets.

   2. If a vendor has implemented byte-mode drop, and an operator has turned it on, it is RECOMMENDED to switch it to packet-mode drop, after establishing whether there are any implications on the relative performance of applications using different packet sizes. The unlikely possibility of some application-specific legacy use of byte-mode drop is the only reason that all the above recommendations on encoding congestion notification are not phrased more strongly.

      RED as a whole SHOULD NOT be switched off. Without RED, a drop-tail queue biases against large packets and is vulnerable to floods of small packets.

   Note well that RED's byte-mode drop is completely orthogonal to byte-mode queue measurement and should not be confused with it. If a RED implementation has a byte-mode but does not specify what sort of byte-mode, it is most probably byte-mode queue measurement, which is fine. However, if in doubt, the vendor should be consulted.

   A survey (Appendix A) showed that there appears to be little, if any, installed base of the byte-mode drop variant of RED. This suggests that deprecating byte-mode drop will have little, if any, incremental deployment impact.
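   The difference between the recommended and the deprecated behaviour comes down to a single scaling step in the drop decision. The Python sketch below is illustrative only; it assumes a base drop probability p has already been derived from the queue measurement (e.g. by RED's piecewise linear function):

   <CODE BEGINS>
   import random

   def should_drop(p, pkt_size, mtu, byte_mode=False):
       """Drop/mark decision for one packet, given base probability p.

       byte_mode=False (RECOMMENDED): every packet faces the same
       probability, regardless of its size.
       byte_mode=True (deprecated): the probability is scaled by packet
       size, biasing the queue in favour of small packets.
       """
       if byte_mode:
           p = p * pkt_size / mtu   # the deprecated byte-mode scaling
       return random.random() < p
   <CODE ENDS>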
2.3. Recommendation on Responding to Congestion

   When a transport detects that a packet has been lost or congestion marked, it SHOULD consider the strength of the congestion indication as proportionate to the size in octets (bytes) of the missing or marked packet.

   In other words, when a packet indicates congestion (by being lost or marked) it can be considered conceptually as if there is a congestion indication on every octet of the packet, not just one indication per packet.

   To be clear, the above recommendation solely describes how a transport should interpret the meaning of a congestion indication, as a long-term goal. It makes no recommendation on whether a transport should act differently based on this interpretation. It merely aids interoperability between transports, if they choose to make their actions depend on the strength of congestion indications.

   This definition will be useful as the IETF transport area continues its programme of:

   o  updating host-based congestion control protocols to take account of packet size

   o  making transports less sensitive to losing control packets like SYNs and pure ACKs.

   What this advice means for the case of TCP:

   1. If two TCP flows with different packet sizes are required to run at equal bit rates under the same path conditions, this SHOULD be done by altering TCP (Section 4.2.2), not network equipment (the latter affects other transports besides TCP).

   2. If it is desired to improve TCP performance by reducing the chance that a SYN or a pure ACK will be dropped, this SHOULD be done by modifying TCP (Section 4.2.3), not network equipment.

   To be clear, we are not recommending at all that TCPs under equivalent conditions should aim for equal bit-rates. We are merely saying that anyone trying to do such a thing should modify their TCP algorithm, not the network.

   These recommendations are phrased as 'SHOULD' rather than 'MUST', because there may be cases where expediency dictates that compatibility with pre-existing versions of a transport protocol makes the recommendations impractical.

2.4. Recommendation on Handling Congestion Indications when Splitting or Merging Packets

   Packets carrying congestion indications may be split or merged in some circumstances (e.g. at an RTP/RTCP transcoder or during IP fragment reassembly). Splitting and merging only make sense in the context of ECN, not loss.

   The general rule to follow is that the number of octets in packets with congestion indications SHOULD be equivalent before and after merging or splitting. This is based on the principle used above: that an indication of congestion on a packet can be considered as an indication of congestion on each octet of the packet.

   The above rule is not phrased with the word "MUST" to allow the following exception. There are cases where pre-existing protocols were not designed to conserve congestion-marked octets (e.g. IP fragment reassembly [RFC3168] or loss statistics in RTCP receiver reports [RFC3550] before ECN was added [RFC6679]). When any such protocol is updated, it SHOULD comply with the above rule to conserve marked octets. However, the rule may be relaxed if it would otherwise become too complex to interoperate with pre-existing implementations of the protocol.

   One can think of a splitting or merging process as if all the incoming congestion-marked octets increment a counter and all the outgoing marked octets decrement the same counter. In order to ensure that congestion indications remain timely, even the smallest positive remainder in the conceptual counter should trigger the next outgoing packet to be marked (causing the counter to go negative).
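   A minimal Python sketch of this conceptual counter follows. It is illustrative only (the class and method names are ours) and assumes the caller feeds it every incoming and outgoing packet of an ECN-capable split/merge process:

   <CODE BEGINS>
   class MarkConserver:
       """Conserve congestion-marked octets across packet split/merge."""

       def __init__(self):
           self.balance = 0   # marked octets owed to the outgoing stream

       def on_incoming(self, size_bytes, ce_marked):
           """Account for one incoming packet."""
           if ce_marked:
               self.balance += size_bytes

       def mark_outgoing(self, size_bytes):
           """Return True if this outgoing packet should carry a mark.

           Even the smallest positive remainder triggers a mark, so the
           counter may go negative, keeping indications timely.
           """
           if self.balance > 0:
               self.balance -= size_bytes
               return True
           return False
   <CODE ENDS>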
3. Motivating Arguments

   This section is informative. It justifies the recommendations given in the previous section.

3.1. Avoiding Perverse Incentives to (Ab)use Smaller Packets

   Increasingly, it is being recognised that a protocol design must take care not to cause unintended consequences by giving the parties in the protocol exchange perverse incentives [Evol_cc][RFC3426]. Given there are many good reasons why larger path maximum transmission units (PMTUs) would help solve a number of scaling issues, we do not want to create any bias against large packets that is greater than their true cost.

   Imagine a scenario where the same bit rate of packets will contribute the same to bit-congestion of a link irrespective of whether it is sent as fewer larger packets or more smaller packets. A protocol design that caused larger packets to be more likely to be dropped than smaller ones would be dangerous in both the following cases:

   Malicious transports:  A queue that gives an advantage to small packets can be used to amplify the force of a flooding attack. By sending a flood of small packets, the attacker can get the queue to discard more traffic in large packets, allowing more attack traffic to get through to cause further damage. Such a queue allows attack traffic to have a disproportionately large effect on regular traffic without the attacker having to do much work.

   Non-malicious transports:  Even if an application designer is not actually malicious, if over time it is noticed that small packets tend to go faster, designers will act in their own interest and use smaller packets. Queues that give advantage to small packets create an evolutionary pressure for applications or transports to send at the same bit-rate but break their data stream down into tiny segments to reduce their drop rate. Encouraging a high volume of tiny packets might in turn unnecessarily overload a completely unrelated part of the system, perhaps more limited by header-processing than bandwidth.

   Imagine two unresponsive flows arriving at a bit-congestible transmission link, each with the same bit rate, say 1Mbps, but one consisting of 1500B packets and the other of 60B packets, which are 25x smaller. Consider a scenario where gentle RED [gentle_RED] is used, along with the variant of RED we advise against, i.e. where the RED algorithm is configured to adjust the drop probability of packets in proportion to each packet's size (byte-mode packet drop). In this case, RED aims to drop 25x more of the larger packets than the smaller ones. Thus, for example, if RED drops 25% of the larger packets, it will aim to drop 1% of the smaller packets (but in practice it may drop more as congestion increases [RFC4828; Appx B.4]). Even though both flows arrive with the same bit rate, the bit rate the RED queue aims to pass to the line will be 750kbps for the flow of larger packets but 990kbps for the smaller packets (because of rate variations it will actually be a little less than this target).

   Note that, although the byte-mode drop variant of RED amplifies small packet attacks, drop-tail queues amplify small packet attacks even more (see Security Considerations in Section 6). Wherever possible neither should be used.

3.2. Small != Control

   Dropping fewer control packets considerably improves performance. It is therefore tempting to drop small packets with lower probability, because many control packets tend to be smaller (TCP SYNs & ACKs, DNS queries & responses, SIP messages, HTTP GETs, etc.). However, we must not give control packets preference purely by virtue of their smallness, otherwise it is too easy for any data source to get the same preferential treatment simply by sending data in smaller packets. Again, we should not create perverse incentives that favour small packets when what we intend is to favour control packets.

   Just because many control packets are small does not mean all small packets are control packets.

   So, rather than fix these problems in the network, we argue that the transport should be made more robust against losses of control packets (see 'Making Transports Robust against Control Packet Losses' in Section 4.2.3).

3.3. Transport-Independent Network

   TCP congestion control ensures that flows competing for the same resource each maintain the same number of segments in flight, irrespective of segment size. So under similar conditions, flows with different segment sizes will get different bit-rates.

   To counter this effect it seems tempting not to follow our recommendation, and instead for the network to bias congestion notification by packet size in order to equalise the bit-rates of flows with different packet sizes.
   However, in order to do this, the queuing algorithm has to make assumptions about the transport, which become embedded in the network. Specifically:

   o  The queuing algorithm has to assume how aggressively the transport will respond to congestion (see Section 4.2.4). If the network assumes the transport responds as aggressively as TCP NewReno, it will be wrong for Compound TCP and differently wrong for Cubic TCP, etc. To achieve equal bit-rates, each transport then has to guess what assumption the network made, and work out how to replace this assumed aggressiveness with its own aggressiveness.

   o  Also, if the network biases congestion notification by packet size it has to assume a baseline packet size--all proposed algorithms use the local MTU (for example see the byte-mode loss probability formula in Table 1). Then if the non-Reno transports mentioned above are trying to reverse engineer what the network assumed, they also have to guess the MTU of the congested link.

   Even though reducing the drop probability of small packets (e.g. RED's byte-mode drop) helps ensure TCP flows with different packet sizes will achieve similar bit rates, we argue this correction should be made to any future transport protocols based on TCP, not to the network in order to fix one transport, no matter how predominant it is. Effectively, favouring small packets is reverse engineering of network equipment around one particular transport protocol (TCP), contrary to the excellent advice in [RFC3426], which asks designers to question "Why are you proposing a solution at this layer of the protocol stack, rather than at another layer?"

   In contrast, if the network never takes account of packet size, the transport can be certain it will never need to guess any assumptions the network has made. And the network passes two pieces of information to the transport that are sufficient in all cases: i) congestion notification on the packet and ii) the size of the packet. Both are available for the transport to combine (by taking account of packet size when responding to congestion) or not. Appendix B checks that these two pieces of information are sufficient for all relevant scenarios.

   When the network does not take account of packet size, it allows transport protocols to choose whether to take account of packet size or not. However, if the network were to bias congestion notification by packet size, transport protocols would have no choice; those that did not take account of packet size themselves would unwittingly become dependent on packet size, and those that already took account of packet size would end up taking account of it twice.

3.4. Partial Deployment of AQM

   In overview, the argument in this section runs as follows:

   o  Because the network does not and cannot always drop packets in proportion to their size, it shouldn't be given the task of making drop signals depend on packet size at all.

   o  Transports on the other hand don't always want to make their rate response proportional to the size of dropped packets, but if they want to, they always can.

   The argument is similar to the end-to-end argument that says "Don't do X in the network if end-systems can do X by themselves, and they want to be able to choose whether to do X anyway."
   Actually the following argument is stronger; in addition it says "Don't give the network task X that could be done by the end-systems, if X is not deployed on all network nodes, and end-systems won't be able to tell whether their network is doing X, or whether they need to do X themselves." In this case, the X in question is "making the response to congestion depend on packet size".

   We will now re-run this argument taking each step in more depth. The argument applies solely to drop, not to ECN marking.

   A queue drops packets for either of two reasons: a) to signal to host congestion controls that they should reduce the load and b) because there is no buffer left to store the packets. Active queue management tries to use drops as a signal for hosts to slow down (case a) so that drop due to buffer exhaustion (case b) should not be necessary.

   AQM is not universally deployed in every queue in the Internet; many cheap Ethernet bridges, software firewalls, NATs on consumer devices, etc. implement simple tail-drop buffers. Even if AQM were universal, it has to be able to cope with buffer exhaustion (by switching to a behaviour like tail-drop), in order to cope with unresponsive or excessive transports. For these reasons networks will sometimes be dropping packets as a last resort (case b) rather than under AQM control (case a).

   When buffers are exhausted (case b), they don't naturally drop packets in proportion to their size. The network can only reduce the probability of dropping smaller packets if it has enough space to store them somewhere while it waits for a larger packet that it can drop. If the buffer is exhausted, it does not have this choice. Admittedly tail-drop does naturally drop somewhat fewer small packets, but exactly how few depends more on the mix of sizes than the size of the packet in question. Nonetheless, in general, if we wanted networks to do size-dependent drop, we would need universal deployment of (packet-size dependent) AQM code, which is currently unrealistic.

   A host transport cannot know whether any particular drop was a deliberate signal from an AQM or a sign of a queue shedding packets due to buffer exhaustion. Therefore, because the network cannot universally do size-dependent drop, it should not do it at all.

   Whereas universality is desirable in the network, diversity is desirable between different transport layer protocols - some, like NewReno TCP [RFC5681], may not choose to make their rate response proportionate to the size of each dropped packet, while others will (e.g. TFRC-SP [RFC4828]).

3.5. Implementation Efficiency

   Biasing against large packets typically requires an extra multiply and divide in the network (see the example byte-mode drop formula in Table 1). Allowing for packet size at the transport rather than in the network ensures that neither the network nor the transport needs to do a multiply operation--multiplication by packet size is effectively achieved as a repeated add when the transport adds to its count of marked bytes as each congestion event is fed to it. Also the work to do the biasing is spread over many hosts, rather than concentrated in just the congested network element. These aren't principled reasons in themselves, but they are a happy consequence of the other principled reasons.
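   For instance, a transport accumulating congestion indications in this way never needs a multiply in its fast path. The Python sketch below is a minimal illustration under our own naming assumptions (it is not taken from any deployed transport):

   <CODE BEGINS>
   class ByteCounter:
       """Transport-side count of congestion-indicated octets."""

       def __init__(self):
           self.congested_bytes = 0

       def on_congestion_indication(self, pkt_size_bytes):
           """Called once per lost or ECN-marked packet.

           Multiplication by packet size is achieved as a repeated add:
           each indication simply adds the packet's size to the running
           count, which the congestion response can then use.
           """
           self.congested_bytes += pkt_size_bytes
   <CODE ENDS>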
4. A Survey and Critique of Past Advice

   This section is informative, not normative.

   The original 1993 paper on RED [RED93] proposed two options for the RED active queue management algorithm: packet mode and byte mode. Packet mode measured the queue length in packets and dropped (or marked) individual packets with a probability independent of their size. Byte mode measured the queue length in bytes and marked an individual packet with probability in proportion to its size (relative to the maximum packet size). The paper's outline of further work stated that no recommendation had been made on whether the queue size should be measured in bytes or packets, but noted that the difference could be significant.

   When RED was recommended for general deployment in 1998 [RFC2309], the two modes were mentioned, implying that the choice between them was a question of performance, referring to a 1997 email [pktByteEmail] for advice on tuning. A later addendum to this email introduced the insight that there are in fact two orthogonal choices:

   o  whether to measure queue length in bytes or packets (Section 4.1)

   o  whether the drop probability of an individual packet should depend on its own size (Section 4.2).

   The rest of this section is structured accordingly.

4.1. Congestion Measurement Advice

   The choice of which metric to use to measure queue length was left open in RFC2309. It is now well understood that queues for bit-congestible resources should be measured in bytes, and queues for packet-congestible resources should be measured in packets [pktByteEmail].

   Congestion in some legacy bit-congestible buffers is only measured in packets, not bytes. In such cases, the operator has to set the thresholds mindful of a typical mix of packet sizes. Any AQM algorithm on such a buffer will be oversensitive to high proportions of small packets, e.g. a DoS attack, and under-sensitive to high proportions of large packets. However, there is no need to make allowances for the possibility of such legacy in future protocol design. This is safe because any under-sensitivity during unusual traffic mixes cannot lead to congestion collapse given the buffer will eventually revert to tail drop, discarding proportionately more large packets.

4.1.1. Fixed Size Packet Buffers

   The question of whether to measure queues in bytes or packets seems to be well understood. However, measuring congestion is confusing when the resource is bit-congestible but the queue into the resource is packet-congestible. This section outlines the approach to take.

   Some, mostly older, queuing hardware allocates fixed sized buffers in which to store each packet in the queue. This hardware forwards to the line in one of two ways:

   o  With some hardware, any fixed sized buffers not completely filled by a packet are padded when transmitted to the wire. This case should clearly be treated as packet-congestible, because both queuing and transmission are in fixed MTU-sized units. Therefore the queue length in packets is a good model of congestion of the link.

   o  More commonly, hardware with fixed size packet buffers transmits packets to line without padding. This implies a hybrid forwarding system with transmission congestion dependent on the size of packets but queue congestion dependent on the number of packets, irrespective of their size.
      Nonetheless, there would be no queue at all unless the line had become congested--the root cause of any congestion is too many bytes arriving for the line. Therefore, the AQM should measure the queue length as the sum of all the packet sizes in bytes that are queued up waiting to be serviced by the line, irrespective of whether each packet is held in a fixed size buffer.

   In the (unlikely) first case, where use of padding means the queue should be measured in packets, further confusion is likely because the fixed buffers are rarely all one size. Typically pools of different sized buffers are provided (Cisco uses the term 'buffer carving' for the process of dividing up memory into these pools [IOSArch]). Usually, if the pool of small buffers is exhausted, arriving small packets can borrow space in the pool of large buffers, but not vice versa. However, there is no need to consider all this complexity, because the root cause of any congestion is still line overload--buffer consumption is only the symptom. Therefore, the length of the queue should be measured as the sum of the bytes in the queue that will be transmitted to line, including any padding. In the (unusual) case of transmission with padding this means the sum of the sizes of the small buffers queued plus the sum of the sizes of the large buffers queued.

   We will return to borrowing of fixed sized buffers when we discuss biasing the drop/marking probability of a specific packet because of its size in Section 4.2.1. But here we can repeat the simple rule for how to measure the length of queues of fixed buffers: no matter how complicated the buffering scheme is, ultimately a transmission line is nearly always bit-congestible, so the number of bytes queued up waiting for the line measures how congested the line is, and it is rarely important to measure how congested the buffering system is.

4.1.2. Congestion Measurement without a Queue

   AQM algorithms are nearly always described assuming there is a queue for a congested resource and the algorithm can use the queue length to determine the probability that it will drop or mark each packet. But not all congested resources lead to queues. For instance, power limited resources are usually bit-congestible if energy is primarily required for transmission rather than header processing, but it is rare for a link protocol to build a queue as it approaches maximum power.

   Nonetheless, AQM algorithms do not require a queue in order to work. For instance, spectrum congestion can be modelled by signal quality using the target bit-energy-to-noise-density ratio. And, to model radio power exhaustion, transmission power levels can be measured and compared to the maximum power available. [ECNFixedWireless] proposes a practical and theoretically sound way to combine congestion notification for different bit-congestible resources at different layers along an end to end path, whether wireless or wired, and whether with or without queues.

   In wireless protocols that use request to send / clear to send (RTS / CTS) control, such as some variants of IEEE802.11, it is reasonable to base an AQM on the time spent waiting for transmission opportunities (TXOPs) even though wireless spectrum is usually regarded as congested by bits (for a given coding scheme).
   This is because requests for TXOPs queue up as the spectrum gets congested by all the bits being transferred. So the time that TXOPs are queued directly reflects bit congestion of the spectrum.

4.2. Congestion Notification Advice

4.2.1. Network Bias when Encoding

4.2.1.1. Advice on Packet Size Bias in RED

   The previously mentioned email [pktByteEmail] referred to by [RFC2309] advised that most scarce resources in the Internet were bit-congestible, which is still believed to be true (Section 1.1). But it went on to offer advice that is updated by this memo. It said that drop probability should depend on the size of the packet being considered for drop if the resource is bit-congestible, but not if it is packet-congestible. The argument continued that if packet drops were inflated by packet size (byte-mode dropping), "a flow's fraction of the packet drops is then a good indication of that flow's fraction of the link bandwidth in bits per second". This was consistent with a referenced policing mechanism being worked on at the time for detecting unusually high bandwidth flows, eventually published in 1999 [pBox]. However, the problem could and should have been solved by making the policing mechanism count the volume of bytes randomly dropped, not the number of packets.

   A few months before RFC2309 was published, an addendum was added to the above archived email referenced from the RFC, in which the final paragraph seemed to partially retract what had previously been said. It clarified that the question of whether the probability of dropping/marking a packet should depend on its size was not related to whether the resource itself was bit-congestible, but was a completely orthogonal question. However, the only example given had the queue measured in packets while packet drop depended on the size of the packet in question. No example was given the other way round.

   In 2000, Cnodder et al [REDbyte] pointed out that there was an error in the part of the original 1993 RED algorithm that aimed to distribute drops uniformly, because it didn't correctly take into account the adjustment for packet size. They recommended an algorithm called RED_4 to fix this. But they also recommended a further change, RED_5, to adjust the drop rate dependent on the square of relative packet size. This was indeed consistent with one implied motivation behind RED's byte-mode drop--that we should reverse engineer the network to improve the performance of dominant end-to-end congestion control mechanisms. This memo makes a different recommendation in Section 2.

   By 2003, a further change had been made to the adjustment for packet size, this time in the RED algorithm of the ns2 simulator. Instead of taking each packet's size relative to a `maximum packet size', it was taken relative to a `mean packet size', intended to be a static value representative of the `typical' packet size on the link. We have not been able to find a justification in the literature for this change; however, Eddy and Allman conducted experiments [REDbias] that assessed how sensitive RED was to this parameter, amongst other things. This changed algorithm can often lead to drop probabilities of greater than 1 (which gives a hint that there is probably a mistake in the theory somewhere).
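   The problem is easy to demonstrate: under the ns2 variant the scaled probability p * s/mean_s exceeds 1 whenever s/mean_s > 1/p. A tiny Python illustration (the numbers are purely illustrative, not taken from any cited experiment):

   <CODE BEGINS>
   p = 0.05        # base drop probability from the RED curve
   mean_s = 500    # configured `mean packet size' in bytes

   for s in (60, 500, 1500, 12000):   # 12,000B: e.g. a jumbo frame
       print(s, p * s / mean_s)       # 0.006, 0.05, 0.15, 1.2
                                      # the last is a "probability" > 1
   <CODE ENDS>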
   On 10-Nov-2004, this variant of byte-mode packet drop was made the default in the ns2 simulator. It seems unlikely that byte-mode drop has ever been implemented in production networks (Appendix A); therefore ns2 simulations that use RED without disabling byte-mode drop are likely to behave very differently from RED in production networks.

4.2.1.2. Packet Size Bias Regardless of AQM

   The byte-mode drop variant of RED (or a similar variant of other AQM algorithms) is not the only possible bias towards small packets in queueing systems. We have already mentioned that tail-drop queues naturally tend to lock out large packets once they are full.

   But also queues with fixed sized buffers reduce the probability that small packets will be dropped if (and only if) they allow small packets to borrow buffers from the pools for larger packets (see Section 4.1.1). Borrowing effectively makes the maximum queue size for small packets greater than that for large packets, because more buffers can be used by small packets while fewer will fit large packets. Incidentally, the bias towards small packets from buffer borrowing is nothing like as large as that of RED's byte-mode drop.

   Nonetheless, fixed-buffer memory with tail drop is still prone to locking out large packets, purely because of the tail-drop aspect. So, fixed size packet-buffers should be augmented with a good AQM algorithm and packet-mode drop. If an AQM is too complicated to implement with multiple fixed buffer pools, the minimum necessary to prevent large packet lock-out is to ensure smaller packets never use the last available buffer in any of the pools for larger packets.

4.2.2. Transport Bias when Decoding

   The above proposals to alter the network equipment to bias towards smaller packets have largely carried on outside the IETF process. Within the IETF, by contrast, there are many different proposals to alter transport protocols to achieve the same goals, i.e. either to make the flow bit-rate take account of packet size, or to protect control packets from loss. This memo argues that altering transport protocols is the more principled approach.

   A recently approved experimental RFC adapts its transport layer protocol to take account of packet sizes relative to typical TCP packet sizes. This proposes a new small-packet variant of TCP-friendly rate control [RFC5348] called TFRC-SP [RFC4828]. Essentially, it proposes a rate equation that inflates the flow rate by the ratio of a typical TCP segment size (1500B including TCP header) over the actual segment size [PktSizeEquCC]. (There are also other important differences of detail relative to TFRC, such as using virtual packets [CCvarPktSize] to avoid responding to multiple losses per round trip and using a minimum inter-packet interval.)

   Section 4.5.1 of this TFRC-SP spec discusses the implications of operating in an environment where queues have been configured to drop smaller packets with proportionately lower probability than larger ones. But it only discusses TCP operating in such an environment, only mentioning TFRC-SP briefly when discussing how to define fairness with TCP. And it only discusses the byte-mode dropping version of RED as it was before Cnodder et al pointed out it didn't sufficiently bias towards small packets to make TCP independent of packet size.
   So the TFRC-SP spec doesn't address the issue of which of the network or the transport _should_ handle fairness between different packet sizes. In its Appendix B.4 it discusses the possibility of both TFRC-SP and some network buffers duplicating each other's attempts to deliberately bias towards small packets. But the discussion is not conclusive, instead reporting simulations of many of the possibilities in order to assess performance but not recommending any particular course of action.

   The paper originally proposing TFRC with virtual packets (VP-TFRC) [CCvarPktSize] proposed that there should perhaps be two variants to cater for the different variants of RED. However, as the TFRC-SP authors point out, there is no way for a transport to know whether some queues on its path have deployed RED with byte-mode packet drop (except if an exhaustive survey found that no-one has deployed it!--see Appendix A). Incidentally, VP-TFRC also proposed that byte-mode RED dropping should really square the packet-size compensation-factor (like that of Cnodder's RED_5, but apparently unaware of it).

   Pre-congestion notification [RFC5670] is an IETF technology that uses a virtual queue for AQM marking for packets within one Diffserv class in order to give early warning prior to any real queuing. The PCN marking algorithms have been designed not to take account of packet size when forwarding through queues. Instead the general principle has been to take account of the sizes of marked packets when monitoring the fraction of marking at the edge of the network, as recommended here.

4.2.3. Making Transports Robust against Control Packet Losses

   Recently, two RFCs have defined changes to TCP that make it more robust against losing small control packets [RFC5562] [RFC5690]. In both cases they note that the case for these two TCP changes would be weaker if RED were biased against dropping small packets. We argue here that these two proposals are a safer and more principled way to achieve TCP performance improvements than reverse engineering RED to benefit TCP.

   Although there are no known proposals, it would also be possible and perfectly valid to make control packets robust against drop by requesting a scheduling class with lower drop probability, e.g. by re-marking to a Diffserv code point [RFC2474] within the same behaviour aggregate.

   Although not brought to the IETF, a simple proposal from Wischik [DupTCP] suggests that the first three packets of every TCP flow should be routinely duplicated after a short delay. It shows that this would greatly improve the chances of short flows completing quickly, but that it would hardly increase traffic levels on the Internet, because Internet bytes have always been concentrated in the large flows. It further shows that the performance of many typical applications depends on completion of long serial chains of short messages. It argues that, given most of the value people get from the Internet is concentrated within short flows, this simple expedient would greatly increase the value of the best efforts Internet at minimal cost. A similar but more extensive approach has been evaluated on Google servers [GentleAggro].
The proposals discussed in this sub-section are experimental
approaches that are not yet in wide operational use, but they are
existence proofs that transports can make themselves robust against
loss of control packets.  The examples are all TCP-based, but
applications over non-TCP transports could mitigate loss of control
packets by making similar use of Diffserv, data duplication, FEC etc.

4.2.4.  Congestion Notification: Summary of Conflicting Advice

   +-----------+----------------+-----------------+--------------------+
   | transport | RED_1 (packet  | RED_4 (linear   | RED_5 (square byte |
   | cc        | mode drop)     | byte mode drop) | mode drop)         |
   +-----------+----------------+-----------------+--------------------+
   | TCP or    | s/sqrt(p)      | sqrt(s/p)       | 1/sqrt(p)          |
   | TFRC      |                |                 |                    |
   | TFRC-SP   | 1/sqrt(p)      | 1/sqrt(sp)      | 1/(s.sqrt(p))      |
   +-----------+----------------+-----------------+--------------------+

   Table 2: Dependence of flow bit-rate per RTT on packet size, s, and
   drop probability, p, when network and/or transport bias towards
   small packets to varying degrees

Table 2 aims to summarise the potential effects of all the advice
from different sources.  Each column shows a different possible AQM
behaviour in different queues in the network, using the terminology
of Cnodder et al. outlined earlier (RED_1 is basic RED with
packet-mode drop).  Each row shows a different transport behaviour:
TCP [RFC5681] and TFRC [RFC5348] on the top row, with TFRC-SP
[RFC4828] below.  Each cell shows how the bits per round trip of a
flow depend on packet size, s, and drop probability, p.  In order to
declutter the formulae and focus on packet-size dependence, they are
all given per round trip, which removes any RTT term.

Let us assume that the goal is for the bit-rate of a flow to be
independent of packet size.  Suppressing all inessential details, the
table shows that this should be achievable either by not altering the
TCP transport in a RED_5 network, or by using the small-packet
TFRC-SP transport (or similar) in a network without any byte-mode
dropping RED (top right and bottom left).  Top left is the `do
nothing' scenario, while bottom right is the `do-both' scenario, in
which bit-rate would become far too biased towards small packets.
(A short sketch evaluating these cells numerically appears at the end
of this subsection.)  Of course, if any form of byte-mode dropping
RED has been deployed on a subset of queues that congest, each path
through the network will present a different hybrid scenario to its
transport.

Either way, we can see that the linear byte-mode drop column in the
middle would considerably complicate the Internet.  It is a half-way
house that doesn't bias enough towards small packets, even if one
believes the network should be doing the biasing.  Section 2
recommends that _all_ bias in network equipment towards small packets
should be turned off--if indeed any equipment vendors have
implemented it--leaving packet-size bias solely as the preserve of
the transport layer (i.e. the leftmost, packet-mode drop column).

In practice it seems that no deliberate bias towards small packets
has been implemented for production networks.  Of the 19% of vendors
who responded to a survey of 84 equipment vendors, none had
implemented byte-mode drop in RED (see Appendix A for details).
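The cells of Table 2 can be checked numerically.  The sketch below is
illustrative only: packet size s is normalised so that a full-sized
packet has s = 1, constant factors are dropped, and each RED
variant's size bias is applied to the drop probability before each
transport's response is applied:

   from math import sqrt

   def drop_prob(variant, p, s):
       # s normalised so that a full-sized packet has s = 1
       return {"RED_1": p,              # packet-mode: no size bias
               "RED_4": p * s,          # linear byte-mode drop
               "RED_5": p * s**2}[variant]  # square byte-mode drop

   def bits_per_rtt(transport, variant, p, s):
       q = drop_prob(variant, p, s)
       if transport == "TCP/TFRC":      # bit-rate per RTT ~ s/sqrt(q)
           return s / sqrt(q)
       else:                            # TFRC-SP: inflated by 1/s
           return (s / sqrt(q)) / s

   for transport in ("TCP/TFRC", "TFRC-SP"):
       for variant in ("RED_1", "RED_4", "RED_5"):
           small = bits_per_rtt(transport, variant, p=0.001, s=0.04)
           large = bits_per_rtt(transport, variant, p=0.001, s=1.0)
           print(transport, variant, f"small/large = {small/large:.2f}")

Only the top-right and bottom-left combinations print a ratio of
1.00 (packet-size independence); the `do-both' bottom-right
combination over-compensates by a factor of 25 for these 60B and
1500B example sizes.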
5.  Outstanding Issues and Next Steps

5.1.  Bit-congestible Network

For a connectionless network with nearly all resources being
bit-congestible, the recommended position is clear--the network
should not make allowance for packet sizes and the transport should.
This leaves two outstanding issues:

o  How to handle any legacy of AQM with byte-mode drop already
   deployed;

o  The need to start a programme to update transport congestion
   control protocol standards to take account of packet size.

A survey of equipment vendors (Section 4.2.4) found no evidence that
byte-mode packet drop had been implemented, so deployment will be
sparse at best.  A migration strategy is not really needed to remove
an algorithm that may not even be deployed.

A programme of experimental updates to take account of packet size in
transport congestion control protocols has already started with
TFRC-SP [RFC4828].

5.2.  Bit- & Packet-congestible Network

The position is much less clear-cut if the Internet becomes populated
by a more even mix of both packet-congestible and bit-congestible
resources (see Appendix B.2).  This problem is not pressing, because
most Internet resources are designed to be bit-congestible before
packet processing starts to congest (see Section 1.1).

The IRTF Internet Congestion Control Research Group (ICCRG) has set
itself the task of reaching consensus on generic forwarding
mechanisms that are necessary and sufficient to support the
Internet's future congestion control requirements (the first
challenge in [RFC6077]).  The research question of whether packet
congestion might become common, and what to do if it does, may in the
future be explored in the IRTF (see "Challenge 3: Packet Size" in
[RFC6077]).

Note that sometimes it seems that resources might be congested by
neither bits nor packets, e.g. where the queue for access to a
wireless medium is in units of transmission opportunities.  However,
the root cause of congestion of the underlying spectrum is overload
of bits (see Section 4.1.2).

6.  Security Considerations

This memo recommends that queues do not bias drop probability due to
packet size.  For instance, dropping small packets less often than
large ones creates a perverse incentive for transports to break down
their flows into tiny segments.  One of the benefits of implementing
AQM was meant to be to remove this perverse incentive that drop-tail
queues gave to small packets.

In practice, transports cannot all be trusted to respond to
congestion.  So another reason for recommending that queues do not
bias drop probability towards small packets is to avoid the
vulnerability to small-packet DDoS attacks that would otherwise
result.  One of the benefits of implementing AQM was meant to be to
remove drop-tail's DoS vulnerability to small packets, so we
shouldn't add it back again.

If most queues implemented AQM with byte-mode drop, the resulting
network would amplify the potency of a small-packet DDoS attack.  At
the first queue the stream of packets would push aside a greater
proportion of large packets, so more of the small packets would
survive to attack the next queue.  Thus a flood of small packets
would continue on towards the destination, pushing regular traffic
with large packets out of the way in one queue after the next, but
suffering much less drop itself.
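A back-of-envelope calculation shows the scale of this
amplification.  The numbers below are illustrative only, assuming a
linear byte-mode bias that scales drop probability by s/MTU and
independent drop decisions at each of ten congested queues on a
path:

   p, mtu, hops = 0.01, 1500, 10
   for s, label in ((60, "attack flood of 60B packets "),
                    (1500, "regular 1500B packets       ")):
       # probability of surviving every byte-mode queue on the path
       survival = (1 - p * s / mtu) ** hops
       print(f"{label}{survival:.1%} delivered")

   # attack flood of 60B packets  99.6% delivered
   # regular 1500B packets        90.4% delivered

With packet-mode drop, both traffic types would see the same 90.4%
delivery rate, removing the attacker's advantage.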
Appendix C explains why the ability of networks to police the
response of _any_ transport to congestion depends on bit-congestible
network resources only doing packet-mode drop, not byte-mode drop.
In summary, it says that making drop probability depend on the size
of the packets that bits happen to be divided into simply encourages
the bits to be divided into smaller packets.  Byte-mode drop would
therefore irreversibly complicate any attempt to fix the Internet's
incentive structures.

7.  IANA Considerations

This document has no actions for IANA.

8.  Conclusions

This memo identifies the three distinct stages of the congestion
notification process where implementations need to decide whether to
take packet size into account.  The recommendations provided in
Section 2 of this memo are different in each case:

o  When network equipment measures the length of a queue, if it is
   not feasible to use time, it is recommended to count in bytes if
   the network resource is congested by bytes, or to count in packets
   if it is congested by packets.

o  When network equipment decides whether to drop (or mark) a packet,
   it is recommended that the size of the particular packet should
   not be taken into account.

o  However, when a transport algorithm responds to a dropped or
   marked packet, the size of the rate reduction should be
   proportionate to the size of the packet.

In summary, the answers are 'it depends', 'no' and 'yes'
respectively (illustrated in the sketch at the end of this section).

For the specific case of RED, this means that byte-mode queue
measurement will often be appropriate but the use of byte-mode drop
is very strongly discouraged.

At the transport layer the IETF should continue updating congestion
control protocols to take account of the size of each packet that
indicates congestion.  Also the IETF should continue to make
protocols less sensitive to losing control packets like SYNs, pure
ACKs and DNS exchanges.  Although many control packets happen to be
small, the alternative of network equipment favouring all small
packets would be dangerous.  That would create perverse incentives to
split data transfers into smaller packets.

The memo develops these recommendations from principled arguments
concerning scaling, layering, incentives, inherent efficiency,
security and policeability.  But it also addresses practical issues
such as specific buffer architectures and incremental deployment.
Indeed, a limited survey of RED implementations is discussed, which
shows there appears to be little, if any, installed base of RED's
byte-mode drop.  Therefore it can be deprecated with few, if any,
incremental deployment complications.

The recommendations have been developed on the well-founded basis
that most Internet resources are bit-congestible, not
packet-congestible.  We need to know the likelihood that this
assumption will prevail longer term and, if it might not, what
protocol changes will be needed to cater for a mix of the two.  The
IRTF Internet Congestion Control Research Group (ICCRG) is currently
working on these problems [RFC6077].
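The division of labour between the three stages can be sketched as
follows.  This is a hypothetical schematic for a bit-congestible
resource only, with invented names; real AQM and congestion control
algorithms are considerably more involved:

   import random

   def queue_length_bytes(queue_pkt_sizes):
       # Stage 1 ('it depends'): a bit-congestible resource measures
       # its queue in bytes (or, better, in units of time)
       return sum(queue_pkt_sizes)

   def should_drop(p, pkt_size):
       # Stage 2 ('no'): the drop/mark decision deliberately ignores
       # pkt_size; every packet sees the same probability p
       return random.random() < p

   def rate_reduction(base_reduction, pkt_size, mtu=1500):
       # Stage 3 ('yes'): the transport scales its response to each
       # congestion indication by the size of the lost/marked packet
       return base_reduction * pkt_size / mtu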
9.  Acknowledgements

Thank you to Sally Floyd, who gave extensive and useful review
comments.  Also thanks for the reviews from Philip Eardley, David
Black, Fred Baker, David Taht, Toby Moncaster, Arnaud Jacquet and
Mirja Kuehlewind, as well as helpful explanations of different
hardware approaches from Larry Dunn and Fred Baker.  We are grateful
to Bruce Davie and his colleagues for providing a timely and
efficient survey of RED implementation in Cisco's product range.
Also grateful thanks to Toby Moncaster, Will Dormann, John Regnault,
Simon Carter and Stefaan De Cnodder who further helped survey the
current status of RED implementation and deployment and, finally,
thanks to the anonymous individuals who responded.

Bob Briscoe and Jukka Manner were partly funded by Trilogy, a
research project (ICT-216372) supported by the European Community
under its Seventh Framework Programme.  The views expressed here are
those of the authors only.

10.  Comments Solicited

Comments and questions are encouraged and very welcome.  They can be
addressed to the IETF Transport Area working group mailing list,
and/or to the authors.

11.  References

11.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC3168]  Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
              of Explicit Congestion Notification (ECN) to IP",
              RFC 3168, September 2001.

11.2.  Informative References

   [BLUE02]   Feng, W-c., Shin, K., Kandlur, D., and D. Saha, "The
              BLUE active queue management algorithms", IEEE/ACM
              Transactions on Networking 10(4) 513--528, August 2002.

   [CCvarPktSize]
              Widmer, J., Boutremans, C., and J-Y. Le Boudec,
              "Congestion Control for Flows with Variable Packet
              Size", ACM CCR 34(2) 137--151, 2004.

   [CHOKe_Var_Pkt]
              Psounis, K., Pan, R., and B. Prabhaker, "Approximate
              Fair Dropping for Variable Length Packets", IEEE Micro
              21(1):48--56, January-February 2001.

   [DRQ]      Shin, M., Chong, S., and I. Rhee, "Dual-Resource
              TCP/AQM for Processing-Constrained Networks", IEEE/ACM
              Transactions on Networking Vol 16, issue 2, April 2008.

   [DupTCP]   Wischik, D., "Short messages", Philosophical
              Transactions of the Royal Society A
              366(1872):1941-1953, June 2008.

   [ECNFixedWireless]
              Siris, V., "Resource Control for Elastic Traffic in
              CDMA Networks", Proc. ACM MOBICOM'02, September 2002.

   [Evol_cc]  Gibbens, R. and F. Kelly, "Resource pricing and the
              evolution of congestion control", Automatica
              35(12) 1969--1985, December 1999.

   [GentleAggro]
              Flach, T., Dukkipati, N., Terzis, A., Raghavan, B.,
              Cardwell, N., Cheng, Y., Jain, A., Hao, S.,
              Katz-Bassett, E., and R. Govindan, "Reducing Web
              Latency: the Virtue of Gentle Aggression", ACM SIGCOMM
              CCR 43(4) 159--170, August 2013.

   [I-D.nichols-tsvwg-codel]
              Nichols, K. and V. Jacobson, "Controlled Delay Active
              Queue Management", draft-nichols-tsvwg-codel-01 (work
              in progress), February 2013.

   [I-D.pan-tsvwg-pie]
              Pan, R., Natarajan, P., Piglione, C., and M. Prabhu,
              "PIE: A Lightweight Control Scheme To Address the
              Bufferbloat Problem", draft-pan-tsvwg-pie-00 (work in
              progress), December 2012.
   [IOSArch]  Bollapragada, V., White, R., and C. Murphy, "Inside
              Cisco IOS Software Architecture", Cisco Press: CCIE
              Professional Development, ISBN13: 978-1-57870-181-0,
              July 2000.

   [PktSizeEquCC]
              Vasallo, P., "Variable Packet Size Equation-Based
              Congestion Control", ICSI Technical Report tr-00-008,
              2000.

   [RED93]    Floyd, S. and V. Jacobson, "Random Early Detection
              (RED) gateways for Congestion Avoidance", IEEE/ACM
              Transactions on Networking 1(4) 397--413, August 1993.

   [REDbias]  Eddy, W. and M. Allman, "A Comparison of RED's Byte and
              Packet Modes", Computer Networks 42(3) 261--280,
              June 2003.

   [REDbyte]  De Cnodder, S., Elloumi, O., and K. Pauwels, "RED
              behavior with different packet sizes", Proc. 5th IEEE
              Symposium on Computers and Communications (ISCC)
              793--799, July 2000.

   [RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B.,
              Deering, S., Estrin, D., Floyd, S., Jacobson, V.,
              Minshall, G., Partridge, C., Peterson, L.,
              Ramakrishnan, K., Shenker, S., Wroclawski, J., and L.
              Zhang, "Recommendations on Queue Management and
              Congestion Avoidance in the Internet", RFC 2309,
              April 1998.

   [RFC2474]  Nichols, K., Blake, S., Baker, F., and D. Black,
              "Definition of the Differentiated Services Field (DS
              Field) in the IPv4 and IPv6 Headers", RFC 2474,
              December 1998.

   [RFC2914]  Floyd, S., "Congestion Control Principles", BCP 41,
              RFC 2914, September 2000.

   [RFC3426]  Floyd, S., "General Architectural and Policy
              Considerations", RFC 3426, November 2002.

   [RFC3550]  Schulzrinne, H., Casner, S., Frederick, R., and V.
              Jacobson, "RTP: A Transport Protocol for Real-Time
              Applications", STD 64, RFC 3550, July 2003.

   [RFC3714]  Floyd, S. and J. Kempf, "IAB Concerns Regarding
              Congestion Control for Voice Traffic in the Internet",
              RFC 3714, March 2004.

   [RFC4828]  Floyd, S. and E. Kohler, "TCP Friendly Rate Control
              (TFRC): The Small-Packet (SP) Variant", RFC 4828,
              April 2007.

   [RFC5348]  Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP
              Friendly Rate Control (TFRC): Protocol Specification",
              RFC 5348, September 2008.

   [RFC5562]  Kuzmanovic, A., Mondal, A., Floyd, S., and K.
              Ramakrishnan, "Adding Explicit Congestion Notification
              (ECN) Capability to TCP's SYN/ACK Packets", RFC 5562,
              June 2009.

   [RFC5670]  Eardley, P., "Metering and Marking Behaviour of
              PCN-Nodes", RFC 5670, November 2009.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, September 2009.

   [RFC5690]  Floyd, S., Arcia, A., Ros, D., and J. Iyengar, "Adding
              Acknowledgement Congestion Control to TCP", RFC 5690,
              February 2010.

   [RFC6077]  Papadimitriou, D., Welzl, M., Scharf, M., and B.
              Briscoe, "Open Research Issues in Internet Congestion
              Control", RFC 6077, February 2011.

   [RFC6679]  Westerlund, M., Johansson, I., Perkins, C., O'Hanlon,
              P., and K. Carlberg, "Explicit Congestion Notification
              (ECN) for RTP over UDP", RFC 6679, August 2012.

   [RFC6789]  Briscoe, B., Woundy, R., and A. Cooper, "Congestion
              Exposure (ConEx) Concepts and Use Cases", RFC 6789,
              December 2012.

   [Rate_fair_Dis]
              Briscoe, B., "Flow Rate Fairness: Dismantling a
              Religion", ACM CCR 37(2) 63--74, April 2007.
   [gentle_RED]
              Floyd, S., "Recommendation on using the "gentle_"
              variant of RED", Web page, March 2000.

   [pBox]     Floyd, S. and K. Fall, "Promoting the Use of End-to-End
              Congestion Control in the Internet", IEEE/ACM
              Transactions on Networking 7(4) 458--472, August 1999.

   [pktByteEmail]
              Floyd, S., "RED: Discussions of Byte and Packet Modes",
              email, March 1997.

Appendix A.  Survey of RED Implementation Status

This Appendix is informative, not normative.

In May 2007 a survey was conducted of 84 vendors to assess how widely
drop probability based on packet size has been implemented in RED
(Table 3).  About 19% of those surveyed replied, giving a sample size
of 16.  Although in most cases we do not have permission to identify
the respondents, we can say that those that responded include most of
the larger equipment vendors, covering a large fraction of the
market.  The two who gave permission to be identified were Cisco and
Alcatel-Lucent.  The others range across the large network equipment
vendors at L3 & L2, firewall vendors and wireless equipment vendors,
as well as large software businesses with a small selection of
networking products.  All those who responded confirmed that they
have not implemented the variant of RED with drop dependent on packet
size (2 were fairly sure they had not but needed to check more
thoroughly).  At the time the survey was conducted, Linux did not
implement RED with packet-size bias of drop, although we have not
investigated a wider range of open source code.

   +-------------------------------+----------------+-----------------+
   | Response                      | No. of vendors | %age of vendors |
   +-------------------------------+----------------+-----------------+
   | Not implemented               |       14       |       17%       |
   | Not implemented (probably)    |        2       |        2%       |
   | Implemented                   |        0       |        0%       |
   | No response                   |       68       |       81%       |
   | Total companies/orgs surveyed |       84       |      100%       |
   +-------------------------------+----------------+-----------------+

   Table 3: Vendor Survey on byte-mode drop variant of RED (lower drop
   probability for small packets)

Where reasons were given, the extra complexity of packet-bias code
was the most prevalent, though one vendor had a more principled
reason for avoiding it--similar to the argument of this document.

Our survey was of vendor implementations, so we cannot be certain
about operator deployment.  But we believe many queues in the
Internet are still tail-drop.  The company of one of the co-authors
(BT) has widely deployed RED, but many tail-drop queues are bound to
still exist, particularly in access network equipment and on
middleboxes like firewalls, where RED is not always available.

Routers using a memory architecture based on fixed-size buffers with
borrowing may also still be prevalent in the Internet.  As explained
in Section 4.2.1, these also provide a marginal (but legitimate) bias
towards small packets.  So even though RED byte-mode drop is not
prevalent, it is likely there is still some bias towards small
packets in the Internet due to tail drop and fixed buffer borrowing.

Appendix B.  Sufficiency of Packet-Mode Drop

This Appendix is informative, not normative.

Here we check that packet-mode drop (or marking) in the network gives
sufficiently generic information for the transport layer to use.
We check against a 2x2 matrix of four scenarios that may occur now or
in the future (Table 4).  The horizontal and vertical dimensions have
been chosen because each tests extremes of sensitivity to packet size
in the transport and in the network respectively.

Note that this section does not consider byte-mode drop at all.
Having deprecated byte-mode drop, the goal here is to check that
packet-mode drop will be sufficient in all cases.

   +-------------------------------+-----------------+-----------------+
   | Transport                     | a) Independent  | b) Dependent on |
   |                               | of packet size  | packet size of  |
   | Network                       | of congestion   | congestion      |
   |                               | notifications   | notifications   |
   +-------------------------------+-----------------+-----------------+
   | 1) Predominantly              | Scenario a1)    | Scenario b1)    |
   | bit-congestible network       |                 |                 |
   | 2) Mix of bit-congestible and | Scenario a2)    | Scenario b2)    |
   | pkt-congestible network       |                 |                 |
   +-------------------------------+-----------------+-----------------+

   Table 4: Four Possible Congestion Scenarios

Appendix B.1 focuses on the horizontal dimension of Table 4, checking
that packet-mode drop (or marking) gives sufficient information,
whether or not the transport uses it--scenarios b) and a)
respectively.

Appendix B.2 focuses on the vertical dimension of Table 4, checking
that packet-mode drop gives sufficient information to the transport
whether resources in the network are bit-congestible or
packet-congestible (these terms are defined in Section 1.1).

   Notation: To be concrete, we will compare two flows with different
   packet sizes, s_1 and s_2.  As an example, we will take s_1 = 60B
   = 480b and s_2 = 1500B = 12,000b.

   A flow's bit rate, x [bps], is related to its packet rate, u
   [pps], by

      x(t) = s.u(t).

   In the bit-congestible case, path congestion will be denoted by
   p_b, and in the packet-congestible case by p_p.  When either case
   is implied, the letter p alone will denote path congestion.

B.1.  Packet-Size (In)Dependence in Transports

In all cases we consider a packet-mode drop queue that indicates
congestion by dropping (or marking) packets with probability p
irrespective of packet size.  We use an example value of loss
(marking) probability, p = 0.1%.

A transport like RFC 5681 TCP treats a congestion notification on any
packet, whatever its size, as one event.  However, a network with
just the packet-mode drop algorithm does give more information if the
transport chooses to use it.  We will use Table 5 to illustrate this.

We will set aside the last column until later.  The columns labelled
"Flow 1" and "Flow 2" compare two flows consisting of 60B and 1500B
packets respectively.  The body of the table considers two separate
cases, one where the flows have equal bit-rates and the other where
they have equal packet-rates.  In both cases, the two flows fill a
96Mbps link.  Therefore, in the equal bit-rate case they each have
half the bit-rate (48Mbps).  Whereas, with equal packet-rates, flow 1
uses 25 times smaller packets so it gets 25 times less bit-rate--it
only gets 1/(1+25) of the link capacity (96Mbps/26 = 4Mbps after
rounding).  In contrast, flow 2 gets 25 times more bit-rate (92Mbps)
in the equal packet-rate case, because its packets are 25 times
larger.
The packet rate shown for each flow could easily be derived once the
bit-rate was known, by dividing bit-rate by packet size, as shown in
the column labelled "Formula".

   Parameter               Formula      Flow 1   Flow 2   Combined
   ----------------------- -----------  -------  -------  --------
   Packet size             s/8          60B      1,500B   (Mix)
   Packet size             s            480b     12,000b  (Mix)
   Pkt loss probability    p            0.1%     0.1%     0.1%

   EQUAL BIT-RATE CASE
   Bit-rate                x            48Mbps   48Mbps   96Mbps
   Packet-rate             u = x/s      100kpps  4kpps    104kpps
   Absolute pkt-loss-rate  p*u          100pps   4pps     104pps
   Absolute bit-loss-rate  p*u*s        48kbps   48kbps   96kbps
   Ratio of lost/sent pkts p*u/u        0.1%     0.1%     0.1%
   Ratio of lost/sent bits p*u*s/(u*s)  0.1%     0.1%     0.1%

   EQUAL PACKET-RATE CASE
   Bit-rate                x            4Mbps    92Mbps   96Mbps
   Packet-rate             u = x/s      8kpps    8kpps    15kpps
   Absolute pkt-loss-rate  p*u          8pps     8pps     15pps
   Absolute bit-loss-rate  p*u*s        4kbps    92kbps   96kbps
   Ratio of lost/sent pkts p*u/u        0.1%     0.1%     0.1%
   Ratio of lost/sent bits p*u*s/(u*s)  0.1%     0.1%     0.1%

   Table 5: Absolute Loss Rates and Loss Ratios for Flows of Small and
   Large Packets and Both Combined

So far we have merely set up the scenarios.  We now consider
congestion notification in the scenario.  Two TCP flows with the same
round trip time aim to equalise their packet-loss-rates over time,
that is, the number of packets lost in a second, which is the packets
per second (u) multiplied by the probability that each one is dropped
(p).  Thus TCP converges on the "Equal packet-rate" case, where both
flows aim for the same "Absolute pkt-loss-rate" (both 8pps in the
table).

Packet-mode drop actually gives flows sufficient information to
measure their loss-rate in bits per second, if they choose, not just
packets per second.  Each flow can count the size of a lost or marked
packet and scale its rate-response in proportion (as TFRC-SP does).
The result is shown in the row entitled "Absolute bit-loss-rate",
where the number of bits lost in a second is the packets per second
(u) multiplied by the probability of losing a packet (p) multiplied
by the packet size (s).  Such an algorithm would try to remove any
imbalance in bit-loss-rate, such as the wide disparity in the "Equal
packet-rate" case (4kbps vs. 92kbps).  Instead, a packet-size-
dependent algorithm would aim for equal bit-loss-rates, which would
drive both flows towards the "Equal bit-rate" case (both 48kbps in
this example).

The explanation so far has assumed that each flow consists of packets
of only one constant size.  Nonetheless, it extends naturally to
flows with mixed packet sizes.  In the right-most column of Table 5 a
flow of mixed-size packets is created simply by considering flow 1
and flow 2 as a single aggregated flow.  There is no need for a flow
to maintain an average packet size.  It is only necessary for the
transport to scale its response to each congestion indication by the
size of each individual lost (or marked) packet.  Taking, for
example, the "Equal packet-rate" case, in one second about 8 small
packets and 8 large packets are lost (making closer to 15 than 16
losses per second due to rounding).  If the transport multiplies each
loss by its size, in one second it responds to about 8*480b and
8*12,000b lost bits, adding up to 96,000 lost bits in a second.  This
double-checks correctly, being the same as 0.1% of the total bit-rate
of 96Mbps.  For completeness, the formula for absolute bit-loss-rate
is p(u1*s1 + u2*s2).
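The "Equal packet-rate" rows of Table 5 can be reproduced with a few
lines of arithmetic.  The sketch below uses the exact (unrounded)
packet rate; the printed values round to those shown in the table:

   link = 96e6              # 96Mbps link, as in the scenario above
   p = 0.001                # packet loss probability of 0.1%
   s1, s2 = 480, 12_000     # packet sizes in bits (60B and 1500B)
   u = link / (s1 + s2)     # equal packet rate that fills the link
   for name, s in (("Flow 1", s1), ("Flow 2", s2)):
       print(name,
             f"bit-rate {u * s / 1e6:.1f}Mbps,",
             f"pkt-loss-rate {p * u:.1f}pps,",
             f"bit-loss-rate {p * u * s / 1e3:.1f}kbps")

   # Flow 1 bit-rate 3.7Mbps, pkt-loss-rate 7.7pps,
   #        bit-loss-rate 3.7kbps
   # Flow 2 bit-rate 92.3Mbps, pkt-loss-rate 7.7pps,
   #        bit-loss-rate 92.3kbps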
Incidentally, a transport will always measure the same loss
probability irrespective of whether it measures in packets or in
bytes.  In other words, the ratio of lost to sent packets will be the
same as the ratio of lost to sent bytes.  (This is why TCP's bit rate
is still proportional to packet size even when byte-counting is used,
as recommended for TCP in [RFC5681], mainly for orthogonal security
reasons.)  This is intuitively obvious by comparing two example
flows; one with 60B packets, the other with 1500B packets.  If both
flows pass through a queue with drop probability 0.1%, each flow will
lose 1 in 1,000 packets.  In the stream of 60B packets the ratio of
bytes lost to sent will be 60B in every 60,000B; and in the stream of
1500B packets, the loss ratio will be 1,500B out of 1,500,000B.  When
the transport responds to the ratio of lost to sent packets, it will
measure the same ratio whether it measures in packets or bytes: 0.1%
in both cases.  The fact that this ratio is the same whether measured
in packets or bytes can be seen in Table 5, where the ratio of lost
to sent packets and the ratio of lost to sent bytes is always 0.1% in
all cases (recall that the scenario was set up with p=0.1%).

This discussion of how the ratio can be measured in packets or bytes
is only raised here to highlight that it is irrelevant to this memo!
Whether a transport depends on packet size or not depends on how this
ratio is used within the congestion control algorithm.

So far we have shown that packet-mode drop passes sufficient
information to the transport layer so that the transport can take
account of bit-congestion, by using the sizes of the packets that
indicate congestion.  We have also shown that the transport can
choose not to take packet size into account if it wishes.  We will
now consider whether the transport can know which to do.

B.2.  Bit-Congestible and Packet-Congestible Indications

As a thought-experiment, imagine an idealised congestion notification
protocol that supports both bit-congestible and packet-congestible
resources.  It would require at least two ECN flags, one for each of
bit-congestible and packet-congestible resources:

1.  A packet-congestible resource trying to code congestion level p_p
    into a packet stream should mark the idealised `packet
    congestion' field in each packet with probability p_p
    irrespective of the packet's size.  The transport should then
    take a packet with the packet congestion field marked to mean
    just one mark, irrespective of the packet size.

2.  A bit-congestible resource trying to code time-varying byte-
    congestion level p_b into a packet stream should mark the `byte
    congestion' field in each packet with probability p_b, again
    irrespective of the packet's size.  Unlike before, the transport
    should take a packet with the byte congestion field marked to
    count as a mark on each byte in the packet.

This hides a fundamental problem--much more fundamental than whether
we can magically create header space for yet another ECN flag, or
whether it would work while being deployed incrementally.
Distinguishing drop from delivery naturally provides just one
implicit bit of congestion indication information--the packet is
either dropped or not.  It is hard to drop a packet in two ways that
are distinguishable remotely.  This is a similar problem to that of
distinguishing wireless transmission losses from congestive losses.

This problem would not be solved even if ECN were universally
deployed.  A congestion notification protocol must survive a
transition from low levels of congestion to high.  Marking two states
is feasible with explicit marking, but much harder if packets are
dropped.  Also, it will not always be cost-effective to implement AQM
at every low-level resource, so drop will often have to suffice.

We are not saying two ECN fields will be needed (and we are not
saying that somehow a resource should be able to drop a packet in one
of two different ways so that the transport can distinguish which
sort of drop it was!).  These two congestion notification channels
are a conceptual device to illustrate a dilemma we could face in the
future.  Section 3 gives four good reasons why it would be a bad idea
to allow for packet size by biasing drop probability in favour of
small packets within the network.  The impracticality of our thought
experiment shows that it will be hard to give transports a practical
way to know whether to take account of the size of congestion
indication packets or not.

Fortunately, this dilemma is not pressing, because by design most
equipment becomes bit-congested before its packet processing becomes
congested (as already outlined in Section 1.1).  Therefore transports
can be designed on the relatively sound assumption that a congestion
indication will usually imply bit-congestion.

Nonetheless, although the above idealised protocol isn't intended for
implementation, we do want to emphasise that research is needed to
predict whether there are good reasons to believe that packet
congestion might become more common, and if so, to find a way to
somehow distinguish between bit and packet congestion [RFC3714].

Recently, the dual-resource queue (DRQ) proposal [DRQ] has been made
on the premise that, as network processors become more
cost-effective, per-packet operations will become more complex
(irrespective of whether more function in the network is desirable).
Consequently, CPU congestion may become more common.  DRQ is a
proposed modification to the RED algorithm that folds both bit
congestion and packet congestion into one signal (either loss or
ECN).

Finally, we note one further complication.  Strictly, packet-
congestible resources are often cycle-congestible.  For instance, for
routing look-ups, load depends on the complexity of each look-up and
whether the pattern of arrivals is amenable to caching or not.  This
also reminds us that any solution must not require a forwarding
engine to use excessive processor cycles in order to decide how to
say it has no spare processor cycles.

Appendix C.  Byte-mode Drop Complicates Policing Congestion Response

This section is informative, not normative.

There are two main classes of approach to policing congestion
response: i) policing at each bottleneck link, or ii) policing at the
edges of networks.
Packet-mode drop in RED is compatible with either, while byte-mode
drop precludes edge policing.

The simplicity of an edge policer relies on one dropped or marked
packet being equivalent to another of the same size, without having
to know at which link the drop or mark occurred.  However, the
byte-mode drop algorithm has to depend on the local MTU of the
line--it needs to use some concept of a 'normal' packet size.
Therefore, one dropped or marked packet from a byte-mode drop
algorithm is not necessarily equivalent to another from a different
link.  A policing function local to the link can know the local MTU
where the congestion occurred.  However, a policer at the edge of the
network cannot, at least not without a lot of complexity.

The early research proposals for type (i) policing at a bottleneck
link [pBox] used byte-mode drop, then detected flows that contributed
disproportionately to the number of packets dropped.  However, with
no extra complexity, later proposals used packet-mode drop and looked
for flows that contributed a disproportionate amount of dropped bytes
[CHOKe_Var_Pkt].

Work is progressing on the congestion exposure protocol (ConEx
[RFC6789]), which enables a type (ii) edge policer located at a
user's attachment point.  The idea is to be able to take an
integrated view of the effect of all a user's traffic on any link in
the internetwork.  However, byte-mode drop would effectively preclude
such edge policing because of the MTU issue above.

Indeed, making drop probability depend on the size of the packets
that bits happen to be divided into would simply encourage the bits
to be divided into smaller packets in order to confuse policing.  In
contrast, as long as a dropped/marked packet is taken to mean that
all the bytes in the packet are dropped/marked, a policer can remain
robust against bits being re-divided into different size packets or
across different size flows [Rate_fair_Dis].
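For instance, an edge policer built on packet-mode drop only needs to
accumulate the bytes of the packets it sees marked or dropped.  The
fragment below is a hypothetical illustration of this byte-counting
principle, not part of the ConEx protocol specification:

   def congestion_bytes(events):
       # events: (packet_size_in_bytes, marked) pairs observed at
       # the user's attachment point; a marked packet counts for
       # all the bytes it carries
       return sum(size for size, marked in events if marked)

   # With 0.1% packet-mode marking, 1,000 x 1500B packets and
   # 25,000 x 60B packets both register about 1,500 congested
   # bytes, so re-dividing bits into smaller packets gains the
   # user nothing.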
Appendix D.  Changes from Previous Versions

To be removed by the RFC Editor on publication.

Full incremental diffs between each version are available (courtesy
of the rfcdiff tool).

From -11 to -12: Following the second pass through the IESG:

   *  Section 2.1 [Barry Leiba]:

      +  s/No other choice makes sense,/Subject to the exceptions
         below, no other choice makes sense,/

      +  s/Exceptions to these recommendations MAY be necessary
         /Exceptions to these recommendations may be necessary /

   *  Sections 3.2 and 4.2.3 [Joel Jaeggli]:

      +  Added comment to section 4.2.3 that the examples given are
         not in widespread production use, but they give evidence
         that it is possible to follow the advice given.

      +  Section 4.2.3:

         -  OLD: Although there are no known proposals, it would also
            be possible and perfectly valid to make control packets
            robust against drop by explicitly requesting a lower drop
            probability using their Diffserv code point [RFC2474] to
            request a scheduling class with lower drop.
            NEW: Although there are no known proposals, it would also
            be possible and perfectly valid to make control packets
            robust against drop by requesting a scheduling class with
            lower drop probability, by re-marking to a Diffserv code
            point [RFC2474] within the same behaviour aggregate.

         -  appended "Similarly applications, over non-TCP transports
            could make any packets that are effectively control
            packets more robust by using Diffserv, data duplication,
            FEC etc."

      +  Updated Wischik ref and added "Reducing Web Latency: the
         Virtue of Gentle Aggression" ref.

   *  Expanded more abbreviations (CoDel, PIE, MTU).

   *  Section 1. Intro [Stephen Farrell]:

      +  In the places where the doc describes the dichotomy between
         'long-term goal' and 'expediency', the words long term goal
         and expedient have been introduced, to more explicitly refer
         back to this introductory para (S.2.1 & S.2.3).

      +  Added explanation of what scaling with packet size means.

   *  Conclusions [Benoit Claise]:

      +  OLD: For the specific case of RED, this means that byte-mode
         queue measurement will often be appropriate although byte-
         mode drop is strongly deprecated.
         NEW: For the specific case of RED, this means that byte-mode
         queue measurement will often be appropriate but the use of
         byte-mode drop is very strongly discouraged.

From -10 to -11: Following a further WGLC:

   *  Abstract: clarified that advice applies to all AQMs including
      newer ones.

   *  Abstract & Intro: changed 'read' to 'detect', because you don't
      read losses, you detect them.

   *  S.1. Introduction: Disambiguated summary of advice on queue
      measurement.

   *  Clarified that the doc deprecates any preference based solely
      on packet size, it's not only against preferring smaller
      packets.

   *  S.4.1.2. Congestion Measurement without a Queue: Explained that
      a queue of TXOPs represents a queue into spectrum congested by
      too many bits.

   *  S.5.2. Bit- & Packet-congestible Network: Referred to
      explanation in S.4.1.2 to make the point that TXOPs are not a
      primary unit of workload like bits and packets are, even though
      you get queues of TXOPs.

   *  S.6. Security: Disambiguated 'bias towards'.

   *  S.8. Conclusions: Made consistent with recommendation to use
      time if possible for queue measurement.

From -09 to -10: Following IESG review:

   *  Updates 2309: Left header unchanged reflecting eventual IESG
      consensus [Sean Turner, Pete Resnick].

   *  S.1 Intro: This memo adds to the congestion control principles
      enumerated in BCP 41 [Pete Resnick].

   *  Abstract, S.1, S.1.1, S.1.2 Intro, Scoping and Example: Made
      applicability to all AQMs clearer, listing some more example
      AQMs, and explained that we always use RED for examples, but
      this doesn't mean it's not applicable to other AQMs.  [A number
      of reviewers have described the draft as "about RED"]

   *  S.1 & S.2.1 Queue measurement: Explained that the choice
      between measuring the queue in packets or bytes is only
      relevant if measuring it in time units is infeasible [So as not
      to imply that we haven't noticed the advances made by PDPC &
      CoDel]

   *  S.1.1. Terminology: Better explained why hybrid systems
      congested by both packets and bytes are often designed to be
      treated as bit-congestible [Richard Barnes].

   *  S.2.1. Queue measurement advice: Added examples.  Added a
      counter-example to justify SHOULDs rather than MUSTs.  Pointed
      to S.4.1 for a list of more complicated scenarios.  [Benson
      Schliesser, OpsDir]
   *  S.2.2. Recommendation on Encoding Congestion Notification:
      Removed SHOULD treat packets equally, leaving only SHOULD NOT
      drop dependent on packet size, to avoid it sounding like we're
      saying QoS is not allowed.  Pointed to possible app-specific
      legacy use of byte-mode as a counter-example that prevents us
      saying MUST NOT.  [Pete Resnick]

   *  S.2.3. Recommendation on Responding to Congestion: capitalised
      the two SHOULDs in recommendations for TCP, and gave possible
      counter-examples.  [noticed while dealing with Pete Resnick's
      point]

   *  S.2.4. Splitting & Merging: RTCP -> RTP/RTCP [Pete McCann,
      Gen-ART]

   *  S.3.2 Small != Control: many control packets are small ->
      ...tend to be small [Stephen Farrell]

   *  S.3.1 Perverse incentives: Changed transport designers to app
      developers [Stephen Farrell]

   *  S.4.1.1. Fixed Size Packet Buffers: Nearly completely
      re-written to simplify and to reverse the advice when the
      underlying resource is bit-congestible, irrespective of whether
      the buffer consists of fixed-size packet buffers.  [Richard
      Barnes & Benson Schliesser]

   *  S.4.2.1.2. Packet Size Bias Regardless of AQM: Largely
      re-written to reflect the earlier change in advice about
      fixed-size packet buffers, and to primarily focus on getting
      rid of tail-drop, not various nuances of tail-drop.  [Richard
      Barnes & Benson Schliesser]

   *  Editorial corrections [Tim Bray, AppsDir, Pete McCann, Gen-ART
      and others]

   *  Updated refs (two I-Ds have become RFCs).  [Pete McCann]

From -08 to -09: Following WG last call:

   *  S.2.1: Made RED-related queue measurement recommendations
      clearer.

   *  S.2.3: Added to "Recommendation on Responding to Congestion" to
      make it clear that we are definitely not saying transports have
      to equalise bit-rates, just how to do it and not do it, if you
      want to.

   *  S.3: Clarified motivation sections S.3.3 "Transport-Independent
      Network" and S.3.5 "Implementation Efficiency".

   *  S.3.4: Completely changed motivating argument from "Scaling
      Congestion Control with Packet Size" to "Partial Deployment of
      AQM".

From -07 to -08:

   *  Altered abstract to say it provides best current practice and
      highlight that it updates RFC 2309.

   *  Added null IANA section.

   *  Updated refs.

From -06 to -07:

   *  A mix-up with the corollaries and their naming in 2.1 to 2.3
      fixed.

From -05 to -06:

   *  Primarily editorial fixes.

From -04 to -05:

   *  Changed from Informational to BCP and highlighted non-normative
      sections and appendices.

   *  Removed language about consensus.

   *  Added "Example Comparing Packet-Mode Drop and Byte-Mode Drop".

   *  Arranged "Motivating Arguments" into a more logical order and
      completely rewrote "Transport-Independent Network" & "Scaling
      Congestion Control with Packet Size" arguments.  Removed "Why
      Now?"

   *  Clarified applicability of certain recommendations.

   *  Shifted vendor survey to an Appendix.

   *  Cut down "Outstanding Issues and Next Steps".

   *  Re-drafted the start of the conclusions to highlight the three
      distinct areas of concern.

   *  Completely re-wrote appendices.

   *  Editorial corrections throughout.

From -03 to -04:

   *  Reordered Sections 2 and 3, and some clarifications here and
      there based on feedback from Colin Perkins and Mirja
      Kuehlewind.
From -02 to -03 (this version):

   *  Structural changes:

      +  Split off text at end of "Scaling Congestion Control with
         Packet Size" into new section "Transport-Independent
         Network".

      +  Shifted "Recommendations" straight after "Motivating
         Arguments" and added "Conclusions" at end to reinforce
         Recommendations.

      +  Added more internal structure to Recommendations, so that
         recommendations specific to RED or to TCP are just
         corollaries of a more general recommendation, rather than
         being listed as a separate recommendation.

      +  Renamed "State of the Art" as "Critical Survey of Existing
         Advice" and retitled a number of subsections with more
         descriptive titles.

      +  Split end of "Congestion Coding: Summary of Status" into a
         new subsection called "RED Implementation Status".

      +  Removed text that had been in the Appendix "Congestion
         Notification Definition: Further Justification".

   *  Reordered the intro text a little.

   *  Made it clearer when advice being reported is deprecated and
      when it is not.

   *  Described AQM as in network equipment, rather than saying "at
      the network layer" (to side-step controversy over whether
      functions like AQM are in the transport layer but in network
      equipment).

   *  Minor improvements to clarity throughout.

From -01 to -02:

   *  Restructured the whole document for (hopefully) easier reading
      and clarity.  The concrete recommendation, in RFC 2119
      language, is now in Section 8.

From -00 to -01:

   *  Minor clarifications throughout and updated references.

From briscoe-byte-pkt-mark-02 to ietf-byte-pkt-congest-00:

   *  Added note on relationship to existing RFCs.

   *  Posed the question of whether packet-congestion could become
      common and deferred it to the IRTF ICCRG.  Added ref to the
      dual-resource queue (DRQ) proposal.

   *  Changed PCN references from the PCN charter & architecture to
      the PCN marking behaviour draft most likely to imminently
      become the standards track WG item.

From -01 to -02:

   *  Abstract reorganised to align with clearer separation of issues
      in the memo.

   *  Introduction reorganised with motivating arguments removed to
      new Section 3.

   *  Clarified avoiding lock-out of large packets is not the main or
      only motivation for RED.

   *  Mentioned choice of drop or marking explicitly throughout,
      rather than trying to coin a word to mean either.

   *  Generalised the discussion throughout to any packet forwarding
      function on any network equipment, not just routers.

   *  Clarified the last point about why this is a good time to sort
      out this issue: because it will be hard / impossible to design
      new transports unless we decide whether the network or the
      transport is allowing for packet size.

   *  Added statement explaining the horizon of the memo is long
      term, but with short term expediency in mind.

   *  Added material on scaling congestion control with packet size
      (Section 3.4).

   *  Separated out issue of normalising TCP's bit rate from issue of
      preference to control packets (Section 3.2).

   *  Divided up Congestion Measurement section for clarity,
      including new material on fixed size packet buffers and buffer
      carving (Section 4.1.1 & Section 4.2.1) and on congestion
      measurement in wireless link technologies without queues
      (Section 4.1.2).
   *  Added section on 'Making Transports Robust against Control
      Packet Losses' (Section 4.2.3) with existing & new material
      included.

   *  Added tabulated results of vendor survey on byte-mode drop
      variant of RED (Table 3).

From -00 to -01:

   *  Clarified applicability to drop as well as ECN.

   *  Highlighted DoS vulnerability.

   *  Emphasised that drop-tail suffers from similar problems to
      byte-mode drop, so only byte-mode drop should be turned off,
      not RED itself.

   *  Clarified the original apparent motivations for recommending
      byte-mode drop included protecting SYNs and pure ACKs more than
      equalising the bit rates of TCPs with different segment sizes.
      Removed some conjectured motivations.

   *  Added support for updates to TCP in progress (ackcc &
      ecn-syn-ack).

   *  Updated survey results with newly arrived data.

   *  Pulled all recommendations together into the conclusions.

   *  Moved some detailed points into two additional appendices and a
      note.

   *  Considerable clarifications throughout.

   *  Updated references.

Authors' Addresses

   Bob Briscoe
   BT
   B54/77, Adastral Park
   Martlesham Heath
   Ipswich  IP5 3RE
   UK

   Phone: +44 1473 645196
   EMail: bob.briscoe@bt.com
   URI:   http://bobbriscoe.net/

   Jukka Manner
   Aalto University
   Department of Communications and Networking (Comnet)
   P.O. Box 13000
   FIN-00076 Aalto
   Finland

   Phone: +358 9 470 22481
   EMail: jukka.manner@aalto.fi
   URI:   http://www.netlab.tkk.fi/~jmanner/