Transport Area Working Group                                 B. Briscoe
Internet-Draft                                                        BT
Updates: 2309 (if approved)                             October 23, 2009
Intended status: Informational
Expires: April 26, 2010

                Byte and Packet Congestion Notification
                  draft-ietf-tsvwg-byte-pkt-congest-01

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.
   Note that other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on April 26, 2010.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.

Abstract

   This memo concerns dropping or marking packets using active queue
   management (AQM) such as random early detection (RED) or pre-
   congestion notification (PCN).  The primary conclusion is that
   packet size should be taken into account when transports read
   congestion indications, not when network equipment writes them.
   Reducing drop of small packets has some tempting advantages: i) it
   drops fewer control packets, which tend to be small, and ii) it
   makes TCP's bit-rate less dependent on packet size.  However, there
   are ways of addressing these issues at the transport layer, rather
   than reverse engineering network forwarding to fix specific
   transport problems.  Network layer algorithms like the byte-mode
   packet drop variant of RED should not be used to drop fewer small
   packets, because that creates a perverse incentive for transports
   to use tiny segments, consequently also opening up a DoS
   vulnerability.  In this respect, this memo updates RFC 2309.

Table of Contents

   1.  Introduction
     1.1.  Requirements Notation
   2.  Motivating Arguments
     2.1.  Scaling Congestion Control with Packet Size
     2.2.  Avoiding Perverse Incentives to (ab)use Smaller Packets
     2.3.  Small != Control
     2.4.  Implementation Efficiency
   3.  Working Definition of Congestion Notification
   4.  Congestion Measurement
     4.1.  Congestion Measurement by Queue Length
       4.1.1.  Fixed Size Packet Buffers
     4.2.  Congestion Measurement without a Queue
   5.  Idealised Wire Protocol Coding
   6.  The State of the Art
     6.1.  Congestion Measurement: Status
     6.2.  Congestion Coding: Status
       6.2.1.  Network Bias when Encoding
       6.2.2.  Transport Bias when Decoding
       6.2.3.  Making Transports Robust against Control Packet Losses
       6.2.4.  Congestion Coding: Summary of Status
   7.  Outstanding Issues and Next Steps
     7.1.  Bit-congestible World
     7.2.  Bit- & Packet-congestible World
   8.  Security Considerations
   9.  Conclusions
   10. Acknowledgements
   11. Comments Solicited
   12. References
     12.1. Normative References
     12.2. Informative References
   Editorial Comments
   Appendix A.  Example Scenarios
     A.1.  Notation
     A.2.  Bit-congestible resource, equal bit rates (Ai)
     A.3.  Bit-congestible resource, equal packet rates (Bi)
     A.4.  Pkt-congestible resource, equal bit rates (Aii)
     A.5.  Pkt-congestible resource, equal packet rates (Bii)
   Appendix B.  Congestion Notification Definition: Further
                Justification
   Appendix C.  Byte-mode Drop Complicates Policing Congestion
                Response
   Author's Address

Changes from Previous Versions

   To be removed by the RFC Editor on publication.

   Full incremental diffs between each version are available at <...>
   or <...> (courtesy of the rfcdiff tool):

   From -00 to -01 (this version):

   *  Minor clarifications throughout and updated references

   From briscoe-byte-pkt-mark-02 to ietf-byte-pkt-congest-00:

   *  Added note on relationship to existing RFCs

   *  Posed the question of whether packet-congestion could become
      common and deferred it to the IRTF ICCRG.  Added ref to the
      dual-resource queue (DRQ) proposal.

   *  Changed PCN references from the PCN charter & architecture to
      the PCN marking behaviour draft most likely to imminently become
      the standards track WG item.

   From -01 to -02:

   *  Abstract reorganised to align with clearer separation of issues
      in the memo.

   *  Introduction reorganised with motivating arguments moved to new
      Section 2.

   *  Clarified that avoiding lock-out of large packets is not the
      main or only motivation for RED.

   *  Mentioned the choice of drop or marking explicitly throughout,
      rather than trying to coin a word to mean either.

   *  Generalised the discussion throughout to any packet forwarding
      function on any network equipment, not just routers.

   *  Clarified the last point about why this is a good time to sort
      out this issue: because it will be hard / impossible to design
      new transports unless we decide whether the network or the
      transport is allowing for packet size.

   *  Added a statement explaining that the horizon of the memo is
      long term, but with short term expediency in mind.

   *  Added material on scaling congestion control with packet size
      (Section 2.1).

   *  Separated out the issue of normalising TCP's bit rate from the
      issue of preference to control packets (Section 2.3).
   *  Divided up the Congestion Measurement section for clarity,
      including new material on fixed size packet buffers and buffer
      carving (Section 4.1.1 & Section 6.2.1) and on congestion
      measurement in wireless link technologies without queues
      (Section 4.2).

   *  Added a section on 'Making Transports Robust against Control
      Packet Losses' (Section 6.2.3) with existing & new material
      included.

   *  Added tabulated results of the vendor survey on the byte-mode
      drop variant of RED (Table 2).

   From -00 to -01:

   *  Clarified applicability to drop as well as ECN.

   *  Highlighted DoS vulnerability.

   *  Emphasised that drop-tail suffers from similar problems to
      byte-mode drop, so only byte-mode drop should be turned off, not
      RED itself.

   *  Clarified that the original apparent motivations for
      recommending byte-mode drop included protecting SYNs and pure
      ACKs more than equalising the bit rates of TCPs with different
      segment sizes.  Removed some conjectured motivations.

   *  Added support for updates to TCP in progress (ackcc & ecn-syn-
      ack).

   *  Updated survey results with newly arrived data.

   *  Pulled all recommendations together into the conclusions.

   *  Moved some detailed points into two additional appendices and a
      note.

   *  Considerable clarifications throughout.

   *  Updated references.

1.  Introduction

   When notifying congestion, the problem of how (and whether) to take
   packet sizes into account has exercised the minds of researchers
   and practitioners for as long as active queue management (AQM) has
   been discussed.  Indeed, one reason AQM was originally introduced
   was to reduce the lock-out effects that small packets can have on
   large packets in drop-tail queues.  This memo aims to state the
   principles we should be using and to come to conclusions on what
   these principles will mean for future protocol design, taking into
   account the deployments we have already.

   Note that the byte vs. packet dilemma concerns congestion
   notification irrespective of whether it is signalled implicitly by
   drop or using explicit congestion notification (ECN [RFC3168] or
   PCN [I-D.ietf-pcn-marking-behaviour]).  Throughout this document,
   unless clear from the context, the term marking will be used to
   mean notifying congestion explicitly, while congestion notification
   will be used to mean notifying congestion either implicitly by drop
   or explicitly by marking.

   If the load on a resource depends on the rate at which packets
   arrive, it is called packet-congestible.  If the load depends on
   the rate at which bits arrive, it is called bit-congestible.

   Examples of packet-congestible resources are route look-up engines
   and firewalls, because their load depends on how many packet
   headers they have to process.  Examples of bit-congestible
   resources are transmission links, radio power and most buffer
   memory, because their load depends on how many bits they have to
   transmit or store.  Some machine architectures use fixed size
   packet buffers, so buffer memory in these cases is packet-
   congestible (see Section 4.1.1).

   Note that information is generally processed or transmitted with a
   minimum granularity greater than a bit (e.g. octets).  The
   appropriate granularity for the resource in question SHOULD be
   used, but for the sake of brevity we will talk in terms of bytes in
   this memo.
   Resources may be congestible at higher levels of granularity than
   packets; for instance, stateful firewalls are flow-congestible and
   call-servers are session-congestible.  This memo focuses on
   congestion of connectionless resources, but the same principles may
   be applicable for congestion notification protocols controlling
   per-flow and per-session processing or state.

   The byte vs. packet dilemma arises at three stages in the
   congestion notification process:

   Measuring congestion:  When the congested resource decides locally
      how to measure how congested it is.  (Should the queue be
      measured in bytes or packets?);

   Coding congestion notification into the wire protocol:  When the
      congested resource decides how to notify the level of
      congestion.  (Should the level of notification depend on the
      byte-size of each particular packet carrying the notification?);

   Decoding congestion notification from the wire protocol:  When the
      transport interprets the notification.  (Should the byte-size of
      a missing or marked packet be taken into account?).

   In RED, whether to use packets or bytes when measuring queues is
   called packet-mode or byte-mode queue measurement.  This choice is
   now fairly well understood but is included in Section 4 to document
   it in the RFC series.

   The controversy is mainly around the other two stages: whether to
   allow for packet size when the network codes or when the transport
   decodes congestion notification.  In RED, the variant that reduces
   drop probability for packets based on their size in bytes is called
   byte-mode drop, while the variant that doesn't is called packet-
   mode drop.  Whether queues are measured in bytes or packets is an
   orthogonal choice, termed byte-mode queue measurement or packet-
   mode queue measurement.

   Currently, the RFC series is silent on this matter other than a
   paper trail of advice referenced from [RFC2309], which
   conditionally recommends byte-mode (packet-size dependent) drop
   [pktByteEmail].  However, none of the implementers who responded to
   our survey (Section 6.2.4) has followed this advice.  The primary
   purpose of this memo is to build a definitive consensus against
   deliberate preferential treatment for small packets in AQM
   algorithms and to record this advice within the RFC series.

   Now is a good time to discuss whether fairness between different
   sized packets would best be implemented in the network layer, or at
   the transport, for a number of reasons:

   1.  The packet vs. byte issue requires speedy resolution because
       the IETF pre-congestion notification (PCN) working group is
       about to standardise the external behaviour of a PCN congestion
       notification (AQM) algorithm
       [I-D.ietf-pcn-marking-behaviour];

   2.  [RFC2309] says RED may either take account of packet size or
       not when dropping, but gives no recommendation between the two,
       referring instead to advice on the performance implications in
       an email [pktByteEmail], which recommends byte-mode drop.
       Further, just before RFC2309 was issued, an addendum was added
       to the archived email that revisited the issue of packet vs.
       byte-mode drop in its last paragraph, making the recommendation
       less clear-cut;
   3.  Without the present memo, the only advice in the RFC series on
       packet size bias in AQM algorithms would be a reference to an
       archived email in [RFC2309] (including an addendum at the end
       of the email to correct the original);

   4.  The IRTF Internet Congestion Control Research Group (ICCRG)
       recently took on the challenge of building consensus on what
       common congestion control support should be required from
       network forwarding functions in future
       [I-D.irtf-iccrg-welzl-congestion-control-open-research].  The
       wider Internet community needs to discuss whether the
       complexity of adjusting for packet size should be in the
       network or in transports;

   5.  Given there are many good reasons why larger path max
       transmission units (PMTUs) would help solve a number of scaling
       issues, we don't want to create any bias against large packets
       that is greater than their true cost;

   6.  The IETF has started to consider the question of fairness
       between flows that use different packet sizes (e.g. in the
       small-packet variant of TCP-friendly rate control, TFRC-SP
       [RFC4828]).  Given transports with different packet sizes, if
       we don't decide whether the network or the transport should
       allow for packet size, it will be hard if not impossible to
       design any transport protocol so that its bit-rate relative to
       other transports meets design guidelines [RFC5033].  (Note
       however that, if the concern were fairness between users rather
       than between flows [Rate_fair_Dis], relative rates between
       flows would have to come under run-time control rather than
       being embedded in protocol designs.)

   This memo is initially concerned with how we should correctly scale
   congestion control functions with packet size for the long term.
   But it also recognises that expediency may be necessary to deal
   with existing widely deployed protocols that don't live up to the
   long term goal.  It turns out that the 'correct' variant of RED to
   deploy seems to be the one everyone has deployed, and no-one who
   responded to our survey has implemented the other variant.
   However, at the transport layer, TCP congestion control is a widely
   deployed protocol that we argue doesn't scale correctly with packet
   size.  To date this hasn't been a significant problem because most
   TCPs have been used with similar packet sizes.  But, as we design
   new congestion controls, we should build in scaling with packet
   size rather than assuming we should follow TCP's example.

   Motivating arguments for our advice are given next in Section 2.
   Then the body of the memo starts from first principles, defining
   congestion notification in Section 3, then determining the correct
   way to measure congestion (Section 4) and to design an idealised
   congestion notification protocol (Section 5).  It then surveys the
   advice given previously in the RFC series, the research literature
   and the deployed legacy (Section 6) before listing outstanding
   issues (Section 7) that will need resolution both to achieve the
   ideal protocol and to handle legacy.  After discussing security
   considerations (Section 8), strong recommendations for the way
   forward are given in the conclusions (Section 9).

1.1.  Requirements Notation

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].
2.  Motivating Arguments

2.1.  Scaling Congestion Control with Packet Size

   There are two ways of interpreting a dropped or marked packet.  It
   can either be considered as a single loss event or as loss/marking
   of the bytes in the packet.  Here we try to design a test to see
   which approach scales with packet size.

   Given that bit-congestible is the more common case, consider a bit-
   congestible link shared by many flows, so that each busy period
   tends to cause packets to be lost from different flows.  The test
   compares two identical scenarios with the same applications, the
   same numbers of sources and the same load.  But the sources break
   the load into large packets in one scenario and small packets in
   the other.  Of course, because the load is the same, there will be
   proportionately more packets in the small packet case.

   The test of whether a congestion control scales with packet size is
   that it should respond in the same way to the same congestion
   excursion, irrespective of the size of the packets that the bytes
   causing congestion happen to be broken down into.

   A bit-congestible queue suffering a congestion excursion has to
   drop or mark the same excess bytes whether they are in a few large
   packets or many small packets.  So for the same congestion
   excursion, the same number of bytes has to be shed to get the load
   back to its operating point.  But, of course, for smaller packets
   more packets will have to be discarded to shed the same bytes.

   If all the transports interpret each drop/mark as a single loss
   event irrespective of the size of the packet dropped, those with
   smaller packets will respond more to the same congestion excursion,
   failing our test.  On the other hand, if they respond
   proportionately less when smaller packets are dropped/marked,
   overall they will be able to respond the same to the same
   congestion excursion.

   Therefore, for a congestion control to scale with packet size it
   should respond to dropped or marked bytes (as TFRC-SP [RFC4828]
   effectively does), not just to dropped or marked packets
   irrespective of packet size (as TCP does).

   The email [pktByteEmail] referred to by RFC2309 says the question
   of whether a packet's own size should affect its drop probability
   "depends on the dominant end-to-end congestion control mechanisms".
   But we argue the network layer should not be optimised for whatever
   transport is predominant.

   TCP congestion control ensures that flows competing for the same
   resource each maintain the same number of segments in flight,
   irrespective of segment size.  So under similar conditions, flows
   with different segment sizes will get different bit rates.  But
   even though reducing the drop probability of small packets helps
   ensure TCPs with different packet sizes will achieve similar bit
   rates, we argue this should be achieved in TCP itself, not in the
   network.

   Effectively, favouring small packets is reverse engineering of the
   network layer around TCP, contrary to the excellent advice in
   [RFC3426], which asks designers to question "Why are you proposing
   a solution at this layer of the protocol stack, rather than at
   another layer?"
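   As a rough illustration of the scaling test above (not part of any
   specification, and with arbitrary assumed numbers), the following
   Python sketch contrasts a transport that counts loss events with
   one that counts lost bytes when the same excess bytes are shed:

      # A bit-congestible queue sheds the same excess bytes, however
      # those bytes happen to be packaged.
      EXCESS_BYTES = 30000

      for pkt_size in (1500, 60):           # same load, two packagings
          drops = EXCESS_BYTES // pkt_size  # more small packets dropped
          loss_events = drops               # event-counting transport:
                                            #  responds 25x more to the
                                            #  small packets: fails test
          lost_bytes = drops * pkt_size     # byte-counting transport:
                                            #  same response: passes
          print(pkt_size, drops, loss_events, lost_bytes)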
2.2.  Avoiding Perverse Incentives to (ab)use Smaller Packets

   Increasingly, it is being recognised that a protocol design must
   take care not to cause unintended consequences by giving the
   parties in the protocol exchange perverse incentives
   [Evol_cc][RFC3426].  Again, imagine a scenario where the same bit
   rate of packets will contribute the same to congestion of a link
   irrespective of whether it is sent as fewer larger packets or more
   smaller packets.  A protocol design that caused larger packets to
   be more likely to be dropped than smaller ones would be dangerous
   in this case:

   Malicious transports:  A queue that gives an advantage to small
      packets can be used to amplify the force of a flooding attack.
      By sending a flood of small packets, the attacker can get the
      queue to discard more traffic in large packets, allowing more
      attack traffic to get through to cause further damage.  Such a
      queue allows attack traffic to have a disproportionately large
      effect on regular traffic without the attacker having to do much
      work.

      Note that, although the byte-mode drop variant of RED amplifies
      small packet attacks, drop-tail queues amplify small packet
      attacks even more (see Security Considerations in Section 8).
      Wherever possible neither should be used.

   Normal transports:  Even if a transport is not malicious, if it
      finds small packets go faster, it will tend to act in its own
      interest and use them.  Queues that give advantage to small
      packets create an evolutionary pressure for transports to send
      at the same bit-rate but break their data stream down into tiny
      segments to reduce their drop rate.  Encouraging a high volume
      of tiny packets might in turn unnecessarily overload a
      completely unrelated part of the system, perhaps one more
      limited by header-processing than bandwidth.

   Imagine two unresponsive flows arrive at a bit-congestible
   transmission link each with the same bit rate, say 1Mbps, but one
   consists of 1500B packets and the other of 60B packets, which are
   25x smaller.  Consider a scenario where gentle RED [gentle_RED] is
   used, along with the variant of RED we advise against, i.e. where
   the RED algorithm is configured to adjust the drop probability of
   packets in proportion to each packet's size (byte-mode packet
   drop).  In this case, if RED drops 25% of the larger packets, it
   will aim to drop 1% of the smaller packets (but in practice it may
   drop more as congestion increases
   [RFC4828](S.B.4)[Note_Variation]).  Even though both flows arrive
   with the same bit rate, the bit rate the RED queue aims to pass to
   the line will be 750kbps for the flow of larger packets but 990kbps
   for the smaller packets (though because of rate variation it will
   be less than this target).
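   For concreteness, this minimal Python sketch (illustrative only)
   reproduces the arithmetic of the example above, modelling byte-mode
   packet drop as scaling each packet's drop probability by its size
   relative to the maximum packet size:

      MAX_PKT = 1500         # bytes; also the larger flow's packet size
      p_max = 0.25           # drop probability applied to 1500B packets

      for pkt_size in (1500, 60):
          p = p_max * pkt_size / MAX_PKT   # byte-mode drop: 25% vs 1%
          passed = 1e6 * (1 - p)           # of a 1Mbps arriving flow
          print("%5dB packets: drop %2d%%, pass %3dkbps"
                % (pkt_size, p * 100, passed / 1e3))
      # -> 1500B packets: drop 25%, pass 750kbps
      # ->   60B packets: drop  1%, pass 990kbps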
   It can be seen that this behaviour reopens the same denial of
   service vulnerability that drop tail queues offer to floods of
   small packets, though not necessarily as strongly (see Section 8).

2.3.  Small != Control

   It is tempting to drop small packets with lower probability to
   improve performance, because many control packets are small (TCP
   SYNs & ACKs, DNS queries & responses, SIP messages, HTTP GETs, etc)
   and dropping fewer control packets considerably improves
   performance.  However, we must not give control packets preference
   purely by virtue of their smallness, otherwise it is too easy for
   any data source to get the same preferential treatment simply by
   sending data in smaller packets.  Again, we should not create a
   perverse incentive that favours small packets, when what we intend
   is to favour control packets.

   Just because many control packets are small does not mean all small
   packets are control packets.

   So again, rather than fix these problems in the network layer, we
   argue that the transport should be made more robust against losses
   of control packets (see 'Making Transports Robust against Control
   Packet Losses' in Section 6.2.3).

2.4.  Implementation Efficiency

   Allowing for packet size at the transport rather than in the
   network ensures that neither the network nor the transport needs to
   do a multiply operation--multiplication by packet size is
   effectively achieved as a repeated add when the transport adds to
   its count of marked bytes as each congestion event is fed to it.
   This isn't a principled reason in itself, but it is a happy
   consequence of the other principled reasons.
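   A minimal sketch of this repeated add, purely for illustration (the
   variable names are ours, not from any specification):

      congested_bytes = 0   # transport's running count of marked bytes

      def on_congestion_event(pkt_size):
          # Called once per lost or marked packet; accumulating each
          # packet's byte-size replaces any multiply by packet size.
          global congested_bytes
          congested_bytes += pkt_size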
3.  Working Definition of Congestion Notification

   Rather than attempt what many have tried and failed to achieve,
   this memo will not try to define congestion.  It will give a
   working definition of what congestion notification should be taken
   to mean for this document.  Congestion notification is a changing
   signal that aims to communicate the ratio E/L, where E is the
   instantaneous excess load offered to a resource that it cannot (or
   would not) serve and L is the instantaneous offered load.  (For
   instance, if 10Mbps were offered to a resource that would serve
   only 8Mbps at that instant, congestion notification would aim to
   communicate E/L = 2/10 = 0.2.)

   The phrase `would not serve' is added, because AQM systems (e.g.
   RED, PCN [I-D.ietf-pcn-marking-behaviour]) use a virtual capacity
   smaller than actual capacity, then notify congestion of this
   virtual capacity in order to avoid congestion of the actual
   capacity.

   Note that the denominator is offered load, not capacity.  Therefore
   congestion notification is a real number bounded by the range
   [0,1].  This ties in with the most well-understood measure of
   congestion notification: drop fraction (often loosely called loss
   rate).  It also means that congestion has a natural interpretation
   as a probability: the probability of offered traffic not being
   served (or being marked as at risk of not being served).
   Appendix B describes a further incidental benefit that arises from
   using load as the denominator of congestion notification.

4.  Congestion Measurement

4.1.  Congestion Measurement by Queue Length

   Queue length is usually the most correct and simplest way to
   measure congestion of a resource.  To avoid the pathological
   effects of drop tail, an AQM function can then be used to transform
   queue length into the probability of dropping or marking a packet
   (e.g. RED's piecewise linear function between thresholds).  If the
   resource is bit-congestible, the length of the queue SHOULD be
   measured in bytes.  If the resource is packet-congestible, the
   length of the queue SHOULD be measured in packets.  No other choice
   makes sense, because the number of packets waiting in the queue
   isn't relevant if the resource gets congested by bytes, and vice
   versa.  We discuss the implications for RED's byte-mode and packet-
   mode queue measurement in Section 6.
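   To make the shape of such an AQM function concrete, here is an
   illustrative sketch of a RED-like piecewise linear transform (the
   parameter values and the use of an unsmoothed queue length are
   simplifying assumptions of ours):

      def drop_or_mark_probability(queue_len, min_th, max_th,
                                   max_p=0.1):
          # queue_len and both thresholds must share the units of the
          # congestible resource: bytes if bit-congestible, packets if
          # packet-congestible.
          if queue_len < min_th:
              return 0.0
          if queue_len < max_th:
              return max_p * (queue_len - min_th) / (max_th - min_th)
          return 1.0   # above max_th, drop/mark everything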
4.1.1.  Fixed Size Packet Buffers

   Some, mostly older, queuing hardware sets aside fixed sized buffers
   in which to store each packet in the queue.  Also, with some
   hardware, any fixed sized buffers not completely filled by a packet
   are padded when transmitted to the wire.  If we imagine a
   theoretical forwarding system with both queuing and transmission in
   fixed, MTU-sized units, it should clearly be treated as packet-
   congestible, because the queue length in packets would be a good
   model of congestion of the lower layer link.

   If we now imagine a hybrid forwarding system with transmission
   delay largely dependent on the byte-size of packets but buffers of
   one MTU per packet, strictly it would require a more complex
   algorithm to determine the probability of congestion.  It should be
   treated as two resources in sequence, where the sum of the byte-
   sizes of the packets within each packet buffer models congestion of
   the line while the length of the queue in packets models congestion
   of the queue.  Then the probability of congesting the forwarding
   buffer would be a conditional probability--conditional on the
   previously calculated probability of congesting the line.

   However, in systems that use fixed size buffers, it is unusual for
   all the buffers used by an interface to be the same size.
   Typically pools of different sized buffers are provided (Cisco uses
   the term 'buffer carving' for the process of dividing up memory
   into these pools [IOSArch]).  Usually, if the pool of small buffers
   is exhausted, arriving small packets can borrow space in the pool
   of large buffers, but not vice versa.  However, it is easier to
   work out what should be done if we temporarily set aside the
   possibility of such borrowing.  Then, with fixed pools of buffers
   for different sized packets and no borrowing, the size of each pool
   and the current queue length in each pool would both be measured in
   packets.  So an AQM algorithm would have to maintain the queue
   length for each pool, and judge whether to drop/mark a packet of a
   particular size by looking at the pool for packets of that size and
   using the length (in packets) of its queue.

   We now return to the issue we temporarily set aside: small packets
   borrowing space in larger buffers.  In this case, the only
   difference is that the pools for smaller packets have a maximum
   queue size that includes all the pools for larger packets.  And
   every time a packet takes a larger buffer, the current queue size
   has to be incremented for all queues in the pools of buffers less
   than or equal to the buffer size used.

   We will return to borrowing of fixed sized buffers when we discuss
   biasing the drop/marking probability of a specific packet because
   of its size in Section 6.2.1.  But here we can give a simple
   summary of the present discussion on how to measure the length of
   queues of fixed buffers: no matter how complicated the scheme is,
   ultimately any fixed buffer system will need to measure its queue
   length in packets not bytes.
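   The queue accounting just described might look like the following
   minimal sketch (the pool sizes and names are assumptions of ours,
   purely for illustration):

      POOL_SIZES = [128, 512, 1500]  # buffer sizes (bytes) of each pool
      queue_len = {size: 0 for size in POOL_SIZES}  # lengths in packets

      def count_enqueue(buffer_used):
          # A packet that takes a buffer of size `buffer_used` counts
          # against the queue of every pool of buffers of that size or
          # smaller, because borrowing lets the packets of those pools
          # use this buffer too.
          for size in POOL_SIZES:
              if size <= buffer_used:
                  queue_len[size] += 1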
4.2.  Congestion Measurement without a Queue

   AQM algorithms are nearly always described assuming there is a
   queue for a congested resource and the algorithm can use the queue
   length to determine the probability that it will drop or mark each
   packet.  But not all congested resources lead to queues.  For
   instance, wireless spectrum is bit-congestible (for a given coding
   scheme), because interference increases with the rate at which bits
   are transmitted.  But wireless link protocols do not always
   maintain a queue that depends on spectrum interference.  Similarly,
   power limited resources are also usually bit-congestible if energy
   is primarily required for transmission rather than header
   processing, but it is rare for a link protocol to build a queue as
   it approaches maximum power.

   However, AQM algorithms don't require a queue in order to work.
   For instance spectrum congestion can be modelled by signal quality
   using target bit-energy-to-noise-density ratio.  And, to model
   radio power exhaustion, transmission power levels can be measured
   and compared to the maximum power available.  [ECNFixedWireless]
   proposes a practical and theoretically sound way to combine
   congestion notification for different bit-congestible resources at
   different layers along an end to end path, whether wireless or
   wired, and whether with or without queues.
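   As a purely speculative sketch of the idea (no such algorithm is
   specified here, and the threshold is an arbitrary assumption of
   ours), a power-limited link could derive a marking probability from
   how close its transmission power is to the maximum available, by
   analogy with queue-based AQM:

      def mark_probability(tx_power, max_power, threshold=0.8):
          # Ramp marking linearly from 0 to 1 as power utilisation
          # climbs from the threshold fraction towards the maximum.
          utilisation = tx_power / max_power
          if utilisation <= threshold:
              return 0.0
          return min(1.0, (utilisation - threshold) / (1 - threshold))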
5.  Idealised Wire Protocol Coding

   We will start by inventing an idealised congestion notification
   protocol before discussing how to make it practical.  The idealised
   protocol is shown to be correct using examples in Appendix A.

   Congestion notification involves the congested resource coding a
   congestion notification signal into the packet stream and the
   transports decoding it.  The idealised protocol uses two different
   (imaginary) fields in each datagram to signal congestion: one for
   byte congestion and one for packet congestion.

   We are not saying two ECN fields will be needed (and we are not
   saying that somehow a resource should be able to drop a packet in
   one of two different ways so that the transport can distinguish
   which sort of drop it was!).  These two congestion notification
   channels are just a conceptual device.  They allow us to defer
   having to decide whether to distinguish between byte and packet
   congestion when the network resource codes the signal or when the
   transport decodes it.

   However, although this idealised mechanism isn't intended for
   implementation, we do want to emphasise that we may need to find a
   way to implement it, because it could become necessary to somehow
   distinguish between bit and packet congestion [RFC3714].  Currently
   a design goal of network processing equipment such as routers and
   firewalls is to keep packet processing uncongested even under worst
   case bit rates with minimum packet sizes.  Therefore, packet-
   congestion is currently rare, but there is no guarantee that it
   will not become common with future technology trends.

   The idealised wire protocol is given below.  It accounts for packet
   sizes at the transport layer, not in the network, and then only in
   the case of bit-congestible resources.  This avoids the perverse
   incentive to send smaller packets and the DoS vulnerability that
   would otherwise result if the network were to bias towards them
   (see the motivating argument about avoiding perverse incentives in
   Section 2.2):

   1.  A packet-congestible resource trying to code congestion level
       p_p into a packet stream should mark the idealised `packet
       congestion' field in each packet with probability p_p
       irrespective of the packet's size.  The transport should then
       take a packet with the packet congestion field marked to mean
       just one mark, irrespective of the packet size.

   2.  A bit-congestible resource trying to code time-varying byte-
       congestion level p_b into a packet stream should mark the `byte
       congestion' field in each packet with probability p_b, again
       irrespective of the packet's size.  Unlike before, the
       transport should take a packet with the byte congestion field
       marked to count as a mark on each byte in the packet.
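   Purely as a conceptual sketch (these imaginary fields do not exist,
   and the decoder below is ours, not a specification), a transport
   decoding the two fields would accumulate the two kinds of
   congestion like this:

      pkt_congestion_marks = 0   # one unit per marked packet, any size
      byte_congestion_bytes = 0  # marked packets count all their bytes

      def decode(pkt_size, pkt_congestion_flag, byte_congestion_flag):
          global pkt_congestion_marks, byte_congestion_bytes
          if pkt_congestion_flag:
              pkt_congestion_marks += 1           # size-independent
          if byte_congestion_flag:
              byte_congestion_bytes += pkt_size   # every byte counts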
   The worked examples in Appendix A show that transports can extract
   sufficient and correct congestion notification from these protocols
   for cases when two flows with different packet sizes have matching
   bit rates or matching packet rates.  Examples are also given that
   mix these two flows into one to show that a flow with mixed packet
   sizes would still be able to extract sufficient and correct
   information.

   Sufficient and correct congestion information means that there is
   sufficient information for the two different types of transport
   requirements:

   Ratio-based:  Established transport congestion controls like TCP's
      [RFC5681] aim to achieve equal segment rates per RTT through the
      same bottleneck--TCP friendliness [RFC3448].  They work with the
      ratio of dropped to delivered segments (or marked to unmarked
      segments in the case of ECN).  The example scenarios show that
      these ratio-based transports are effectively the same whether
      counting in bytes or packets, because the units cancel out.
      (Incidentally, this is why TCP's bit rate is still proportional
      to packet size even when byte-counting is used, as recommended
      for TCP in [RFC5681], mainly for orthogonal security reasons.)

   Absolute-target-based:  Other congestion controls proposed in the
      research community aim to limit the volume of congestion caused
      to a constant weight parameter.  [MulTCP][WindowPropFair] are
      examples of weighted proportionally fair transports designed for
      cost-fair environments [Rate_fair_Dis].  In this case, the
      transport requires a count (not a ratio) of dropped/marked bytes
      in the bit-congestible case and of dropped/marked packets in the
      packet-congestible case.

6.  The State of the Art

   The original 1993 paper on RED [RED93] proposed two options for the
   RED active queue management algorithm: packet mode and byte mode.
   Packet mode measured the queue length in packets and dropped (or
   marked) individual packets with a probability independent of their
   size.  Byte mode measured the queue length in bytes and marked an
   individual packet with probability in proportion to its size
   (relative to the maximum packet size).  In the paper's outline of
   further work, it was stated that no recommendation had been made on
   whether the queue size should be measured in bytes or packets, but
   noted that the difference could be significant.

   When RED was recommended for general deployment in 1998 [RFC2309],
   the two modes were mentioned, implying that the choice between them
   was a question of performance, and referring to a 1997 email
   [pktByteEmail] for advice on tuning.  This email clarified that
   there were in fact two orthogonal choices: whether to measure queue
   length in bytes or packets (Section 6.1 below) and whether the drop
   probability of an individual packet should depend on its own size
   (Section 6.2 below).

6.1.  Congestion Measurement: Status

   The choice of which metric to use to measure queue length was left
   open in RFC2309.  It is now well understood that queues for bit-
   congestible resources should be measured in bytes, and queues for
   packet-congestible resources should be measured in packets (see
   Section 4).

   Where legacy buffers are not, or cannot be, configured to the above
   guideline, we don't have to make allowances for such legacy in
   future protocol design.  If a bit-congestible buffer is measured in
   packets, the operator will have set the thresholds mindful of a
   typical mix of packet sizes.  Any AQM algorithm on such a buffer
   will be oversensitive to high proportions of small packets, e.g. a
   DoS attack, and undersensitive to high proportions of large
   packets.  But an operator can safely keep such a legacy buffer
   because any undersensitivity during unusual traffic mixes cannot
   lead to congestion collapse given the buffer will eventually revert
   to tail drop, discarding proportionately more large packets.

   Some modern queue implementations give a choice for setting RED's
   thresholds in byte-mode or packet-mode.  This may merely be an
   administrator-interface preference, not altering how the queue
   itself is measured, but on some hardware it does actually change
   the way the queue is measured.  Whether a resource is bit-
   congestible or packet-congestible is a property of the resource, so
   an admin SHOULD NOT ever need to, or be able to, configure the way
   a queue measures itself.

   We believe the question of whether to measure queues in bytes or
   packets is fairly well understood these days.  The only outstanding
   issues concern how to measure congestion when the queue is bit-
   congestible but the resource is packet-congestible or vice versa
   (see Section 4).  There is no controversy over what should be done;
   it is just that working out what should be done requires expertise
   in probability, and even then it is not always easy to find a
   practical algorithm to implement it.

6.2.  Congestion Coding: Status

6.2.1.  Network Bias when Encoding

   The previously mentioned email [pktByteEmail] referred to by
   [RFC2309] said that the choice over whether a packet's own size
   should affect its drop probability "depends on the dominant end-to-
   end congestion control mechanisms".  [Section 2 argues against this
   approach, citing the excellent advice in [RFC3426].]  The
   referenced email went on to argue that drop probability should
   depend on the size of the packet being considered for drop if the
   resource is bit-congestible, but not if it is packet-congestible,
   but advised that most scarce resources in the Internet were
   currently bit-congestible.  The argument continued that if packet
   drops were inflated by packet size (byte-mode dropping), "a flow's
   fraction of the packet drops is then a good indication of that
   flow's fraction of the link bandwidth in bits per second".  This
   was consistent with a referenced policing mechanism being worked on
   at the time for detecting unusually high bandwidth flows,
   eventually published in 1999 [pBox].  [The problem could have been
   solved by making the policing mechanism count the volume of bytes
   randomly dropped, not the number of packets.]

   A few months before RFC2309 was published, an addendum was added to
   the above archived email referenced from the RFC, in which the
   final paragraph seemed to partially retract what had previously
   been said.  It clarified that the question of whether the
   probability of dropping/marking a packet should depend on its size
   was not related to whether the resource itself was bit-congestible,
   but was a completely orthogonal question.  However the only example
   given had the queue measured in packets while packet drop depended
   on the byte-size of the packet in question.  No example was given
   the other way round.

   In 2000, Cnodder et al [REDbyte] pointed out that there was an
   error in the part of the original 1993 RED algorithm that aimed to
   distribute drops uniformly, because it didn't correctly take into
   account the adjustment for packet size.  They recommended an
   algorithm called RED_4 to fix this.  But they also recommended a
   further change, RED_5, to adjust the drop rate dependent on the
   square of relative packet size.  This was indeed consistent with
   one stated motivation behind RED's byte-mode drop--that we should
   reverse engineer the network to improve the performance of dominant
   end-to-end congestion control mechanisms.

   By 2003, a further change had been made to the adjustment for
   packet size, this time in the RED algorithm of the ns2 simulator.
   Instead of taking each packet's size relative to a `maximum packet
   size' it was taken relative to a `mean packet size', intended to be
   a static value representative of the `typical' packet size on the
   link.  We have not been able to find a justification for this
   change in the literature.  However, Eddy and Allman conducted
   experiments [REDbias] that assessed how sensitive RED was to this
   parameter, amongst other things.  No-one seems to have pointed out
   that this changed algorithm can often lead to drop probabilities of
   greater than 1 [which should ring alarm bells hinting that there's
   a mistake in the theory somewhere].  On 10-Nov-2004, this variant
   of byte-mode packet drop was made the default in the ns2 simulator.
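   To see how this arises, take some assumed, purely illustrative
   numbers: scaling by packet size over a static `mean packet size' is
   unbounded above, so the result can exceed 1:

      mean_pkt_size = 500   # static configured value (bytes)
      base_p = 0.4          # drop probability RED has calculated
      pkt_size = 1500       # the packet being considered for drop

      p = base_p * pkt_size / mean_pkt_size
      print(p)              # 1.2 -- a `probability' greater than 1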
   The byte-mode drop variant of RED is, of course, not the only
   possible bias towards small packets in queueing algorithms.  We
   have already mentioned that tail-drop queues naturally tend to
   lock-out large packets once they are full.  But also queues with
   fixed sized buffers reduce the probability that small packets will
   be dropped if (and only if) they allow small packets to borrow
   buffers from the pools for larger packets.  As was explained in
   Section 4.1.1 on fixed size buffer carving, borrowing effectively
   makes the maximum queue size for small packets greater than that
   for large packets, because more buffers can be used by small
   packets while fewer will fit large packets.

   However, in itself, the bias towards small packets caused by buffer
   borrowing is perfectly correct.  Lower drop probability for small
   packets is legitimate in buffer borrowing schemes, because small
   packets genuinely congest the machine's buffer memory less than
   large packets, given they can fit in more spaces.  The bias towards
   small packets is not artificially added (as it is in RED's byte-
   mode drop algorithm); it merely reflects the reality of the way
   fixed buffer memory gets congested.  Incidentally, the bias towards
   small packets from buffer borrowing is nothing like as large as
   that of RED's byte-mode drop.

   Nonetheless, fixed-buffer memory with tail drop is still prone to
   lock out large packets, purely because of the tail-drop aspect.  So
   a good AQM algorithm like RED with packet-mode drop should be used
   with fixed buffer memories where possible.  If RED is too
   complicated to implement with multiple fixed buffer pools, the
   minimum necessary to prevent large packet lock-out is to ensure
   smaller packets never use the last available buffer in any of the
   pools for larger packets.

6.2.2.  Transport Bias when Decoding

   The above proposals to alter the network layer to give a bias
   towards smaller packets have largely carried on outside the IETF
   process (unless one counts a reference in an informational RFC to
   an archived email!).  Whereas, within the IETF, there are many
   different proposals to alter transport protocols to achieve the
   same goals, i.e. either to make the flow bit-rate take account of
   packet size, or to protect control packets from loss.  This memo
   argues that altering transport protocols is the more principled
   approach.

   A recently approved experimental RFC [RFC4828] adapts its transport
   protocol to take account of packet sizes relative to typical TCP
   packet sizes: it proposes TFRC-SP, a new small-packet variant of
   TCP-friendly rate control (TFRC [RFC3448]).  Essentially, it
   proposes a rate equation that inflates the flow rate by the ratio
   of a typical TCP segment size (1500B including TCP header) over the
   actual segment size [PktSizeEquCC].  (There are also other
   important differences of detail relative to TFRC, such as using
   virtual packets [CCvarPktSize] to avoid responding to multiple
   losses per round trip and using a minimum inter-packet interval.)
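   As a hedged sketch of the idea (simplified by us: virtual packets,
   the minimum inter-packet interval and header accounting are all
   omitted), TFRC-SP can be thought of as evaluating the standard TFRC
   throughput equation of [RFC3448] with a nominal 1500B segment
   instead of the actual, smaller one:

      from math import sqrt

      def tfrc_rate(s, R, p, b=1, t_RTO=None):
          # TFRC throughput equation [RFC3448] in bytes/second, for
          # segment size s, round trip time R and loss event rate p.
          t_RTO = t_RTO or 4 * R
          return s / (R * sqrt(2 * b * p / 3) +
                      t_RTO * 3 * sqrt(3 * b * p / 8) * p
                      * (1 + 32 * p ** 2))

      def tfrc_sp_rate(actual_seg_size, R, p):
          # Nominal 1500B segment: inflates the rate by 1500/s
          # relative to plain TFRC with the actual segment size.
          return tfrc_rate(1500, R, p)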
   Section 4.5.1 of this TFRC-SP spec discusses the implications of
   operating in an environment where queues have been configured to
   drop smaller packets with proportionately lower probability than
   larger ones.  But it only discusses TCP operating in such an
   environment, only mentioning TFRC-SP briefly when discussing how to
   define fairness with TCP.  And it only discusses the byte-mode
   dropping version of RED as it was before Cnodder et al pointed out
   that it didn't sufficiently bias towards small packets to make TCP
   independent of packet size.

   So the TFRC-SP spec doesn't address the issue of which of the
   network or the transport _should_ handle fairness between different
   packet sizes.  In its Appendix B.4 it discusses the possibility of
   both TFRC-SP and some network buffers duplicating each other's
   attempts to deliberately bias towards small packets.  But the
   discussion is not conclusive, instead reporting simulations of many
   of the possibilities in order to assess performance but not
   recommending any particular course of action.

   The paper originally proposing TFRC with virtual packets (VP-TFRC)
   [CCvarPktSize] proposed that there should perhaps be two variants
   to cater for the different variants of RED.  However, as the
   TFRC-SP authors point out, there is no way for a transport to know
   whether some queues on its path have deployed RED with byte-mode
   packet drop (except if an exhaustive survey found that no-one had
   deployed it!--see Section 6.2.4).  Incidentally, VP-TFRC also
   proposed that byte-mode RED dropping should really square the
   packet size compensation factor (like that of RED_5, but apparently
   unaware of it).

   Pre-congestion notification [I-D.ietf-pcn-marking-behaviour] is a
   proposal to use a virtual queue for AQM marking for packets within
   one Diffserv class in order to give early warning prior to any real
   queuing.  The proposed PCN marking algorithms have been designed
   not to take account of packet size when forwarding through queues.
   Instead the general principle has been to take account of the sizes
   of marked packets when monitoring the fraction of marking at the
   edge of the network.

6.2.3.  Making Transports Robust against Control Packet Losses

   Recently, two drafts have proposed changes to TCP that make it more
   robust against losing small control packets [I-D.ietf-tcpm-ecnsyn]
   [I-D.floyd-tcpm-ackcc].  In both cases they note that the case for
   these TCP changes would be weaker if RED were biased against
   dropping small packets.  We argue here that these two proposals are
   a safer and more principled way to achieve TCP performance
   improvements than reverse engineering RED to benefit TCP.

   Although no proposals exist as far as we know, it would also be
   possible and perfectly valid to make control packets robust against
   drop by explicitly requesting a lower drop probability using their
   Diffserv code point [RFC2474] to request a scheduling class with
   lower drop.

   The re-ECN protocol proposal [I-D.briscoe-tsvwg-re-ecn-tcp] is
   designed so that transports can be made more robust against losing
   control packets.  It gives queues an incentive to optionally give
   preference against drop to packets with the 'feedback not
   established' codepoint in the proposed 'extended ECN' field.
   Senders have incentives to use this codepoint sparingly, but they
   can use it on control packets to reduce their chance of being
   dropped.  For instance, the proposed modification to TCP for re-ECN
   uses this codepoint on the SYN and SYN-ACK.

   Although not brought to the IETF, a simple proposal from Wischik
   [DupTCP] suggests that the first three packets of every TCP flow
   should be routinely duplicated after a short delay.  It shows that
   this would greatly improve the chances of short flows completing
   quickly, but would hardly increase traffic levels on the Internet,
   because Internet bytes have always been concentrated in the large
   flows.  It further shows that the performance of many typical
   applications depends on completion of long serial chains of short
   messages.  It argues that, given most of the value people get from
   the Internet is concentrated within short flows, this simple
   expedient would greatly increase the value of the best efforts
   Internet at minimal cost.
6.2.4.  Congestion Coding: Summary of Status

   +-----------+----------------+-----------------+-------------------+
   | transport | RED_1 (packet  | RED_4 (linear   | RED_5 (square     |
   | cc        | mode drop)     | byte mode drop) | byte mode drop)   |
   +-----------+----------------+-----------------+-------------------+
   | TCP or    | s/sqrt(p)      | sqrt(s/p)       | 1/sqrt(p)         |
   | TFRC      |                |                 |                   |
   | TFRC-SP   | 1/sqrt(p)      | 1/sqrt(sp)      | 1/(s.sqrt(p))     |
   +-----------+----------------+-----------------+-------------------+

    Table 1: Dependence of flow bit-rate per RTT on packet size s and
    drop rate p when network and/or transport bias towards small
    packets to varying degrees

   Table 1 aims to summarise the positions we may now be in.  Each
   column shows a different possible AQM behaviour in different queues
   in the network, using the terminology of Cnodder et al outlined
   earlier (RED_1 is basic RED with packet-mode drop).  Each row shows
   a different transport behaviour: TCP [RFC5681] and TFRC [RFC3448]
   on the top row with TFRC-SP [RFC4828] below.  Suppressing all
   inessential details, the table shows that independence from packet
   size should be achievable either by not altering the TCP transport
   in a RED_5 network, or by using the small packet TFRC-SP transport
   in a network without any byte-mode dropping RED (top right and
   bottom left).  Top left is the `do nothing' scenario, while bottom
   right is the `do-both' scenario in which bit-rate would become far
   too biased towards small packets.  Of course, if any form of byte-
   mode dropping RED has been deployed on a selection of congested
   queues, each path will present a different hybrid scenario to its
   transport.
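   Each entry in Table 1 follows by substituting the drop probability
   seen by a packet of size s under each RED variant (proportional to
   p, ps or ps^2 respectively) into each transport's rate response.
   The following symbolic sketch (illustrative only; it assumes the
   sympy package is available) reproduces the entries up to constant
   factors:

      import sympy as sp

      s, p = sp.symbols('s p', positive=True)

      seen_p = {'RED_1': p, 'RED_4': p*s, 'RED_5': p*s**2}
      rate = {'TCP/TFRC': lambda q: s / sp.sqrt(q),  # bit rate per RTT
              'TFRC-SP': lambda q: (1/s) * s / sp.sqrt(q)}  # x 1500/s

      for cc, f in rate.items():
          for red, q in seen_p.items():
              print(cc, red, sp.simplify(f(q)))
      # TCP/TFRC: s/sqrt(p), sqrt(s/p), 1/sqrt(p)
      # TFRC-SP:  1/sqrt(p), 1/sqrt(s*p), 1/(s*sqrt(p))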
A survey of 84 vendors has been conducted to assess how widely drop
probability based on packet size has been implemented in RED.  Prior
to the survey, an individual approach to Cisco received confirmation
that, having checked the code-base for each of the product ranges,
Cisco has not implemented any discrimination based on packet size in
any AQM algorithm in any of its products.  An individual approach to
Alcatel-Lucent also drew confirmation that it was very likely that
none of their products contained RED code that implemented any
packet-size bias.

Turning to our more formal survey (Table 2), about 19% of those
surveyed have replied so far, giving a sample size of 16.  Although
we do not have permission to identify the respondents, we can say
that those that have responded include most of the larger vendors,
covering a large fraction of the market.  They range across the large
network equipment vendors at L3 and L2, firewall vendors and wireless
equipment vendors, as well as large software businesses with a small
selection of networking products.  So far, all those who have
responded have confirmed that they have not implemented the variant
of RED with drop probability dependent on packet size (two are fairly
sure they have not, but need to check more thoroughly); the sketch at
the end of this subsection makes precise the variant in question.

+-------------------------------+----------------+-----------------+
| Response                      | No. of vendors | %age of vendors |
+-------------------------------+----------------+-----------------+
| Not implemented               |       14       |       17%       |
| Not implemented (probably)    |        2       |        2%       |
| Implemented                   |        0       |        0%       |
| No response                   |       68       |       81%       |
| Total companies/orgs surveyed |       84       |      100%       |
+-------------------------------+----------------+-----------------+

Table 2: Vendor survey on the byte-mode drop variant of RED (lower
drop probability for small packets)

Where reasons have been given, the extra complexity of packet-size
bias code has been the most prevalent, though one vendor had a more
principled reason for avoiding it--similar to the argument of this
document.  We have established that Linux does not implement RED with
packet-size drop bias, although we have not investigated a wider
range of open source code.

Finally, we repeat that RED's byte-mode drop is not the only way to
bias towards small packets--tail-drop tends to lock out large packets
very effectively.  Our survey was of vendor implementations, so we
cannot be certain about operator deployment.  But we believe many
queues in the Internet are still tail-drop.  The author's own company
(BT) has widely deployed RED, but there are bound to be many tail-
drop queues, particularly in access network equipment and on
middleboxes like firewalls, where RED is not always available.
Routers using a memory architecture based on fixed size buffers with
borrowing may also still be prevalent in the Internet.  As explained
in Section 6.2.1, these also provide a marginal (but legitimate) bias
towards small packets.  So even though RED byte-mode drop is not
prevalent, it is likely there is still some bias towards small
packets in the Internet due to tail drop and fixed-buffer borrowing.

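To make precise the variant of RED that the survey asked about, here
is a minimal sketch in Python of the drop decision, contrasting
packet-mode drop with the linear byte-mode drop variant (RED_4 in the
terminology used above).  It is a deliberate simplification, assuming
the scaling of drop probability by packet size relative to a
configured maximum; it omits RED's queue averaging and threshold
logic, and the constant name is our own:

   import random

   MAX_PKT_SIZE = 1500   # bytes; the configured 'maximum' size

   def red_drop(base_p, pkt_size, byte_mode=False):
       """Return True if the packet should be dropped.
       base_p: drop probability that RED has already computed from
               the (possibly byte-mode) queue measurement.
       byte_mode: if True, scale the probability by packet size --
               the byte-mode drop variant this memo argues against."""
       p = base_p * pkt_size / MAX_PKT_SIZE if byte_mode else base_p
       return random.random() < min(p, 1.0)

   # With byte_mode=True, a 60B packet is dropped 25x less often
   # than a 1500B packet at the same queue length -- the perverse
   # incentive to use tiny packets that Sections 2.2 and 8 describe.

Note that only the drop decision is at issue here; measuring the
queue length itself in bytes (byte-mode queue measurement) is
orthogonal and is not deprecated by this memo.
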
7.  Outstanding Issues and Next Steps

7.1.  Bit-congestible World

For a connectionless network with nearly all resources being bit-
congestible, we believe the recommended position is now unarguably
clear--the network should not make allowance for packet sizes and the
transport should.  This leaves two outstanding issues:

o  How to handle any legacy of AQM with byte-mode drop that is
   already deployed;

o  The need to start a programme of updates to transport congestion
   control protocol standards to take account of packet size.

The sample of returns from our vendor survey (Section 6.2.4) suggests
that byte-mode packet drop seems not to be implemented at all, let
alone deployed, or if it is, it is likely to be very sparse.
Therefore, we hardly need a migration strategy: there is all but
nothing to migrate away from.

A programme of standards updates to take account of packet size in
transport congestion control protocols has started with TFRC-SP
[RFC4828], while weighted TCPs implemented in the research community
[WindowPropFair] could form the basis of a future change to TCP
congestion control [RFC5681] itself.

7.2.  Bit- & Packet-congestible World

Nonetheless, a connectionless network with both bit-congestible and
packet-congestible resources is a different matter.  If we believe we
should allow for this possibility in the future, this space contains
a truly open research issue.

The idealised wire protocol coding described in Section 5 requires at
least two flags for congestion of bit-congestible and packet-
congestible resources.  This hides a fundamental problem--much more
fundamental than whether we can magically create header space for yet
another ECN flag in IPv4, or whether it would work while being
deployed incrementally.  A congestion notification protocol must
survive a transition from low levels of congestion to high.  Marking
two states is feasible with explicit marking, but much harder if
packets are dropped.  Also, it will not always be cost-effective to
implement AQM at every low-level resource, so drop will often have to
suffice.  Distinguishing drop from delivery naturally provides just
one congestion flag--it is hard to drop a packet in two ways that are
distinguishable remotely.  This is a similar problem to that of
distinguishing wireless transmission losses from congestive losses.

We should also note that, strictly, packet-congestible resources are
actually cycle-congestible, because load also depends on the
complexity of each look-up and on whether the pattern of arrivals is
amenable to caching.  Further, this reminds us that any solution must
not require a forwarding engine to use excessive processor cycles in
order to decide how to say it has no spare processor cycles.

Recently, the dual resource queue (DRQ) proposal [DRQ] has been made
on the premise that, as network processors become more cost-
effective, per-packet operations will become more complex
(irrespective of whether more function in the network layer is
desirable).  Consequently the premise is that CPU congestion will
become more common.  DRQ is a proposed modification to the RED
algorithm that folds both bit congestion and packet congestion into
one signal (either loss or ECN).

The problem of signalling packet-processing congestion is not
pressing, as most Internet resources are designed to be bit-
congestible before packet processing starts to congest.  However, the
IRTF Internet Congestion Control Research Group (ICCRG) has set
itself the task of reaching consensus on generic forwarding
mechanisms that are necessary and sufficient to support the
Internet's future congestion control requirements (the first
challenge in
[I-D.irtf-iccrg-welzl-congestion-control-open-research]).  Therefore,
rather than giving this problem no thought at all just because it is
hard and currently hypothetical, we defer the question of whether
packet congestion might become common, and of what to do if it does,
to the IRTF (the 'Small Packets' challenge in
[I-D.irtf-iccrg-welzl-congestion-control-open-research]).

8.  Security Considerations

This draft recommends that queues do not bias drop probability
towards small packets, as this creates a perverse incentive for
transports to break down their flows into tiny segments.  One of the
benefits of implementing AQM was meant to be to remove the perverse
incentive that drop-tail queues gave to small packets.  Of course, if
transports really want to make the greatest gains, they do not have
to respond to congestion at all.  But we do not want applications
that are trying to behave well to discover that they can go faster by
using smaller packets.

In practice, transports cannot all be trusted to respond to
congestion.  So another reason for recommending that queues do not
bias drop probability towards small packets is to avoid the
vulnerability to small-packet DDoS attacks that would otherwise
result.  One of the benefits of implementing AQM was meant to be to
remove drop-tail's DoS vulnerability to small packets, so we should
not add it back again.

If most queues implemented AQM with byte-mode drop, the resulting
network would amplify the potency of a small-packet DDoS attack.  At
the first queue the stream of packets would push aside a greater
proportion of large packets, so more of the small packets would
survive to attack the next queue.  Thus a flood of small packets
would continue on towards the destination, pushing regular traffic
with large packets out of the way in one queue after the next, while
itself suffering much less drop.

Appendix C explains why the ability of networks to police the
response of _any_ transport to congestion depends on bit-congestible
network resources doing only packet-mode drop, not byte-mode drop.
In summary, it says that making drop probability depend on the size
of the packets that bits happen to be divided into simply encourages
the bits to be divided into smaller packets.  Byte-mode drop would
therefore irreversibly complicate any attempt to fix the Internet's
incentive structures.

9.  Conclusions

The strong conclusion is that AQM algorithms such as RED SHOULD NOT
use byte-mode drop.  More generally, the Internet's congestion
notification protocols (drop, ECN & PCN) SHOULD take account of
packet size when the notification is read by the transport layer, NOT
when it is written by the network layer.  This approach offers
sufficient and correct congestion information for all known and
future transport protocols, and it also ensures that no perverse
incentives are created that would encourage transports to use
inappropriately small packet sizes.

The alternative of deflating RED's drop probability for smaller
packet sizes (byte-mode drop) has no enduring advantages.  It is more
complex, it creates the perverse incentive to fragment segments into
tiny pieces, and it reopens the vulnerability to floods of small
packets that drop-tail queues suffered from and that AQM was designed
to remove.  Byte-mode drop is a change to the network layer that
makes allowance for an omission from the design of TCP, effectively
reverse engineering the network layer to contrive to make two TCPs
with different packet sizes run at equal bit rates (rather than equal
packet rates) under the same path conditions.  It also improves TCP
performance by reducing the chance that a SYN or a pure ACK will be
dropped, because they are small.  But we SHOULD NOT hack the network
layer to improve or fix certain transport protocols.  No matter how
predominant a transport protocol is (even if it is TCP), trying to
correct for its failings by biasing towards small packets in the
network layer creates a perverse incentive to break down all flows
from all transports into tiny segments.

So far, our survey of 84 vendors across the industry has drawn
responses from about 19%, none of whom have implemented the byte-mode
packet drop variant of RED.  Given there appears to be little, if
any, installed base, it seems we can recommend removal of byte-mode
drop from RED with little, if any, incremental deployment impact.

If a vendor has implemented byte-mode drop and an operator has turned
it on, it is strongly RECOMMENDED that it be turned off.  Note that
RED as a whole SHOULD NOT be turned off, as without it a tail-drop
queue also biases against large packets.  But note also that turning
off byte-mode drop may alter the relative performance of applications
using different packet sizes, so it would be advisable to establish
the implications before turning it off.

Instead, the IETF Transport Area should continue its programme of
updating congestion control protocols to take account of packet size
and to make transports less sensitive to losing control packets like
SYNs and pure ACKs.

NOTE WELL that RED's byte-mode queue measurement is fine, being
completely orthogonal to byte-mode drop.  If a RED implementation has
a byte-mode but does not specify what sort of byte-mode, it is most
probably byte-mode queue measurement, which is fine.  However, if in
doubt, the vendor should be consulted.

The above conclusions cater for the Internet as it is today, with
most, if not all, resources being primarily bit-congestible.  A
secondary conclusion of this memo is that we may see more packet-
congestible resources in the future, so research may be needed to
extend the Internet's congestion notification (drop or ECN) so that
it can handle a mix of bit-congestible and packet-congestible
resources.

10.  Acknowledgements

Thank you to Sally Floyd, who gave extensive and useful review
comments.  Also thanks for the reviews from Philip Eardley, Toby
Moncaster and Arnaud Jacquet, as well as for helpful explanations of
different hardware approaches from Larry Dunn and Fred Baker.  I am
grateful to Bruce Davie and his colleagues for providing a timely and
efficient survey of RED implementation in Cisco's product range.
Also grateful thanks to Toby Moncaster, Will Dormann, John Regnault,
Simon Carter and Stefaan De Cnodder, who further helped survey the
current status of RED implementation and deployment, and, finally,
thanks to the anonymous individuals who responded.

Bob Briscoe is partly funded by Trilogy, a research project (ICT-
216372) supported by the European Community under its Seventh
Framework Programme.  The views expressed here are those of the
author only.

11.  Comments Solicited

Comments and questions are encouraged and very welcome.  They can be
addressed to the IETF Transport Area working group mailing list,
and/or to the authors.

12.  References

12.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
           S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
           Partridge, C., Peterson, L., Ramakrishnan, K., Shenker,
           S., Wroclawski, J., and L. Zhang, "Recommendations on
           Queue Management and Congestion Avoidance in the
           Internet", RFC 2309, April 1998.

[RFC3168]  Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
           of Explicit Congestion Notification (ECN) to IP",
           RFC 3168, September 2001.

[RFC3426]  Floyd, S., "General Architectural and Policy
           Considerations", RFC 3426, November 2002.

[RFC5033]  Floyd, S. and M. Allman, "Specifying New Congestion
           Control Algorithms", BCP 133, RFC 5033, August 2007.

12.2.  Informative References

[CCvarPktSize]  Widmer, J., Boutremans, C., and J-Y. Le Boudec,
           "Congestion Control for Flows with Variable Packet Size",
           ACM CCR 34(2) 137--151, 2004.

[DRQ]      Shin, M., Chong, S., and I. Rhee, "Dual-Resource TCP/AQM
           for Processing-Constrained Networks", IEEE/ACM
           Transactions on Networking Vol 16, issue 2, April 2008.

[DupTCP]   Wischik, D., "Short messages", Royal Society workshop on
           networks: modelling and control, September 2007.

[ECNFixedWireless]  Siris, V., "Resource Control for Elastic Traffic
           in CDMA Networks", Proc. ACM MOBICOM'02, September 2002.

[Evol_cc]  Gibbens, R. and F. Kelly, "Resource pricing and the
           evolution of congestion control", Automatica 35(12)
           1969--1985, December 1999.

[I-D.briscoe-tsvwg-re-ecn-tcp]  Briscoe, B., Jacquet, A., Moncaster,
           T., and A. Smith, "Re-ECN: Adding Accountability for
           Causing Congestion to TCP/IP",
           draft-briscoe-tsvwg-re-ecn-tcp-07 (work in progress),
           March 2009.

[I-D.floyd-tcpm-ackcc]  Floyd, S., "Adding Acknowledgement Congestion
           Control to TCP", draft-floyd-tcpm-ackcc-06 (work in
           progress), July 2009.

[I-D.ietf-pcn-marking-behaviour]  Eardley, P., "Metering and marking
           behaviour of PCN-nodes",
           draft-ietf-pcn-marking-behaviour-05 (work in progress),
           August 2009.

[I-D.ietf-tcpm-ecnsyn]  Floyd, S., "Adding Explicit Congestion
           Notification (ECN) Capability to TCP's SYN/ACK Packets",
           draft-ietf-tcpm-ecnsyn-10 (work in progress), May 2009.

[I-D.irtf-iccrg-welzl-congestion-control-open-research]  Welzl, M.,
           Scharf, M., Briscoe, B., and D. Papadimitriou, "Open
           Research Issues in Internet Congestion Control",
           draft-irtf-iccrg-welzl-congestion-control-open-research-05
           (work in progress), September 2009.

[IOSArch]  Bollapragada, V., White, R., and C. Murphy, "Inside Cisco
           IOS Software Architecture", Cisco Press: CCIE Professional
           Development, ISBN13: 978-1-57870-181-0, July 2000.

[MulTCP]   Crowcroft, J. and Ph. Oechslin, "Differentiated End to End
           Internet Services using a Weighted Proportional Fair
           Sharing TCP", CCR 28(3) 53--69, July 1998.

[PktSizeEquCC]  Vasallo, P., "Variable Packet Size Equation-Based
           Congestion Control", ICSI Technical Report tr-00-008,
           2000.

[RED93]    Floyd, S. and V. Jacobson, "Random Early Detection (RED)
           gateways for Congestion Avoidance", IEEE/ACM Transactions
           on Networking 1(4) 397--413, August 1993.

[REDbias]  Eddy, W. and M. Allman, "A Comparison of RED's Byte and
           Packet Modes", Computer Networks 42(3) 261--280,
           June 2003.

[REDbyte]  De Cnodder, S., Elloumi, O., and K. Pauwels, "RED behavior
           with different packet sizes", Proc. 5th IEEE Symposium on
           Computers and Communications (ISCC) 793--799, July 2000.

[RFC2474]  Nichols, K., Blake, S., Baker, F., and D. Black,
           "Definition of the Differentiated Services Field (DS
           Field) in the IPv4 and IPv6 Headers", RFC 2474,
           December 1998.

[RFC3448]  Handley, M., Floyd, S., Padhye, J., and J. Widmer, "TCP
           Friendly Rate Control (TFRC): Protocol Specification",
           RFC 3448, January 2003.

[RFC3714]  Floyd, S. and J. Kempf, "IAB Concerns Regarding Congestion
           Control for Voice Traffic in the Internet", RFC 3714,
           March 2004.

[RFC4782]  Floyd, S., Allman, M., Jain, A., and P. Sarolahti, "Quick-
           Start for TCP and IP", RFC 4782, January 2007.

[RFC4828]  Floyd, S. and E. Kohler, "TCP Friendly Rate Control
           (TFRC): The Small-Packet (SP) Variant", RFC 4828,
           April 2007.

[RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
           Control", RFC 5681, September 2009.

[Rate_fair_Dis]  Briscoe, B., "Flow Rate Fairness: Dismantling a
           Religion", ACM CCR 37(2) 63--74, April 2007.

[WindowPropFair]  Siris, V., "Service Differentiation and Performance
           of Weighted Window-Based Congestion Control and Packet
           Marking Algorithms in ECN Networks", Computer
           Communications 26(4) 314--326, 2002.

[gentle_RED]  Floyd, S., "Recommendation on using the "gentle_"
           variant of RED", Web page, March 2000.

[pBox]     Floyd, S. and K. Fall, "Promoting the Use of End-to-End
           Congestion Control in the Internet", IEEE/ACM Transactions
           on Networking 7(4) 458--472, August 1999.

[pktByteEmail]  Floyd, S., "RED: Discussions of Byte and Packet
           Modes", email, March 1997.

[xcp-spec]  Falk, A., "Specification for the Explicit Control
           Protocol (XCP)", draft-falk-xcp-spec-03 (work in progress;
           expired), July 2007.

Editorial Comments

[Note_Variation]  The algorithm of the byte-mode drop variant of RED
           switches off any bias towards small packets whenever the
           smoothed queue length dictates that the drop probability
           of large packets should be 100%.  In the example in the
           Introduction, as the large packet drop probability varies
           around 25%, the small packet drop probability will vary
           around 1%, but with occasional jumps to 100% whenever the
           instantaneous queue (after drop) manages to sustain a
           length above the 100% drop point for longer than the queue
           averaging period.

Appendix A.  Example Scenarios

A.1.  Notation

To prove that our idealised wire protocol (Section 5) is correct, we
will compare two flows with different packet sizes, s_1 and s_2
[bit/pkt], to make sure their transports each see the correct
congestion notification.  Initially, within each flow we will take
all packets as having equal sizes, but later we will generalise to
flows within which packet sizes vary.  A flow's bit rate, x [bit/s],
is related to its packet rate, u [pkt/s], by

   x(t) = s.u(t).

We will consider a 2x2 matrix of four scenarios:

+-----------------------------+------------------+------------------+
| resource type and           | A) Equal bit     | B) Equal pkt     |
| congestion level            |    rates         |    rates         |
+-----------------------------+------------------+------------------+
| i)  bit-congestible, p_b    |       (Ai)       |       (Bi)       |
| ii) pkt-congestible, p_p    |       (Aii)      |       (Bii)      |
+-----------------------------+------------------+------------------+

Table 3

A.2.  Bit-congestible resource, equal bit rates (Ai)

Starting with the bit-congestible scenario, for two flows to maintain
equal bit rates (Ai) the ratio of the packet rates must be the
inverse of the ratio of packet sizes: u_2/u_1 = s_1/s_2.  So, for
instance, a flow of 60B packets would have to send 25x more packets
to achieve the same bit rate as a flow of 1500B packets.  If a
congested resource marks proportion p_b of packets irrespective of
size, the ratio of marked packets received by each transport will
still be the same as the ratio of their packet rates, p_b.u_2/p_b.u_1
= s_1/s_2.  So of the 25x more 60B packets sent, 25x more will be
marked than in the 1500B packet flow, but 25x more will be unmarked
too.

In this scenario, the resource is bit-congestible, so it always uses
our idealised bit-congestion field when it marks packets.  Therefore
the transport should count marked bytes, not packets.  But for ratio-
based transports like TCP it does not actually matter (Section 5).
The ratio of marked to unmarked bytes seen by each flow will be p_b,
as will the ratio of marked to unmarked packets.  Because they are
ratios, the units cancel out.

If a flow sent an inconsistent mixture of packet sizes, we have said
it should count the ratio of marked to unmarked bytes, not packets,
in order to decode the level of congestion correctly.  But actually,
if all it is trying to do is decode p_b, it still does not matter.
For instance, imagine the two equal-bit-rate flows were actually one
flow at twice the bit rate, sending a mixture of one 1500B packet for
every twenty-five 60B packets.  25x more small packets will be marked
and 25x more will be unmarked.  The transport can still calculate p_b
whether it uses bytes or packets for the ratio.  In general, for any
algorithm that works on a ratio of marks to non-marks, either bytes
or packets can be counted interchangeably, because the choice cancels
out in the ratio calculation; the simulation sketch at the end of
this subsection demonstrates this numerically.

However, where an absolute target rather than a relative volume of
congestion caused is important (Section 5), as it is for congestion
accountability [Rate_fair_Dis], the transport must count marked
bytes, not packets, in this bit-congestible case.  Aside from the
goal of congestion accountability, this is how the bit rate of a
transport can be made independent of packet size: by ensuring the
rate of congestion caused is kept to a constant weight
[WindowPropFair], rather than merely responding to the ratio of
marked and unmarked bytes.

Note that the unit of byte-congestion-volume is the byte.

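The following is a minimal simulation sketch in Python (our own
illustration; the flow lengths, marking probability and random seed
are arbitrary assumptions).  It marks packets with probability p_b,
as a bit-congestible resource using packet-mode marking would, and
shows that decoding the marking ratio in packets or in bytes gives
the same answer, for consistently sized and mixed-size flows alike:

   import random

   rng = random.Random(1)
   p_b = 0.25   # marking probability at the bit-congestible queue

   def decode(sizes):
       """Mark each packet independently with probability p_b, then
       decode congestion both ways: as the fraction of marked
       packets and as the fraction of marked bytes."""
       marked = [(s, rng.random() < p_b) for s in sizes]
       pkt_ratio = sum(1 for s, m in marked if m) / len(marked)
       byte_ratio = (sum(s for s, m in marked if m)
                     / sum(s for s, m in marked))
       return pkt_ratio, byte_ratio

   # Two equal-bit-rate flows (scenario Ai): 25x more 60B packets.
   print(decode([60] * 250000))    # -> (~0.25, ~0.25)
   print(decode([1500] * 10000))   # -> (~0.25, ~0.25)

   # One flow at twice the bit rate, mixing one 1500B packet with
   # every twenty-five 60B packets: both ratios still decode p_b.
   print(decode(([1500] + [60] * 25) * 10000))

By contrast, the absolute count of marked bytes is what congestion
accountability needs: in expectation, both equal-bit-rate flows above
accumulate the same volume of marked bytes (0.25 x 15M bytes), even
though the small-packet flow sees 25x more marked packets.
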
A.3.  Bit-congestible resource, equal packet rates (Bi)

If two flows send different packet sizes but at the same packet rate,
their bit rates will be in the same ratio as their packet sizes,
x_2/x_1 = s_2/s_1.  For instance, a flow sending 1500B packets at the
same packet rate as another sending 60B packets will be sending at
25x the bit rate.  In this case, if a congested resource marks
proportion p_b of packets irrespective of size, the ratio of packets
received with the byte-congestion field marked by each transport will
be the same, p_b.u_2/p_b.u_1 = 1.

Because the byte-congestion field is marked, the transport should
count marked bytes, not packets.  But because each flow sends
consistently sized packets, it still does not matter for ratio-based
transports.  The ratio of marked to unmarked bytes seen by each flow
will be p_b, as will the ratio of marked to unmarked packets.
Therefore, if the congestion control algorithm is concerned only with
the ratio of marked to unmarked packets (as TCP is), both flows will
be able to decode p_b correctly, whether they count packets or bytes.

But if the absolute volume of congestion is important, e.g. for
congestion accountability, the transport must count marked bytes, not
packets.  Then the lower-bit-rate flow using smaller packets will
rightly be perceived as causing less byte-congestion, even though its
packet rate is the same.

If the two flows are mixed into one, of bit rate x_1+x_2, with equal
packet rates of each size of packet, the ratio p_b will still be
measurable by counting the ratio of marked to unmarked bytes (or
packets, because the ratio cancels out the units).  However, if the
absolute volume of congestion is required, the transport must count
the sum of congestion-marked bytes, which indeed gives a correct
measure of the rate of byte-congestion, p_b(x_1 + x_2), caused by the
combined bit rate.

A.4.  Pkt-congestible resource, equal bit rates (Aii)

Moving to the case of packet-congestible resources, we now take two
flows that send different packet sizes at the same bit rate, but this
time the pkt-congestion field is marked by the resource with
probability p_p.  As in scenario Ai (the same bit rates, but a bit-
congestible resource), the flow with smaller packets will have a
higher packet rate, so more packets will be both marked and unmarked,
but in the same proportion.

This time, the transport should only count marks, without taking
packet sizes into account.  Transports will get the same result, p_p,
by decoding the ratio of marked to unmarked packets in either flow.

But if the transport is interested in the absolute amount of packet
congestion, it should just count how many marked packets arrive.  For
instance, a flow sending 60B packets will see 25x more marked packets
than one sending 1500B packets at the same bit rate, because it is
sending more packets through a packet-congestible resource.

If one flow imitates the two flows merged together, its bit rate will
double, with more small packets than large ones.  The ratio of marked
to unmarked packets will still be p_p.  But if the absolute number of
pkt-congestion-marked packets is counted, it will accumulate at the
combined packet rate times the marking probability, p_p(u_1+u_2), 26x
faster than packet congestion accumulates in the single 1500B packet
flow of our example, as required.

Note that the unit of packet congestion is a packet.

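A compact check of this arithmetic, in the same illustrative Python
style as the earlier sketch (the packet rates are the example values
used throughout this appendix):

   # Scenario Aii: two flows at equal bit rates send 60B and 1500B
   # packets through a packet-congestible resource that marks the
   # pkt-congestion field with probability p_p, regardless of size.
   p_p = 0.25
   u_small, u_large = 250000, 10000   # equal-bit-rate pkt rates
   small_marks = p_p * u_small        # expected marks per second
   large_marks = p_p * u_large

   # Both flows decode the same ratio p_p, but the small-packet
   # flow rightly accumulates 25x more absolute packet congestion,
   # because it loads the packet-congestible resource 25x harder.
   print(small_marks / large_marks)   # -> 25.0
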
A.5.  Pkt-congestible resource, equal packet rates (Bii)

Finally, if two flows with the same packet rate pass through a
packet-congestible resource, they will both suffer the same
proportion of marking, p_p, irrespective of their packet sizes.  On
detecting that the pkt-congestion field is marked, the transport
should count packets, and it will be able to extract the ratio p_p of
marked to unmarked packets from both flows, irrespective of packet
sizes.

Even if the transport is monitoring the absolute amount of packet
congestion over a period, it will still see the same amount of packet
congestion from either flow.

And if the two equal packet rates of different size packets are mixed
together in one flow, the packet rate will double, so the absolute
volume of packet-congestion will accumulate at twice the rate of
either flow, 2p_p.u_1 = p_p(u_1+u_2).

Appendix B.  Congestion Notification Definition: Further
             Justification

In Section 3, on the definition of congestion notification, load
rather than capacity was used as the denominator.  This also has a
subtle significance in the related debate over the design of new
transport protocols--typical new protocol designs (e.g. XCP
[xcp-spec] and Quick-Start [RFC4782]) expect the sending transport to
communicate its desired flow rate to the network, and network
elements to progressively subtract from this, so that the achievable
flow rate emerges at the receiving transport.

Congestion notification with total load in the denominator can serve
a similar purpose (though in retrospect, not in advance like XCP and
Quick-Start).  Congestion notification is a dimensionless fraction,
but each source can extract the necessary rate information from it,
because it already knows what its own rate is.  Even though
congestion notification does not communicate a rate explicitly, from
each source's point of view it represents the fraction of the rate
the source was sending a round trip ago that could not (or would not)
be served by the available resources.  After they were sent, all
these fractions of each source's offered load added up to the
aggregate fraction of offered load seen by the congested resource.
So the source can also know the total excess rate, by multiplying the
total load by the congestion level.  Therefore congestion
notification, as one scale-free dimensionless fraction, implicitly
communicates the instantaneous excess flow rate, albeit an RTT ago.
The sketch below makes this concrete.

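A small worked example in Python (the congestion level and bit rates
are invented purely for illustration):

   # Each source multiplies the congestion fraction it observed
   # over the last round trip by its own bit rate; summed over
   # sources, this reconstructs the aggregate excess rate at the
   # congested resource.
   congestion = 0.02                  # fraction marked or dropped
   rates = [2e6, 10e6, 0.5e6]         # sources' bit rates [bit/s]

   shares = [x * congestion for x in rates]
   print(shares)                      # each source's excess share
   print(sum(rates) * congestion)     # total excess rate [bit/s],
                                      # albeit an RTT after the event
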
Appendix C.  Byte-mode Drop Complicates Policing Congestion Response

This appendix explains why the ability of networks to police the
response of _any_ transport to congestion depends on bit-congestible
network resources doing only packet-mode drop, not byte-mode drop.

To be able to police a transport's response to congestion when
fairness can only be judged over time and over all of an individual's
flows, the policer has to have an integrated view of all the
congestion an individual (not just one flow) has caused, due to all
the traffic entering the Internet from that individual.  This is
termed congestion accountability.

But a byte-mode drop algorithm has to depend on the local MTU of the
line--such an algorithm needs to use some concept of a 'normal'
packet size.  Therefore, one dropped or marked packet is not
necessarily equivalent to another unless you know the MTU at the
queue where it was dropped or marked.  To have an integrated view of
a user, we believe congestion policing has to be located at an
individual's attachment point to the Internet
[I-D.briscoe-tsvwg-re-ecn-tcp].  But from there it cannot know the
MTU of each remote queue that caused each drop or mark.  Therefore it
cannot take an integrated approach to policing all the responses to
congestion of all the transports of one individual.  Therefore it
cannot police anything.

The security/incentive argument _for_ packet-mode drop is similar.
Firstly, confining RED to packet-mode drop would not preclude
bottleneck policing approaches such as [pBox], as it seems likely
they could work just as well by monitoring the volume of dropped
bytes rather than dropped packets.  Secondly, packet-mode dropping
and marking naturally allow the congestion notification of packets to
be globally meaningful, without relying on MTU information held
elsewhere.

Because we recommend that a dropped or marked packet should be taken
to mean that all the bytes in the packet are dropped or marked, a
policer can remain robust against bits being re-divided into
different sizes of packet or across different sizes of flow
[Rate_fair_Dis].  Therefore policing would work naturally with just
simple packet-mode drop in RED.

In summary, making drop probability depend on the size of the packets
that bits happen to be divided into simply encourages the bits to be
divided into smaller packets.  Byte-mode drop would therefore
irreversibly complicate any attempt to fix the Internet's incentive
structures.

Author's Address

Bob Briscoe
BT
B54/77, Adastral Park
Martlesham Heath
Ipswich  IP5 3RE
UK

Phone: +44 1473 645196
Email: bob.briscoe@bt.com
URI:   http://bobbriscoe.net/