Internet Draft                                             Bob Braden
Expiration: September 1997                                 USC/ISI
File: draft-irtf-e2e-queue-mgt-00.txt                      Dave Clark
                                                           MIT LCS
                                                           Jon Crowcroft
                                                           UCL
                                                           Bruce Davie
                                                           Cisco Systems
                                                           Steve Deering
                                                           Cisco Systems
                                                           Deborah Estrin
                                                           USC
                                                           Sally Floyd
                                                           LBNL
                                                           Van Jacobson
                                                           LBNL
                                                           Greg Minshall
                                                           Ipsilon
                                                           Craig Partridge
                                                           BBN
                                                           Larry Peterson
                                                           University of Arizona
                                                           K. K. Ramakrishnan
                                                           ATT Labs Research
                                                           Scott Shenker
                                                           Xerox PARC
                                                           John Wroclawski
                                                           MIT LCS
                                                           Lixia Zhang
                                                           UCLA

        Recommendations on Queue Management and Congestion Avoidance
                             in the Internet

                              March 25, 1997

Status of Memo

This document is an Internet-Draft. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

To learn the current status of any Internet-Draft, please check the "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow Directories on ds.internic.net (US East Coast), nic.nordu.net (Europe), ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific Rim).

Abstract

This memo presents two recommendations to the Internet community concerning measures to improve and preserve Internet performance.
It presents a strong recommendation for testing, standardization, and widespread deployment of active queue management in routers, to improve the performance of today's Internet. It also urges a concerted effort of research, measurement, and ultimate deployment of router mechanisms to protect the Internet from flows that are not sufficiently responsive to congestion notification.

1. INTRODUCTION

The Internet protocol architecture is based on a connectionless end-to-end packet service using the IP protocol. The advantages of its connectionless design, flexibility and robustness, have been amply demonstrated. However, these advantages are not without cost: careful design is required to provide good service under heavy load. In fact, lack of attention to the dynamics of packet forwarding can result in severe service degradation or "Internet meltdown". This phenomenon was first observed during the early growth phase of the Internet of the mid 1980s [Nagle84], and is technically called "congestion collapse".

The original fix for Internet meltdown was provided by Van Jacobson. Beginning in 1986, Jacobson developed the congestion avoidance mechanisms that are now required in TCP implementations [Jacobson88, HostReq89]. These mechanisms operate in the hosts to cause TCP connections to "back off" during congestion. We say that TCP flows are "responsive" to congestion signals (i.e., dropped packets) from the network. It is primarily these TCP congestion avoidance algorithms that prevent the congestion collapse of today's Internet.

However, that is not the end of the story. Considerable research has been done on Internet dynamics since 1988, and the Internet has grown. It has become clear that the TCP congestion avoidance mechanisms, while necessary and powerful, are not sufficient to provide good service in all circumstances. Basically, there is a limit to how much control can be accomplished from the edges of the network. Some mechanisms are needed in the routers to complement the endpoint congestion avoidance mechanisms.

It is useful to distinguish between two classes of router algorithms related to congestion control: "queue management" versus "scheduling" algorithms. To a rough approximation, queue management algorithms manage the length of packet queues by dropping packets when necessary or appropriate, while scheduling algorithms determine which packet to send next and are used primarily to manage the allocation of bandwidth among flows. While these two router mechanisms are closely related, they address rather different performance issues.

This memo highlights two router performance issues. The first issue is the need for an advanced form of router queue management that we call "active queue management." Section 2 summarizes the benefits that active queue management can bring. Section 3 describes a recommended active queue management mechanism, called Random Early Detection or "RED". We expect that the RED algorithm can be used with a wide variety of scheduling algorithms, can be implemented relatively efficiently, and will provide significant Internet performance improvement.

The second issue, discussed in Section 4 of this memo, is the potential for future congestion collapse of the Internet due to flows that are unresponsive, or not sufficiently responsive, to congestion indications.
Unfortunately, there is no consensus solution to controlling congestion caused by such aggressive flows; significant research and engineering will be required before any solution will be available. It is imperative that this work be energetically pursued, to ensure the future stability of the Internet.

Section 5 concludes the memo with a set of recommendations to the IETF concerning these topics.

The discussion in this memo applies to "best-effort" traffic. The Internet integrated services architecture, which provides a mechanism for protecting individual flows from congestion, introduces its own queue management and scheduling algorithms [Shenker96, Wroclawski96]. However, we do not expect deployment of integrated services to significantly diminish the importance of the best-effort traffic issues discussed in this memo.

Preparation of this memo resulted from past discussions of end-to-end performance, Internet congestion, and RED in the End-to-End Research Group of the Internet Research Task Force (IRTF).

2. THE NEED FOR ACTIVE QUEUE MANAGEMENT

The traditional technique for managing router queue lengths is to set a maximum length (in terms of packets) for each queue, accept packets for the queue until the maximum length is reached, then reject (drop) subsequent incoming packets until the queue decreases because a packet from the queue has been transmitted. This technique is known as "tail drop", since the packet that arrived most recently (i.e., the one on the tail of the queue) is dropped when the queue is full. This method has served the Internet well for years, but it has two important drawbacks.

1. Lock-Out

   In some situations tail drop allows a single connection or a few flows to monopolize queue space, preventing other connections from getting room in the queue. This "lock-out" phenomenon is often the result of synchronization or other timing effects.

2. Full Queues

   The tail drop discipline allows queues to maintain a full (or, almost full) status for long periods of time, since tail drop signals congestion (via a packet drop) only when the queue has become full. It is important to reduce the steady-state queue size, and this is perhaps queue management's most important goal.

   The naive assumption might be that there is a simple tradeoff between delay and throughput, and that the recommendation that queues be maintained in a "non-full" state essentially translates to a recommendation that low end-to-end delay is more important than high throughput. However, this does not take into account the critical role that packet bursts play in Internet performance. Even though TCP constrains a flow's window size, packets often arrive at routers in bursts [Leland94]. If the queue is full or almost full, an arriving burst will cause multiple packets to be dropped. This can result in a global synchronization of flows throttling back, followed by a sustained period of lowered link utilization, reducing overall throughput.

   The point of buffering in the network is to absorb data bursts and to transmit them during the (hopefully) ensuing bursts of silence. This is essential to permit the transmission of bursty data. It should be clear why we would like to have normally-small queues in routers: we want to have queue capacity to absorb the bursts.
   The counter-intuitive result is that maintaining normally-small queues can result in higher throughput as well as lower end-to-end delay. In short, queue limits should not reflect the steady state queues we want maintained in the network; instead, they should reflect the size of bursts we need to absorb.

Besides tail drop, two alternative queue disciplines that can be applied when the queue becomes full are "random drop on full" or "drop front on full". Under the random drop on full discipline, a router drops a randomly selected packet from the queue (which can be an expensive operation, since it naively requires an O(N) walk through the packet queue) when the queue is full and a new packet arrives. Under the "drop front on full" discipline [Lakshman96], the router drops the packet at the front of the queue when the queue is full and a new packet arrives. Both of these solve the lock-out problem, but neither solves the full-queues problem described above.
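For concreteness, the following C fragment is a purely illustrative sketch of the two "drop on full" disciplines just described; it is not part of this memo's recommendations, and the queue limit and structure names are hypothetical. Note that both disciplines act only once the hard limit has been reached, which is why neither keeps the steady-state queue small.

   /*
    * Illustrative sketch only: a minimal FIFO with a hard limit, showing
    * the "tail drop" and "drop front on full" disciplines.
    */
   #include <stdio.h>
   #include <string.h>

   #define QLIMIT 8                    /* maximum queue length, in packets */

   struct queue {
       int pkts[QLIMIT];               /* packet identifiers, oldest first */
       int len;
   };

   /* Tail drop: when the queue is full, the arriving packet is discarded. */
   static int enqueue_taildrop(struct queue *q, int pkt)
   {
       if (q->len == QLIMIT)
           return -1;                  /* drop the newcomer */
       q->pkts[q->len++] = pkt;
       return 0;
   }

   /* Drop front on full: when the queue is full, discard the packet at the
    * head of the queue to make room for the arriving one [Lakshman96]. */
   static int enqueue_dropfront(struct queue *q, int pkt)
   {
       if (q->len == QLIMIT) {
           memmove(&q->pkts[0], &q->pkts[1], (QLIMIT - 1) * sizeof(int));
           q->len--;                   /* oldest packet discarded */
       }
       q->pkts[q->len++] = pkt;
       return 0;
   }

   int main(void)
   {
       struct queue a = { .len = 0 }, b = { .len = 0 };
       for (int pkt = 1; pkt <= 12; pkt++) {       /* a 12-packet burst */
           enqueue_taildrop(&a, pkt);
           enqueue_dropfront(&b, pkt);
       }
       printf("tail drop keeps packets 1..%d; drop-front keeps %d..%d\n",
              a.pkts[a.len - 1], b.pkts[0], b.pkts[b.len - 1]);
       return 0;
   }
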
We know in general how to solve the full-queues problem for "responsive" flows, i.e., those flows that throttle back in response to congestion notification. The solution involves dropping packets before a queue becomes full, so that a router can control when and how many packets to drop. We call such a proactive approach "active queue management". The next section introduces RED, an active queue management mechanism that solves both problems listed above (for responsive flows).

In summary, an active queue management mechanism can provide the following advantages for responsive flows.

1. Reduce number of packets dropped in routers

   Packet bursts are just part of the networking business [Willinger95]. If all the queue space in a router is already committed to "steady state" traffic or if the buffer space is inadequate, then the router will have no ability to buffer bursts. By keeping the average queue size small, active queue management will provide greater capacity to absorb naturally-occurring bursts without dropping packets.

   Furthermore, without active queue management, more packets will be dropped when a queue does overflow. This is undesirable for several reasons. First, with a shared queue and the tail drop discipline, an unnecessary global synchronization of flows cutting back can result in lowered average link utilization, and hence lowered network throughput. Second, TCP recovers with more difficulty from a burst of packet drops than from a single packet drop. Third, unnecessary packet drops represent a possible waste of bandwidth on the way to the drop point.

2. Provide lower-delay interactive service

   By keeping the average queue size small, queue management will reduce the delays seen by flows. This is particularly important for interactive applications such as short Web transfers, Telnet traffic, or interactive audio-video sessions, whose subjective (and objective) performance is better when the end-to-end delay is low.

3. Avoid lock-out behavior

   Active queue management can prevent lock-out behavior by ensuring that there will almost always be a buffer available for an incoming packet. For the same reason, active queue management can prevent a router bias against low bandwidth but highly bursty flows.

   It is clear that lock-out is undesirable because it constitutes a gross unfairness among groups of flows. However, we stop short of calling this benefit "increased fairness", because general fairness among flows requires per-flow state, which is not provided by queue management. For example, in a router using queue management but only FIFO scheduling, two TCP flows may receive very different bandwidths simply because they have different round-trip times [Floyd91], and a flow that does not use congestion control may receive more bandwidth than a flow that does. Per-flow state to achieve general fairness might be maintained by a per-flow scheduling algorithm such as Fair Queueing (FQ) [Demers90], or a class-based scheduling algorithm such as CBQ [Floyd95], for example.

   On the other hand, active queue management is needed even for routers that use per-flow scheduling algorithms such as FQ or CBQ. This is because per-flow scheduling algorithms by themselves do nothing to control the overall queue size or the size of individual queues. Active queue management is needed to control the overall average queue sizes, so that arriving bursts can be accommodated without dropping packets. In addition, active queue management should be used to control the queue size for each individual flow or class, so that they do not experience unnecessarily high delays. Therefore, active queue management should be applied across the classes or flows as well as within each class or flow.

   In short, scheduling algorithms and queue management should be seen as complementary, not as replacements for each other. In particular, there have been implementations of queue management added to FQ, and work is in progress to add RED queue management to CBQ.

3. THE QUEUE MANAGEMENT ALGORITHM "RED"

Random Early Detection, or RED, is an active queue management algorithm for routers that will provide the Internet performance advantages cited in the previous section [RED93]. In contrast to traditional queue management algorithms, which drop packets only when the buffer is full, the RED algorithm drops arriving packets probabilistically. The probability of drop increases as the estimated average queue size grows. Note that RED responds to a time-averaged queue length, not an instantaneous one. Thus, if the queue has been mostly empty in the "recent past", RED won't tend to drop packets (unless the queue overflows, of course!). On the other hand, if the queue has recently been relatively full, indicating persistent congestion, newly arriving packets are more likely to be dropped.

The RED algorithm itself consists of two main parts: estimation of the average queue size and the decision of whether or not to drop an incoming packet.

(a) Estimation of Average Queue Size

   RED estimates the average queue size, either in the forwarding path using a simple exponentially weighted moving average (such as presented in Appendix A of [Jacobson88]), or in the background (i.e., not in the forwarding path) using a similar mechanism.

      Note: when the average queue size is computed in the forwarding path, there is a special case when a packet arrives and the queue is empty. In this case, the computation of the average queue size must take into account how much time has passed since the queue went empty. This is discussed further in [RED93].
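As an informal illustration of part (a), the following C sketch computes the exponentially weighted moving average and the idle-time correction mentioned in the note. The weight and the example parameter values below are arbitrary choices for illustration, not recommendations of this memo; see [RED93] for parameter guidance.

   /*
    * Sketch only: EWMA estimate of the average queue size, in the style
    * of [RED93].  The weight W_Q and the idle-time handling below are
    * illustrative, not normative.
    */
   #include <math.h>
   #include <stdio.h>

   #define W_Q 0.002           /* EWMA weight; small => long time constant */

   static double avg;          /* estimated average queue size, in packets */

   /* Called on each packet arrival with the instantaneous queue length. */
   static void update_avg(int qlen, double idle_time, double pkt_xmit_time)
   {
       if (qlen == 0 && idle_time > 0.0) {
           /* Queue has been empty: decay the average as if m small packets
            * had been transmitted while the link was idle (see [RED93]). */
           double m = idle_time / pkt_xmit_time;
           avg *= pow(1.0 - W_Q, m);
       } else {
           avg = (1.0 - W_Q) * avg + W_Q * (double)qlen;
       }
   }

   int main(void)
   {
       /* A burst of arrivals at queue length 20, then one arrival after
        * the queue has drained and the link has sat idle for 50 ms. */
       for (int i = 0; i < 500; i++)
           update_avg(20, 0.0, 0.0);
       printf("avg after burst = %.2f packets\n", avg);
       update_avg(0, 0.050, 0.001);     /* 50 ms idle, 1 ms per packet */
       printf("avg after idle  = %.2f packets\n", avg);
       return 0;
   }
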
(b) Packet Drop Decision

   In the second portion of the algorithm, RED decides whether or not to drop an incoming packet. It is RED's particular algorithm for dropping that results in performance improvement for responsive flows. Two RED parameters, minth (minimum threshold) and maxth (maximum threshold), figure prominently in this decision process. Minth specifies the average queue size *below which* no packets will be dropped, while maxth specifies the average queue size *above which* all packets will be dropped. As the average queue size varies from minth to maxth, packets will be dropped with a probability that varies linearly from 0 to maxp.

      Note: a simplistic method of implementing this would be to calculate a new random number at each packet arrival, then compare that number with the above probability which varies from 0 to maxp. A more efficient implementation, described in [RED93], computes a random number *once* for each dropped packet.
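An illustrative sketch of part (b) follows, using the simplistic per-arrival random test described in the note above. The threshold values are hypothetical and chosen only to make the example self-contained; a production implementation would instead use the more efficient method of [RED93].

   /*
    * Sketch only: RED drop decision with hypothetical thresholds, using
    * the simplistic per-arrival random test (not the geometric method
    * of [RED93]).
    */
   #include <stdio.h>
   #include <stdlib.h>

   #define MIN_TH  5.0     /* minth: below this average, never drop  */
   #define MAX_TH 15.0     /* maxth: above this average, always drop */
   #define MAX_P   0.10    /* maxp: drop probability at avg == maxth */

   /* Return 1 if the arriving packet should be dropped, 0 otherwise. */
   static int red_drop(double avg)
   {
       double p;

       if (avg < MIN_TH)
           return 0;
       if (avg >= MAX_TH)
           return 1;

       /* Drop probability grows linearly from 0 at minth to maxp at maxth. */
       p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH);
       return ((double)rand() / RAND_MAX) < p;
   }

   int main(void)
   {
       for (double avg = 2.0; avg <= 18.0; avg += 4.0) {
           int drops = 0;
           for (int i = 0; i < 100000; i++)
               drops += red_drop(avg);
           printf("avg=%4.1f  observed drop rate=%.3f\n",
                  avg, drops / 100000.0);
       }
       return 0;
   }
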
RED effectively controls the average queue size while still accommodating bursts of packets without loss. RED's use of randomness breaks up synchronized processes that lead to lock-out phenomena.

There have been several implementations of RED in routers, and papers have been published reporting on experience with these implementations ([Villamizar94], [Gaynor96]). Additional reports of implementation experience would be welcome.

All available empirical evidence shows that the deployment of active queue management mechanisms in the Internet would have substantial performance benefits. There are seemingly no disadvantages to using the RED algorithm, and numerous advantages. Consequently, we believe that the RED active queue management algorithm should be widely deployed.

We should note that there are some extreme scenarios for which RED will not be a cure, although it won't hurt and may still help. An example of such a scenario would be a very large number of flows, each so tiny that its fair share would be less than a single packet per RTT.

4. MANAGING AGGRESSIVE FLOWS

One of the keys to the success of the Internet has been the congestion avoidance mechanisms of TCP. Because TCP "backs off" during congestion, a large number of TCP connections can share a single, congested link in such a way that bandwidth is shared reasonably equitably among similarly situated flows. The equitable sharing of bandwidth among flows depends on the fact that all flows are running basically the same congestion avoidance algorithms, conformant with the current TCP specification [HostReq89].

We introduce the term "TCP-compatible" for a flow that behaves under congestion like a flow produced by a conformant TCP. A TCP-compatible flow is responsive to congestion notification, and in steady-state it uses no more bandwidth than a conformant TCP running under comparable conditions (drop rate, RTT, MTU, etc.).

It is convenient to divide flows into three classes: (1) TCP-compatible flows, (2) unresponsive flows, i.e., flows that do not slow down when congestion occurs, and (3) flows that are responsive but are not TCP-compatible. The last two classes contain more aggressive flows that pose significant threats to Internet performance, as we will now discuss.

o Non-Responsive Flows

   There is a growing set of UDP-based applications whose congestion avoidance algorithms are inadequate or nonexistent (i.e., the flow does not throttle back upon receipt of congestion notification). Such UDP applications include streaming applications like packet voice and video, and also multicast bulk data transport [SRM96]. If no action is taken, such unresponsive flows could lead to a new congestion collapse.

   In general, all UDP-based streaming applications should incorporate effective congestion avoidance mechanisms. For example, recent research has shown the possibility of incorporating congestion avoidance mechanisms such as Receiver-driven Layered Multicast (RLM) within UDP-based streaming applications such as packet video [McCanne96; Bolot94]. Further research and development on ways to accomplish congestion avoidance for streaming applications will be very important.

   However, it will also be important for the network to be able to protect itself against unresponsive flows, and mechanisms to accomplish this must be developed and deployed. Deployment of such a mechanism would provide incentive for every streaming application to become responsive by incorporating its own congestion control.

o Non-TCP-Compatible Transport Protocols

   The second threat is posed by transport protocol implementations that are responsive to congestion notification but, either deliberately or through faulty implementations, are not TCP-compatible. Such applications can grab an unfair share of the network bandwidth.

   For example, the popularity of the Internet has caused a proliferation in the number of TCP implementations. Some of these may fail to implement the TCP congestion avoidance mechanisms correctly because of poor implementation. Others may deliberately be implemented with congestion avoidance algorithms that are more aggressive in their use of bandwidth than other TCP implementations; this would allow a vendor to claim to have a "faster TCP". The logical consequence of such implementations would be a spiral of increasingly aggressive TCP implementations, leading back to the point where there is effectively no congestion avoidance and the Internet is chronically congested.

   Note that there is a well-known way to achieve more aggressive TCP performance without even changing TCP: open multiple connections to the same place, as has been done in some Web browsers.

The projected increase in more aggressive flows of both these classes, as a fraction of total Internet traffic, clearly poses a threat to the future Internet. There is an urgent need for measurements of current conditions and for further research into the various ways of managing such flows. There are many difficult issues in identifying and isolating unresponsive or non-TCP-compatible flows at an acceptable router overhead cost. Finally, there is little measurement or simulation evidence available about the rate at which these threats are likely to be realized, or about the expected benefit of router algorithms for managing such flows.
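As one purely hypothetical illustration of what such identification might involve (this mechanism is not proposed by this memo), a router could compare a flow's measured rate against the well-known approximation for the steady-state throughput of a conformant TCP, roughly (MSS/RTT)*sqrt(3/2)/sqrt(p) for drop rate p. Estimating the RTT and drop rate of each flow at acceptable cost is precisely one of the open problems noted above; all constants in the sketch are assumed values.

   /*
    * Hypothetical sketch only: compare a flow's measured rate against
    * the TCP-friendly throughput bound (MSS / RTT) * sqrt(3/2) / sqrt(p).
    */
   #include <math.h>
   #include <stdio.h>

   /* Upper bound, in bytes per second, on a TCP-compatible flow's rate. */
   static double tcp_friendly_rate(double mss_bytes, double rtt_sec, double p)
   {
       return (mss_bytes / rtt_sec) * sqrt(1.5) / sqrt(p);
   }

   int main(void)
   {
       double mss = 1460.0, rtt = 0.1, p = 0.01;     /* assumed values  */
       double bound = tcp_friendly_rate(mss, rtt, p);
       double measured = 5.0e6;                      /* 5 MB/s, assumed */

       printf("TCP-friendly bound: %.0f bytes/sec\n", bound);
       if (measured > bound)
           printf("flow exceeds the bound: candidate for management\n");
       return 0;
   }
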
There is an issue about the appropriate granularity of a "flow". There are a few "natural" answers: 1) a TCP or UDP connection (source address/port, destination address/port); 2) a source/destination host pair; 3) a given source host or a given destination host. We would guess that the source/destination host pair gives the most appropriate granularity in many circumstances. However, it is possible that different vendors/providers could set different granularities for defining a flow (as a way of "distinguishing" themselves from one another), or that different granularities could be chosen for different places in the network. It may be the case that the granularity is less important than the fact that we are dealing with more unresponsive flows at *some* granularity. The granularity of flows for congestion management is, at least in part, a policy question that needs to be addressed in the wider IETF community.

5. CONCLUSIONS AND RECOMMENDATIONS

This discussion leads us to make the following recommendations to the IETF and to the Internet community as a whole.

o RECOMMENDATION 1:

   Internet routers should implement some active queue management mechanism to manage queue lengths, reduce end-to-end latency, reduce packet dropping, and avoid lock-out phenomena within the Internet.

   The default mechanism for managing queue lengths to meet these goals in FIFO queues is Random Early Detection (RED) [RED93]. Unless a developer has reasons to provide another equivalent mechanism, we recommend that RED be used.

o RECOMMENDATION 2:

   It is urgent to begin or continue research, engineering, and measurement efforts contributing to the design of mechanisms to deal with flows that are unresponsive to congestion notification or are responsive but more aggressive than TCP.

Widespread implementation and deployment of RED, as recommended above, will expose a number of engineering issues. Examples of such issues include: implementation questions for Gigabit routers, the use of RED in layer 2 switches, and the possible use of additional considerations, such as priority, in deciding which packets to drop.

6. References

[Bolot94] Bolot, J.-C., Turletti, T., and Wakeman, I., Scalable Feedback Control for Multicast Video Distribution in the Internet, ACM SIGCOMM '94, Sept. 1994.

[Demers90] Demers, A., Keshav, S., and Shenker, S., Analysis and Simulation of a Fair Queueing Algorithm, Internetworking: Research and Experience, Vol. 1, 1990, pp. 3-26.

[Floyd91] Floyd, S., Connections with Multiple Congested Gateways in Packet-Switched Networks Part 1: One-way Traffic. Computer Communications Review, Vol. 21, No. 5, October 1991, pp. 30-47. URL http://ftp.ee.lbl.gov/floyd/.

[Floyd95] Floyd, S., and Jacobson, V., Link-sharing and Resource Management Models for Packet Networks. IEEE/ACM Transactions on Networking, Vol. 3, No. 4, pp. 365-386, August 1995.

[Gaynor96] Gaynor, M., Proactive Packet Dropping Methods for TCP Gateways, October 1996, URL http://www.eecs.harvard.edu/~gaynor/final.ps.

[HostReq89] R. Braden, Ed., Requirements for Internet Hosts -- Communication Layers, RFC-1122, October 1989.

[Jacobson88] V. Jacobson, Congestion Avoidance and Control, ACM SIGCOMM '88, August 1988.

[Lakshman96] T. V. Lakshman, Arnie Neidhardt, Teunis Ott, The Drop From Front Strategy in TCP Over ATM and Its Interworking with Other Control Features, Infocom 96, MA28.1.

[Leland94] W. Leland, M. Taqqu, W. Willinger, and D. Wilson, On the Self-Similar Nature of Ethernet Traffic (Extended Version), IEEE/ACM Transactions on Networking, 2(1), pp. 1-15, February 1994.

[McCanne96] McCanne, S., Jacobson, V., and M. Vetterli, Receiver-driven Layered Multicast, ACM SIGCOMM '96, August 1996.

[Nagle84] J. Nagle, Congestion Control in IP/TCP, RFC-896, January 1984.

[RED93] Floyd, S., and Jacobson, V., Random Early Detection gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, V.1 N.4, August 1993, pp. 397-413. Also available from http://ftp.ee.lbl.gov/floyd/red.html.

[Shenker96] Shenker, S., Partridge, C., and Guerin, R., Specification of Guaranteed Quality of Service, IETF Integrated Services Working Group, Internet draft (work in progress), August 1996.

[SRM96] Floyd, S., Jacobson, V., McCanne, S., Liu, C., and L. Zhang, A Reliable Multicast Framework for Light-weight Sessions and Application Level Framing. ACM SIGCOMM '96, pp. 342-355.

[Villamizar94] Villamizar, C., and Song, C., High Performance TCP in ANSNET. Computer Communications Review, V. 24 N. 5, October 1994, pp. 45-60. URL http://ftp.ans.net/pub/papers/tcp-performance.ps.

[Willinger95] W. Willinger, M. S. Taqqu, R. Sherman, D. V. Wilson, Self-Similarity Through High-Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level, ACM SIGCOMM '95, pp. 100-113, August 1995.

[Wroclawski96] J. Wroclawski, Specification of the Controlled-Load Network Element Service, IETF Integrated Services Working Group, Internet draft (work in progress), August 1996.

Security Considerations

While security is a very important issue, it is largely orthogonal to the performance issues discussed in this memo. We note, however, that denial-of-service attacks may create unresponsive traffic flows that are indistinguishable from flows from normal high-bandwidth isochronous applications, and the mechanism suggested in Recommendation 2 will be equally applicable to such attacks.

Authors' Addresses

Bob Braden
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Phone: 310-822-1511

Email: Braden@ISI.EDU

David D. Clark
MIT Laboratory for Computer Science
545 Technology Sq.
Cambridge, MA 02139

Phone: 617-253-6003

Email: DDC@lcs.mit.edu

Jon Crowcroft
University College London
Department of Computer Science
Gower Street
London, WC1E 6BT
ENGLAND

Phone: +44 171 380 7296

Email: Jon.Crowcroft@cs.ucl.ac.uk

Bruce Davie
Cisco Systems, Inc.
250 Apollo Drive
Chelmsford, MA 01824

Phone:

E-mail: bdavie@cisco.com

Steve Deering
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706

Phone: 408-527-8213

Email: deering@cisco.com

Deborah Estrin
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Phone: 310-822-1511

Email: Estrin@usc.edu

Sally Floyd
Lawrence Berkeley National Laboratory,
MS 50B-2239,
One Cyclotron Road,
Berkeley CA 94720

Phone:

Email: Floyd@ee.lbl.gov

Van Jacobson
Lawrence Berkeley National Laboratory,
MS 46A,
One Cyclotron Road,
Berkeley CA 94720

Phone: 510-486-7519

Email: Van@ee.lbl.gov

Greg Minshall
Ipsilon Systems
232 Java Drive
Sunnyvale, CA 94089

Phone:

Email: Minshall@ipsilon.com

Craig Partridge
824 Kipling St
Palo Alto CA 94301-2831

Phone: 415-326-4541

Email: Craig@aland.bbn.com

Larry Peterson
Department of Computer Science
University of Arizona
Tucson, AZ 85721

Phone: 520-621-4231

Email: LLP@cs.arizona.edu

K. K. Ramakrishnan
AT&T Labs. Research
Rm. 2C-454,
600 Mountain Ave.,
Murray Hill, N.J. 07974-0636.

Phone: 908-582-3154

Email: KKRama@research.att.com

Scott Shenker
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304

Phone: 415-812-4840

Email: Shenker@parc.xerox.com

John Wroclawski
MIT Laboratory for Computer Science
545 Technology Sq.
Cambridge, MA 02139

Phone: 617-253-7885

Email: JTW@lcs.mit.edu

Lixia Zhang
UCLA
45316 Boelter Hall
Los Angeles, CA 90024

Phone: 310-825-2695

Email: Lixia@cs.ucla.edu