2 Transport Area Working Group B. Briscoe 3 Internet-Draft BT & UCL 4 Intended status: Informational T. Moncaster 5 Expires: May 15, 2008 L.
Burness 6 BT 7 November 12, 2007 9 Problem Statement: We Don't Have To Do Fairness Ourselves 10 draft-briscoe-tsvwg-relax-fairness-00 12 Status of this Memo 14 By submitting this Internet-Draft, each author represents that any 15 applicable patent or other IPR claims of which he or she is aware 16 have been or will be disclosed, and any of which he or she becomes 17 aware will be disclosed, in accordance with Section 6 of BCP 79. 19 Internet-Drafts are working documents of the Internet Engineering 20 Task Force (IETF), its areas, and its working groups. Note that 21 other groups may also distribute working documents as Internet- 22 Drafts. 24 Internet-Drafts are draft documents valid for a maximum of six months 25 and may be updated, replaced, or obsoleted by other documents at any 26 time. It is inappropriate to use Internet-Drafts as reference 27 material or to cite them other than as "work in progress." 29 The list of current Internet-Drafts can be accessed at 30 http://www.ietf.org/ietf/1id-abstracts.txt. 32 The list of Internet-Draft Shadow Directories can be accessed at 33 http://www.ietf.org/shadow.html. 35 This Internet-Draft will expire on May 15, 2008. 37 Copyright Notice 39 Copyright (C) The IETF Trust (2007). 41 Abstract 43 Nowadays resource sharing on the Internet is largely a result of what 44 applications, users and operators do at run-time, rather than what 45 the IETF designs into transport protocols at design-time. The IETF 46 now needs to recognise this trend and consider how to allow resource 47 sharing to be properly controlled at run-time. 49 Requirements Language 51 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 52 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 53 document are to be interpreted as described in [RFC2119]. 55 Table of Contents 57 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 58 2. What Problem? . . . . . . . . . . . . . . . . . . . . . . . . 4 59 2.1. Two Incompatible Partial Worldviews . . . . . . . . . . . 4 60 2.1.1. Overlooked Degrees of Freedom . . . . . . . . . . . . 7 61 2.2. Average Rates are a Run-Time Issue . . . . . . . . . . . . 8 62 2.3. Protocol Dynamics is the Design-Time Issue . . . . . . . . 9 63 3. Concrete Consequences of Unfairness . . . . . . . . . . . . . 10 64 3.1. Higher Investment Risk . . . . . . . . . . . . . . . . . . 11 65 3.2. Losing Voluntarism . . . . . . . . . . . . . . . . . . . . 12 66 3.3. Networks using DPI to make Choices for Users . . . . . . . 13 67 3.4. Starvation during Anomalies and Emergencies . . . . . . . 14 68 4. Security Considerations . . . . . . . . . . . . . . . . . . . 15 69 5. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 15 70 6. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 15 71 7. Comments Solicited . . . . . . . . . . . . . . . . . . . . . . 15 72 8. References . . . . . . . . . . . . . . . . . . . . . . . . . . 15 73 8.1. Normative References . . . . . . . . . . . . . . . . . . . 15 74 8.2. Informative References . . . . . . . . . . . . . . . . . . 16 75 Editorial Comments . . . . . . . . . . . . . . . . . . . . . . . . 76 Appendix A. Example Scenario . . . . . . . . . . . . . . . . . . 19 77 A.1. Base Scenario . . . . . . . . . . . . . . . . . . . . . . 19 78 A.2. Compounding Overlooked Degrees of Freedom . . . . . . . . 20 79 A.3. Hybrid Users . . . . . . . . . . . . . . . . . . . . . . . 21 80 A.4. Upgrading Makes Most Users Worse Off . . . . . . . . . . . 21 81 Authors' Addresses . . . . . . . 
. . . . . . . . . . . . . . . . . 23 82 Intellectual Property and Copyright Statements . . . . . . . . . . 25 84 1. Introduction 86 The strength of the Internet is that any of the thousand million or 87 so hosts may use nearly any network resource on the whole public 88 Internet without asking, whether in access or core networks, wireless 89 or fixed, local or remote. The question of how each resource is 90 shared is generally delegated to the congestion control algorithms 91 available on each endpoint, most often TCP. 93 We (the IETF) aim to ensure reasonably fair sharing of the congested 94 resources of the Internet [RFC2914]. Specifically, the TCP algorithm 95 aims to ensure every flow gets a roughly equal share of each 96 bottleneck, measured in packets per round trip time [RFC2581]. But 97 our efforts have become distorted by unfair use of protocols we 98 intended to be fair, and further by the attempts of operators to 99 correct the situation. The problem is that we aim to control fairness at 100 protocol design-time, but resource shares are now primarily 101 determined at run-time--as the outcome of a tussle between users, app 102 developers and operators. 104 For instance, about 35% of total traffic currently seen (Sep'07) at a 105 core node on the public wireline Internet is p2p file-sharing {ToDo: 106 Add ref}. Even though file-sharing generally uses TCP, it uses the 107 well-known trick of opening multiple connections--currently it is not 108 uncommon for around 100 connections to be actively transferring over different paths. A 109 competing Web application might open a couple of flows at a time, but 110 perhaps only actively transfer data 1-10% of the time (its activity 111 factor). Combining 50x fewer flows and a 10-100x lower activity factor 112 means the traffic intensity from the Web app can be 500-5,000x less. 113 However, despite being so much lighter on the network, it gets 50x 114 less bit rate through the bottleneck. 116 The design-time approach worked well enough during the early days of 117 the Internet, because most users' activity factors and numbers of 118 flows were in proportion to their access link rate. But now the 119 Internet has to support a jostling mix of different attitudes to 120 resource sharing: carelessness, unwitting self-interest, active self- 121 interest, malice and sometimes even a little consideration for 122 others. So although TCP sets an important baseline, it is no longer 123 the main determinant of how resources are shared between users at 124 run-time. 126 Just because we can no longer control resource sharing at design 127 time does not mean it is unimportant. In Section 3, we show 128 that badly skewed resource sharing has serious concrete knock-on 129 effects that are of great concern to the health of the Internet. 131 And we are not saying the IETF is powerless to do anything to help. 133 However, our role must now be to create the run-time _policy 134 framework_ within which users and operators can control relative 135 resource shares. So the debate is not about the IETF choosing 136 between TCP-friendliness, max-min fairness, cost fairness or any 137 other sort of fairness, because whatever we decide at design-time 138 won't be strong enough to change what happens at run-time. We need 139 to focus on giving principled and enforceable control to users and 140 operators, so they can agree between themselves which fair use policy 141 they want locally [Rate_fair_Dis].
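The arithmetic behind the figures in the introduction is simple enough to state precisely. The short Python sketch below is purely illustrative (it is ours, not a measurement or a specification); the flow counts and activity factors are the rough figures quoted above. It shows how the two measures diverge: under per-flow sharing the p2p user gets about 50x more of the bottleneck while both users are active, but its traffic intensity, which is what an operator must provision for, is 500-5,000x greater.

   # Illustrative only: rough figures quoted in the text above.
   web_flows, p2p_flows = 2, 100          # simultaneous TCP connections
   web_activity = (0.01, 0.10)            # Web user active 1-10% of the time
   p2p_activity = 1.0                     # p2p user active ~100% of the time

   # With TCP-like per-flow sharing, a user's share of a bottleneck is
   # roughly proportional to the number of flows it keeps open.
   rate_ratio = p2p_flows / web_flows     # -> 50x more bit rate

   # Traffic intensity ~ (bit rate when active) x (activity factor), so the
   # ratio compounds the flow-count ratio with the activity-factor ratio.
   intensity_ratios = [rate_ratio * p2p_activity / a for a in web_activity]
   print("bit rate ratio: %.0fx" % rate_ratio)
   print("traffic intensity ratio: %.0fx to %.0fx"
         % (min(intensity_ratios), max(intensity_ratios)))

The same two ratios recur throughout this document: flow rate equality compares the first, volume accounting the second.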
143 The requirements for this resource sharing framework will be the 144 subject of a future document, but the most important role of the IETF 145 is to promote _understanding_ of the sorts of resource sharing that 146 users and operators will want to use at run-time and to resolve the 147 misconceptions and differences between them (Section 2.1). 149 We are in an era where new congestion control requirements often 150 involve starting more aggressively than TCP or going faster than TCP, 151 or not responding to congestion as quickly as TCP. By shifting 152 control of fairness from design-time to run-time, we will free up all our 153 new congestion control design work, so that it can first and foremost 154 meet the objectives of these more demanding applications. But we can 155 still quantify, minimise and constrain the effect on others due to 156 faster average rate and different dynamics (Section 2.3). We can say 157 now that the framework will have to encompass and endorse the 158 practice of opening multiple flows, for instance. But alongside 159 recognition of such freedoms will come constraints, in order to 160 balance the side-effects on other users over time. 162 2. What Problem? 164 2.1. Two Incompatible Partial Worldviews 166 When looking at the current Internet, some people see a massive 167 fairness problem, while others think there's hardly a problem at all. 168 This is because two divergent ways of reasoning about resource 169 sharing have developed in the industry: 171 o IETF guidelines on fair sharing of congested resources 172 [RFC2357],[RFC2309],[RFC2914] have recommended that flows 173 experiencing the same congested path should aim to achieve broadly 174 equal window sizes, as TCP does [RFC2581]. We will characterise 175 this as the "flow rate equality" worldview, shared by the IETF and 176 large parts of the networking research community.[Note_Window] 178 o Network operators and Internet users tend to reason about the 179 problem of resource sharing very differently. Nowadays they do 180 not generally concern themselves with the rates of individual 181 flows. Instead they think in terms of the volume of data that 182 different users transfer over a period [Res_p2p]. We will term 183 this the "volume accounting" worldview. They do not believe 184 volume over a period (traffic intensity) is a measure of 185 unfairness in itself, but they believe it should be _taken into 186 account_ when deciding whether relative bit rates are fair. 188 The most obvious distinction between the two worldviews is that flow 189 rate equality is between _flows_, whereas volume accounting shares 190 resources between _users_. The IETF understands well that fairness 191 is actually between users, but generally considers flow fairness to 192 be a reasonable approximation as long as users aren't opening too 193 many flows. 195 However, there is a second, much more subtle distinction. The flow 196 rate equality worldview discusses fair resource sharing in terms of 197 bit rates, but operators and users reason about fair bit rates in the 198 context of byte volume over a period. Given that bit rate is an 199 instantaneous metric, it may aid understanding to convert 'volume 200 over a period' into an instantaneous metric too. The relevant metric 201 is traffic intensity, which, like traffic rate, is an instantaneous 202 metric, but one that takes account of likely activity _over time_.
The 203 traffic intensity from one user is the product of two metrics: i) the 204 user's desired bit rate when active and ii) how often they are active 205 over a period (their activity factor). 207 Operators have to provision capacity based on the aggregate traffic 208 intensity from all users over the busy period. And many users think 209 in terms of how much volume they can transfer over a period. So, 210 because traffic intensity is equivalent to 'volume over a period', 211 both operators and users often effectively share the same worldview. 213 To further aid understanding, Appendix A presents an example scenario 214 where heavy users compete for a bottleneck with light users. It has 215 enough similarities to the current Internet to be relevant, but it 216 has been stripped to its bare essentials to allow the main issues to 217 be grasped. 219 The base scenario in Appendix A.1 starts with the light users having 220 TCP connections open for less of the time than heavy users (a lower 221 activity factor). But, when they are active, they open as many 222 connections as heavy users. It shows that users with a lower 223 activity factor transfer less volume of traffic through the 224 bottleneck over a period because, even though TCP gives roughly equal 225 rate to each flow, the heavy users' flows are present more of the 226 time. 228 The volume accounting view is _not_ that it is unfair for some users 229 to transfer more volume than others--after all, the lighter users have 230 less traffic that they want to send. However, they believe it is 231 reasonable for users who put a heavier load on the system to be given 232 less bottleneck bit rate than lighter users. 234 Appendix A.2 continues the example, giving the heavy users the added 235 advantage of opening 50x more flows, just as they do on the current 236 Internet. When multiple flows are compounded with their higher 237 activity factors, they can get 500-2,000x greater traffic intensity 238 through the bottleneck. 240 Certainly, the flow rate equality worldview recognises that opening 241 50x more flows than other users starts to become a serious fairness 242 problem, because some users get 50x more bit rate through a 243 bottleneck than others. But the volume accounting worldview sees 244 this as a much bigger problem. They first see 2,000x heavier use of 245 the bottleneck over time, then they judge that _also_ getting 50x 246 greater bit rate seems seriously unfair. 248 But are these numbers realistic? Attended use of something like the 249 Web might typically have an activity factor of 1-10%, while 250 unattended apps approach 100%. A Web browser might typically open 251 two TCPs when active [RFC2616], while a p2p file-sharing app on a 252 512kbps upstream DSL line actively uses anything from 40-500 253 connections [az-calc]. Heavy users generally compound the two 254 factors together (10-100x greater activity factor and 20-250x more 255 connections), achieving anything from 200x to 25,000x greater traffic 256 intensity through a bottleneck than light users. 258 The question of what size the different worldviews think 259 resource shares _should_ be is separate from the question of whether 260 and how to _enforce_ them (see Section 3.2). Within the volume 261 accounting worldview, many operators (particularly in Europe) already 262 limit the bit rate of their heaviest users at peak times in order to 263 protect the experience of the majority of their 264 customers.[Note_Neutral] But enforcement is a separate question.
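As a purely hypothetical sketch of what such peak-time limiting can look like (this is our illustration, not a description of any particular operator's equipment; the cap and throttle values are invented), the following Python fragment captures the essence of the volume accounting worldview in an enforceable form: count each user's bytes during an agreed peak period, and reduce the access rate of users who exceed an agreed cap.

   from collections import defaultdict

   PEAK_HOURS = set(range(16, 24))   # assumed peak period, 16:00-24:00
   PEAK_VOLUME_CAP = 2 * 10**9       # hypothetical cap: 2GB per peak period
   THROTTLE_RATE = 128_000           # hypothetical ceiling: 128kbps when over cap

   peak_bytes = defaultdict(int)     # volume accounting state, per user

   def account(user, nbytes, hour):
       # Volume accounting: only bytes sent during the peak period count,
       # i.e. congestion is treated as 1 in the peak and 0 outside it.
       if hour in PEAK_HOURS:
           peak_bytes[user] += nbytes

   def allowed_rate(user, contracted_rate):
       # Throttling decision: users over the cap are limited to a low
       # ceiling until the accounting is reset; everyone else is untouched.
       if peak_bytes[user] > PEAK_VOLUME_CAP:
           return min(contracted_rate, THROTTLE_RATE)
       return contracted_rate

   # e.g. account("alice", 500_000_000, hour=20); allowed_rate("alice", 2_000_000)

Note how crude the treatment of congestion is in such a scheme: the whole peak period counts equally, whichever bottlenecks were actually congested, which is exactly the weakness identified under "Congestion variation" in Section 2.1.1 below.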
265 Although prevalent use of TCP seems to be continuing without any 266 enforcement, even the flow rate equality worldview generally accepts 267 that the problem of opening excessive numbers of connections can't be solved 268 voluntarily. Quoting [RFC2914]: "...instead of a spiral of 269 increasingly aggressive transport protocols, we ... have a spiral of 270 increasingly ... aggressive applications". 272 To summarise so far, one industry worldview aims for equal flow 273 rates, while the other prefers an outcome with very unequal flow 274 rates. Even though they both share the same intentions of fairer 275 resource sharing, the two worldviews have developed subgoals that are 276 fundamentally at odds. 278 2.1.1. Overlooked Degrees of Freedom 280 So which worldview is correct? Actually, our reason for pointing out 281 these divergent worldviews is to show that both contain valuable 282 insights, but that each also highlights weaknesses in the other. 283 Given that our audience is the IETF, we have tried to explain the volume 284 accounting worldview in terms of flow rate equality, but volume 285 accounting is by no means rigorous or complete itself. Table 1 286 identifies the three degrees of freedom of resource sharing that are 287 missing in one or the other of the two worldviews. 289 +----------------------+--------------------+-------------------+ 290 | Degree of Freedom | Flow Rate Equality | Volume Accounting | 291 +----------------------+--------------------+-------------------+ 292 | Activity factor | X | Y | 293 | Multiple flows | X | Y | 294 | Congestion variation | Y | X | 295 +----------------------+--------------------+-------------------+ 297 Table 1: Resource Sharing Degrees of Freedom Encompassed by Different 298 Worldviews; Y = yes and X = no. 300 Activity factor: We have already pointed out how flow rate equality 301 does not take different activity factors into account. On the 302 other hand, volume accounting naturally takes the on-off activity 303 of flows into account, because in the process of counting volume 304 over time, the off periods are naturally excluded. 306 Multiple flows: Similarly, it is well-known [RFC2309] [RFC2914] that 307 flow rate equality does not make allowance for multiple flows, 308 whereas counting volume naturally includes all flows from a user, 309 whether they terminate at the same remote endpoint or many 310 different ones. 312 Congestion variation: Flow rate equality, of course, takes full 313 account of how congested different bottlenecks are at different 314 times, ensuring that the same volume must be squeezed out over a 315 longer duration the more flows it competes with. However, volume 316 accounting doesn't recognise that congestion can vary by orders of 317 magnitude, making it fairly useless for encouraging congestion 318 control. The best it can do is only count volume during a 'peak 319 period', effectively considering congestion as either 1 everywhere 320 during this time or 0 everywhere otherwise. 322 These clearly aren't just problems of detail. Since each has overlooked 323 whole dimensions of the problem, both worldviews seem to require a 324 fundamental rethink. In a future document defining the requirements 325 for a new resource sharing framework, we plan to unify both 326 worldviews. But, in the present problem statement, it is sufficient 327 to register that we need to reconcile the fundamentally contradictory 328 worldviews that the industry has developed about resource sharing. 330 2.2.
Average Rates are a Run-Time Issue 332 A less obvious difference between the two worldviews is that flow 333 rate equality tries to control resource shares at design-time, while 334 volume accounting controls resource shares once the run-time 335 situation is known. Also the volume accounting worldview actually 336 involves two separate functions: passive monitoring and active 337 intervention. So, importantly, the run-time questions of whether to 338 and how to intervene can depend on policy. 340 The "spiral of increasingly aggressive applications" [RFC2914] has 341 shifted the resource sharing problem out of the IETF's design-time 342 space, making flow rate equality insufficient (or perhaps even 343 inappropriate) in technical and in policy terms: 345 Technical: At design time, it is impossible to know whether a 346 congestion control will be fair at run-time without knowing more 347 about the run-time situation it will be used in--how long flow 348 durations will be and whether users will open multiple flows. 350 Policy: At design time, we cannot (and should not) prejudge the 351 'fair use' policy that has been agreed between users and their 352 network operators. 354 A transport protocol can no longer be made 'fair' at design time--it 355 all now depends how 'unfairly' it is used at run-time, and what has 356 been agreed as 'unfair'. 358 However, we are not saying that volume accounting is the answer. It 359 just gives us the insight that resource sharing has to be controlled 360 at run-time by policy, not at design-time by the IETF. Volume 361 accounting would be more useful if it took a more precise approach to 362 congestion than either 'everything is congested' or 'nothing is 363 congested'. 365 What operators and users need from the IETF is a framework to judge 366 and to control resource sharing at run-time. It needs to work across 367 all a user's flows (like volume accounting). It needs to take 368 account of idle periods over time (like volume accounting). And it 369 needs to take account of congestion variation (like flow rate 370 equality). 372 2.3. Protocol Dynamics is the Design-Time Issue 374 Although fairness is a run-time issue, at protocol design-time it 375 requires more from the IETF than just a policy control framework. 376 Policy can control the _average_ amount of congestion that a 377 particular application causes, but the Internet also needs the 378 collective expertise of the IETF and the IRTF to standardise best 379 practice in the _dynamics_ of transport protocols. The IETF has a 380 duty to provide standard transports with a response to congestion 381 that is always safe and robust. But the hard part is to keep the 382 network safe while still meeting the needs of more demanding 383 applications (e.g. high speed transfer of data objects or media 384 streaming that can adapt its rate but not too abruptly). 386 If we assume for a moment that we will have a framework to judge and 387 control _average_ rates, we will still need a framework to assess 388 which proposed congestion controls make the trade-off between 389 achieving the task effectively and minimising congestion caused to 390 others, during _dynamics_: 392 o The faster a new flow accelerates the more packets it will have in 393 flight when it detects its first loss, potentially leading many 394 other flows to experience a long burst of losses as queues 395 overrun. When is a fast start fast enough? Or too fast 396 [RFC3742]? 
398 o One way for a small number of high speed flows to better utilise a 399 high speed link is to respond more smoothly to congestion events 400 than TCP's rate-halving saw-tooth does [proprietary fast TCPs] 401 [FAST],[RFC3649]. But then new flows will take much longer to 402 'push-in' and reach a high rate themselves. 404 o Transports like TCP-friendly rate control [proprietary media 405 players], [RFC3448], [RFC4828] are designed to respond more 406 smoothly to congestion than TCP. But even if a TFRC flow has the 407 same average bit rate as a TCP flow, the more sluggish it is, the 408 more congestion it will cause [Rate_fair_Dis]. How do we decide 409 how much smoother we should go? How large a proportion of 410 Internet traffic could we allow to be unresponsive to congestion 411 over long durations, before we were at risk of causing growing 412 periods of congestion collapse [RFC2914]?[Note_Collapse] 414 o TFRC has been proposed as a possible way for aggregates of flows 415 crossing the public Internet to respond to congestion (pseudo-wire 416 emulations may contain flows that cannot, or do not want to 417 respond quickly to congestion themselves) 418 [I-D.rosen-pwe3-congestion], 419 [I-D.ietf-capwap-protocol-specification], [TSV_CAPWAP_issues]. 421 But it doesn't make any sense to insist that, wherever flows are 422 aggregated together into one identifiable bundle, the whole bundle 423 of perhaps hundreds of flows must keep to the same mean rate as a 424 single TCP flow. 426 In view of the continual demand for alternate congestion controls, 427 the IETF has recently agreed a new process for standardising them 428 [ion-tsv-alt-cc]. The IETF will use the expertise of the IRTF 429 Internet congestion control research group, governed by agreed 430 general guidelines for the design of new congestion controls 431 [RFC5033]. However, in writing those guidelines it proved very 432 difficult to give any specific guidance on where a line could be 433 drawn between fair and unfair protocols. The best we could do were 434 phrases like, "Alternate congestion controllers that have a 435 significantly negative impact on traffic using standard congestion 436 control may be suspect..." and "In environments with multiple 437 competing flows all using the same alternate congestion control 438 algorithm, the proposal should explore how bandwidth is shared among 439 the competing flows." 441 Once we have agreed that average behaviour should be a policy issue, 442 we can focus on the dynamic behaviour of congestion controls, which 443 is where the important standards issues lie, such as preventing 444 congestion collapse or preventing new flows causing bursts of 445 congestion by unnecessarily overrunning as they seek out the 446 operating point of their path. 448 As always, the IETF will not want to standardise aspects where 449 implementers can gain an edge over their competitors, but we must set 450 standards to prevent serious harm to the stability and usefulness of 451 the Internet, and to make transports available that avoid causing 452 _unnecessary_ congestion in the course of achieving any particular 453 application objective. 455 3. Concrete Consequences of Unfairness 457 People have different levels of tolerance for unfairness. Even when 458 we agree how to measure fairness, there are a range of views on how 459 unfair the situation needs to get before the IETF should do anything 460 about it. Nonetheless, lack of fairness can lead to more concretely 461 pathological knock-on effects. 
Even if we don't particularly care if 462 some users get more than their fair share and others less, we should 463 care about the more concrete knock-on effects below. 465 3.1. Higher Investment Risk 467 Some users want more Internet capacity to transfer large volumes of 468 data, while others want more capacity to be able to interact more 469 quickly with other sites and other users. We have called these heavy 470 and light users, although, of course, many users are a mix of the two in 471 differing proportions. 473 We have shown that heavy users can use applications that open 474 multiple connections, so that TCP gives the light users very little 475 of a bottleneck. But unfortunately, upgrading capacity does little 476 for the light users unless the heavy users run out of data to send 477 (which doesn't tend to happen often). In the reasonably realistic 478 example in Appendix A.4, the light users start off only being able to 479 use 10kbps of their 2Mbps line because heavy users are skewing the 480 sharing of the bottleneck by using multiple flows. But a 4x upgrade 481 to the bottleneck, which should add 500kbps per user if shared 482 equally, only gives the light users 30kbps extra. 484 But the upgrade has to be paid for. A commercial ISP will generally 485 pass on the cost equally to all its customers through its monthly 486 fees. So, to rub salt in the wound, the light users end up paying 487 the cost of this 500kbps upgrade, but we have seen they only get 488 30kbps. Ultimately, extreme unfairness in the sharing of capacity 489 tends to drive operators to stop investing in capacity, because the 490 light users, who experience so little of the benefit, won't be 491 prepared to pay an equal share to recover the costs--the ISP risks 492 losing them to a 'fairer' competitor. 494 But there seems to be plenty of evidence that operators around the 495 world are still investing in capacity growth despite the prevalence 496 of TCP. How can this be, if flow rate equality makes investment so 497 risky? One explanation, particularly in parts of Asia, is that some 498 investments are Government subsidised. In the US, the explanation 499 is probably more down to weak competition. In Europe, the main 500 explanation is that many commercial operators haven't allowed their 501 networks to become as unfair as the above example--they have made 502 resource sharing fairer by _overriding_ TCP's flow rate equality. 504 Competitive operators in many countries limit the volume transferred 505 by heavy users, particularly at peak times. They have effectively 506 overridden flow rate equality to achieve a different allocation of 507 resources that they believe is better for the majority of their 508 customers (and consequently better for their competitive position). 509 Typically these operators use a combination of tiered pricing of 510 volume caps and throttling of the heaviest so-called 'unlimited' 511 users at peak times. In this way they have removed some of the 512 investment risk that would otherwise have resulted if flow rate 513 equality had been relied on to share congested resources. 515 3.2. Losing Voluntarism 517 Throughout the early years of the Internet, flow rate equality 518 resulted in approximate fairness that most people considered 519 sufficient. This was because most users' traffic during peak hours 520 tended to correlate with their access rate. Those who bought high 521 capacity access also generally sent more traffic at peak times (e.g. 522 heavy users or server farms).
524 As higher access rates have become more affordable, this happy 525 coincidence has been eroded. Some people only require their higher 526 access rate occasionally, while others require it more continuously. 527 But once they all have more access capacity, even those who don't 528 really require it all the time often fill it anyway--as long as 529 there's nothing to dissuade them. People tend to use what they 530 desire, not just what they require. 532 Of course, more access traffic requires more shared capacity at 533 relevant network bottlenecks. But if we rely on TCP to share out 534 these bottlenecks, we have seen how those who just desire more can 535 swamp those who require more (Section 3.1). 537 Some operators have continued to provision sufficient excess 538 shared capacity and just passed the cost on to all their customers. 539 But many operators have found that those customers who don't actually 540 require all that shared infrastructure would rather not have to pay 541 towards its cost. So, to avoid losing customers, they have 542 introduced tiered volume limits (this hasn't happened in the US yet 543 though). It is well known that many users are averse to 544 unpredictable charges [PMP] (S.5), so many now choose ISPs that limit 545 their volume (with suitable override facilities) rather than charge 546 more when they use more. 548 Thus, we are seeing a move away from voluntary restraint (within peak 549 access rates) towards a preference for enforced fairness, as long as 550 the user stays in overall control. This has implications for the 551 Internet infrastructure that the IETF needs to recognise and address. 552 Effectively, parts of the best effort Internet are becoming like the 553 other Diffserv classes, with traffic policers and traffic 554 conditioning agreements (TCAs [RFC2475]), albeit volume-based rather 555 than rate and burst-based TCAs. (In fact, the addition of congestion 556 accounting or policing need not be confined to just the best effort 557 class.) 559 We are not saying that the Internet _requires_ fairness enforcement, 560 merely that it has become prevalent. We need to acknowledge the 561 trend towards enforcement to ensure that it does not introduce 562 unnecessary complexity into the basic functioning of the Internet, 563 and that our current approach to fairness (embedded in endpoint 564 congestion control) remains compatible with this changing world. For 565 instance, when a rate policer introduces drops, are they equivalent 566 to drops due to congestion? Are they equivalent to drops when you 567 exceed your own access rate? Do we need to tell the difference? 569 3.3. Networks using DPI to make Choices for Users 571 We have seen how network operators might well believe it is in their 572 customers' interests to override the resource sharing decisions of 573 TCP. They seem to have sound reasons for throttling their heaviest 574 users at peak times. But this is leading to a far more controversial 575 side-effect: network operators have started making performance 576 choices between _applications_ on behalf of their customers. 578 Once operators start throttling heavy users, they hit a problem. 579 Most heavy volume users are actually a mix of the two types 580 characterised in our example scenario (Appendix A). Some of their 581 traffic is attended and some is unattended.
If the operator 582 throttles all traffic from a heavy user indiscriminately, it will 583 severely degrade the customer's attended applications, but it 584 actually only needs to throttle the unattended applications to 585 protect the traffic of others. 587 Ideally, the threat of heavy throttling of all a user's traffic would 588 encourage the user to self-throttle the traffic she least valued, in 589 order to avoid the operator's indiscriminate throttling. But many 590 users these days have neither the expertise nor the software to do 591 this. Instead, operators have generally decided to infer what they 592 think the user would do, using readily available deep packet 593 inspection (DPI) equipment. 595 An operator may infer customer priorities with honourable intentions, 596 but such activity is easily confused with attempts to discriminate 597 against certain applications that the operator happens not to like. 598 Also, customers get understandably upset every time the operator 599 guesses their priorities wrongly. 601 It is well documented (but less well-known) that user priorities are 602 task-specific, not application-specific [AppVsTask]. P2p filesharing 603 can be used for downloading music with some vague intent to listen to 604 it some day soon, or to download a critical security patch. User 605 intent cannot be inferred at the network layer just by working out 606 what the application is. The end-to-end design principle [RFC1958] 607 warns that a function should only be implemented at a lower layer 608 after trying really hard to implement it at a higher layer. 610 Otherwise, the network layer gradually becomes specialised around the 611 functions and priorities of the moment--the middlebox problem 612 [RFC3234]. 614 To address this problem of feature creep into the network layer, we 615 need to understand whether there are valid reasons why this DPI is 616 being deployed to override TCP's decisions. We shouldn't deny the 617 existence of a problem just because one solution to it breaks a 618 fundamental Internet design principle. We should instead find a 619 better solution. 621 3.4. Starvation during Anomalies and Emergencies 623 The problems due to unfairness that we have outlined so far all arise 624 when the Internet is working normally. However, fairness concerns 625 become far more acute when a part of the Internet infrastructure 626 becomes extremely stressed, either because there's much more traffic 627 than expected (e.g. flash crowds), or much less capacity than 628 expected (e.g. physical attack, accident, disaster). 630 Under non-disaster conditions, we have already said that fair sharing 631 of congested resources is a matter that should be decided between 632 users and their providers at run-time. Often that will mean "you get 633 what you've paid for" becomes the rule, at least in commercial parts 634 of the Internet. But during really acute emergencies many people 635 would expect such commercial concerns to be set aside 636 [I-D.floyd-tsvwg-besteffort]. 638 We agree that users shouldn't be able to squeeze out others during 639 emergencies. But the mechanisms we have in place at the moment don't 640 allow anyone to control whether this happens or not, because they can 641 be overridden at run-time by using the extra degrees of freedom 642 available to get round TCP. It could equally be argued that each 643 user (not each flow) should get an equal share of remaining capacity 644 in an emergency.
Indeed, it would seem wrong for one user to expect 645 100 continuously running flows downloading music & videos to take 100 646 times more capacity than other users sending brief flows containing 647 messages trying to contact loved ones or the emergency services 648 [Hengchun_quake].[Note_Earthquake] 650 We argue that fairness during emergencies is, more than anything 651 else, a policy matter to be decided at run-time (either before or 652 during an anomaly) by users, operators, regulators and governments-- 653 not at design time by the IETF. The IETF should however provide the 654 framework within which typical policies can be enforced. And the 655 IETF should ensure that the Internet is still likely to utilise 656 resources _efficiently_ under extreme stress, assuming a reasonable 657 mix of likely policies, including none. 659 The main take-away point from this section is that the IETF should 660 not, and need not, make such life-and-death decisions. It should 661 provide protocols that allow any of these policy options to be chosen 662 at the time of need or by making contingencies beforehand. The 663 congestion accountability framework in {ToDo: ref sister doc} 664 provides such control, while also allowing different controls 665 (including no control at all) in normal circumstances. For instance, 666 an ISP might normally allow its customers to pay to override any 667 usage limits. But during a disaster it might suspend this right. 668 Then users would get only the shares they had established before the 669 disaster broke out (the ISP would thus also avoid accusations of 670 profiteering from people's misery). In any case, it is not for the IETF 671 to embed answers to questions like these in our protocols. 673 4. Security Considerations 675 {ToDo:} 677 5. Conclusions 679 {ToDo:} 681 6. Acknowledgements 683 Arnaud Jacquet, Phil Eardley. 685 7. Comments Solicited 687 Comments and questions are encouraged and very welcome. They can be 688 addressed to the IETF Transport Area working group mailing list, 689 and/or to the authors. 691 8. References 693 8.1. Normative References 695 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 696 Requirement Levels", BCP 14, RFC 2119, March 1997. 698 [RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, 699 S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., 700 Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, 701 S., Wroclawski, J., and L. Zhang, "Recommendations on 702 Queue Management and Congestion Avoidance in the 703 Internet", RFC 2309, April 1998. 705 [RFC2581] Allman, M., Paxson, V., and W. Stevens, "TCP Congestion 706 Control", RFC 2581, April 1999. 708 [RFC2914] Floyd, S., "Congestion Control Principles", BCP 41, 709 RFC 2914, September 2000. 711 8.2. Informative References 713 [AppVsTask] 714 Bouch, A., Sasse, M., and H. DeMeer, "Of packets and 715 people: A user-centred approach to Quality of Service", 716 Proc. IEEE/IFIP International Workshop on QoS 717 (IWQoS'00) , May 2000, 718 . 720 [FAST] Jin, C., Wei, D., and S. Low, "FAST TCP: Motivation, 721 Architecture, Algorithms, and Performance", Proc. IEEE 722 Conference on Computer Communications (Infocom'04) , 723 March 2004, 724 . 726 [Hengchun_quake] 727 Wikipedia, "2006 Hengchun earthquake", Wikipedia Web page 728 (accessed Oct'07) , 2006, 729 . 731 [I-D.floyd-tsvwg-besteffort] 732 Floyd, S. and M. Allman, "Comments on the Usefulness of 733 Simple Best-Effort Traffic", 734 draft-floyd-tsvwg-besteffort-01 (work in progress), 735 August 2007.
737 [I-D.ietf-capwap-protocol-specification] 738 Calhoun, P., "CAPWAP Protocol Specification", 739 draft-ietf-capwap-protocol-specification-07 (work in 740 progress), June 2007. 742 [I-D.rosen-pwe3-congestion] 743 Rosen, E., "Pseudowire Congestion Control Framework", 744 draft-rosen-pwe3-congestion-04 (work in progress), 745 October 2006. 747 [PMP] Odlyzko, A., "A modest proposal for preventing Internet 748 congestion", AT&T technical report TR 97.35.1, 749 September 1997, 750 . 752 [RFC1958] Carpenter, B., "Architectural Principles of the Internet", 753 RFC 1958, June 1996. 755 [RFC2357] Mankin, A., Romanov, A., Bradner, S., and V. Paxson, "IETF 756 Criteria for Evaluating Reliable Multicast Transport and 757 Application Protocols", RFC 2357, June 1998. 759 [RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., 760 and W. Weiss, "An Architecture for Differentiated 761 Services", RFC 2475, December 1998. 763 [RFC2616] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., 764 Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext 765 Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999. 767 [RFC3234] Carpenter, B. and S. Brim, "Middleboxes: Taxonomy and 768 Issues", RFC 3234, February 2002. 770 [RFC3448] Handley, M., Floyd, S., Padhye, J., and J. Widmer, "TCP 771 Friendly Rate Control (TFRC): Protocol Specification", 772 RFC 3448, January 2003. 774 [RFC3649] Floyd, S., "HighSpeed TCP for Large Congestion Windows", 775 RFC 3649, December 2003. 777 [RFC3742] Floyd, S., "Limited Slow-Start for TCP with Large 778 Congestion Windows", RFC 3742, March 2004. 780 [RFC4828] Floyd, S. and E. Kohler, "TCP Friendly Rate Control 781 (TFRC): The Small-Packet (SP) Variant", RFC 4828, 782 April 2007. 784 [RFC5033] Floyd, S. and M. Allman, "Specifying New Congestion 785 Control Algorithms", BCP 133, RFC 5033, August 2007. 787 [Rate_fair_Dis] 788 Briscoe, B., "Flow Rate Fairness: Dismantling a Religion", 789 ACM CCR 37(2)63--74, April 2007, 790 . 792 [Res_p2p] Cho, K., Fukuda, K., Esaki, H., and A. Kato, "The Impact 793 and Implications of the Growth in Residential User-to-User 794 Traffic", ACM SIGCOMM CCR 36(4)207--218, October 2006, 795 . 797 [TSV_CAPWAP_issues] 798 Borman, D. and IESG, "Transport Issues in CAPWAP", In 799 Proc. IETF-69 CAPWAP w-g, July 2007, . 802 [az-calc] Infinite-Source, "Azureus U/L settings calculator", Web 803 page (accessed Oct'07) , 2007, 804 . 806 [ion-tsv-alt-cc] 807 "Experimental Specification of New Congestion Control 808 Algorithms", July 2007, 809 . 812 Editorial Comments 814 [Note_Collapse] Some would say that it is not a congestion 815 collapse if congestion control automatically 816 recovers the situation after a while. However, 817 even though lack of autorecovery would be truly 818 devastating, it isn't part of the definition 819 [RFC2914]. 821 [Note_Earthquake] On 26 Dec 2006, the Hengchun earthquake caused 822 faults on 12 of the 18 undersea cables passing 823 between Taiwan and the Philippines. The Internet 824 was virtually unusable for those trying to make 825 their emergency arrangements over these cables (as 826 well as for much of Asia generally). Each of these 827 flows was still having to compete with the 828 multiple flows of video downloads for remote users 829 who were presumably oblivious to the fact they 830 were consuming much of the surviving capacity. 
831 When the Singaporean ISP, SingNet, announced 832 restoration of service before the cables were 833 repaired, it revealed that it had achieved this at 834 the expense of video downloads and gaming traffic. 837 [Note_Neutral] Enforcement of /overall/ traffic limits within an 838 agreed acceptable use policy is a completely 839 different question to that of whether operators 840 should discriminate against /specific/ applications 841 or service providers (but the two are easily 842 confused--see Section 3.3 on DPI). 844 [Note_Window] Within the flow rate equality worldview, there are 845 differences in views over whether window sizes 846 should be compared in packets or bytes, and 847 whether a longer round trip time (RTT) should 848 reduce the target rate or merely slow down how 849 quickly the rate changes in order to reach a 850 target rate that is independent of RTT [FAST]. 851 However, although these details are important, 852 they are merely minor internal differences within 853 the flow rate equality worldview when compared 854 against the differences with volume accounting. 856 Appendix A. Example Scenario 858 A.1. Base Scenario 860 We will consider 100 users all sharing a link from the Internet with 861 2Mbps downstream access capacity. Eighty bought their line for 862 occasional flurries of activity like browsing the Web, booking their 863 travel arrangements or reading their email. The other twenty bought 864 it mainly for unattended volume transfer of large files. We will 865 call these two types of use attended (or light) and unattended (or 866 heavy). Ignoring the odd UDP packet, we will assume all these 867 applications use TCP congestion control, and that all flows have 868 approximately equal round trip times. 870 Imagine the network operator has provisioned the shared link for a 871 contention ratio of 20:1, i.e. 100x2Mbps/20 = 10Mbps. For simplicity, 872 we assume a 16hr 'day' and that the attended use is only in the 873 'day', while unattended use is always present, having the night to 874 itself. 876 During the 'day', flows from the eighty attended users come and go, 877 with about 1 in 10 of these users actively downloading at any one time (a 878 downstream activity factor of 10%). To start with, we will further 879 assume that, when active, every user has approximately the same 880 number of flows open, whether attended or unattended. So, once all 881 flows have stabilised, at any instant TCP will ensure every user 882 (when active) gets about 10Mbps/(80*10% + 20*100%) = 357kbps of the 883 bottleneck. 885 Table 2 tabulates the salient features of this scenario. Also the 886 rightmost column shows the volume transferred per user during the 887 day, and for completeness the bottom row shows the aggregate. 889 +------------+----------+------------+--------------+---------------+ 890 | Type of | No. of | Activ- ity | Day rate | Day volume | 891 | use | users | factor | /user (16hr) | /user (16hr) | 892 +------------+----------+------------+--------------+---------------+ 893 | Attended | 80 | 10% | 357kbps | 257MB | 894 | Unattended | 20 | 100% | 357kbps | 2570MB | 895 | | | | | | 896 | Aggregate | 100 | | 10Mbps | 72GB | 897 +------------+----------+------------+--------------+---------------+ 899 Table 2: Base Scenario assuming 100% utilisation of 10Mbps bottleneck 900 and each user runs approx. equal numbers of flows with equal RTTs. 902 This scenario is not meant to be an accurate model of the current 903 Internet, for instance: 905 o Utilisation is never 100%.
907 o Upstream, not downstream, constrains most p2p apps on DSL (but not 908 all fixed & wireless access technologies). 910 o The activity factor of 10% in our base example scenario is perhaps 911 an optimistic estimate for attended use over a 16hr peak period. 912 1% is just as likely for many users (before file-sharing became 913 popular, DSL networks were provisioned for a contention ratio of 914 about 25:1, aiming to handle a peak average activity factor of 4% 915 across all user types). 917 o And rather than falling into two neat categories, users sit on a 918 wide spectrum that extends to far more extreme types in both 919 directions, while in between there are users who mix both types in 920 different proportions [Res_p2p]. 922 But the scenario has merely been chosen because it makes it simple to 923 grasp the main issues while still retaining some similarity to the 924 real Internet. We will also develop the scenario as we go, to add 925 more realism (e.g. adding mixed user types). 927 A.2. Compounding Overlooked Degrees of Freedom 929 Table 3 extends the base scenario of Appendix A to compound 930 differences in average activity factor with differences in average 931 numbers of active flows. 933 During the 'day', at any instant we assume on average that attended 934 use results in 2 flows per user (which are still only open 10% of the 935 time), while unattended use results in 100 flows per user open 936 continuously. So at any one time 2016 flows are active, 16 from 937 attended use (10%*80=8 users at any one time * 2 flows) and 2000 from 938 unattended use (20 users * 100 flows). TCP will ensure each of the 8 939 users who are active at any one time gets about 2*10Mbps/2016 = 940 9.9kbps of the bottleneck, while each of the 20 unattended users gets 941 about 100*10Mbps/2016 = 496kbps. This ignores flow start-up effects, 942 which will tend to make matters even worse for attended use, given 943 that briefer flows start more often. 945 +------------+-------+--------+---------------+----------+----------+ 946 | Type of | No. | Activ- | Ave | Day rate | Day | 947 | use | of | ity | simultaneous | /user | volume | 948 | | users | factor | flows /user | (16hr) | /user | 949 | | | | | | (16hr) | 950 +------------+-------+--------+---------------+----------+----------+ 951 | Attended | 80 | 10% | 2 | 9.9kbps | 7.1MB | 952 | Unattended | 20 | 100% | 100 | 496kbps | 3.6GB | 953 | | | | | | | 954 | Aggregate | 100 | | 2016 | 10Mbps | 72GB | 955 +------------+-------+--------+---------------+----------+----------+ 957 Table 3: Compounded scenario with attended users less frequently 958 active and running fewer flows than unattended users, assuming 100% 959 utilisation of 10Mbps bottleneck and all equal RTTs. 961 A.3. Hybrid Users 963 {ToDo:} 965 A.4. Upgrading Makes Most Users Worse Off 967 Now that the light users are only getting 9.9kbps from their 2Mbps 968 lines, the operator needs to consider upgrading their bottleneck (and 969 all the other access bottlenecks for its other customers), so it does 970 a market survey. The operator finds that fifty of the eighty light 971 users and ten of the twenty heavy users are willing to pay more to 972 get an extra 500kbps each at the bottleneck. (Note that by making a 973 smaller proportion of the heavy users willing to pay more we haven't 974 weighted the argument in our favour--in fact our argument would have 975 been even stronger the other way round.)
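Before working through the upgrade, note that the per-user rates and day volumes in Tables 2 to 5 all follow from the same few lines of arithmetic. The Python sketch below is ours and purely illustrative (the helper name 'shares' and its parameters are our own invention); it simply assumes, as this appendix does, a fully utilised bottleneck shared equally per flow, equal RTTs, and the stated user counts, activity factors and flow counts.

   def shares(capacity_bps, groups, hours=16):
       # groups: list of (name, n_users, activity_factor, flows_per_user).
       # Returns per group: (bit rate per user while active, bytes per user
       # transferred over the period), assuming per-flow sharing of a fully
       # utilised bottleneck with equal RTTs.
       total_flows = sum(n * a * f for _, n, a, f in groups)
       result = {}
       for name, n, a, f in groups:
           rate = capacity_bps * f / total_flows      # bps while active
           volume = rate * a * hours * 3600 / 8       # bytes over the 'day'
           result[name] = (rate, volume)
       return result

   # Table 2: equal flow counts per user, 10Mbps bottleneck
   #   -> ~357kbps each; ~257MB and ~2570MB per user per day.
   print(shares(10e6, [("attended", 80, 0.10, 1), ("unattended", 20, 1.0, 1)]))

   # Table 3: compounded scenario (2 flows at 10% vs 100 flows at 100%)
   #   -> ~9.9kbps and ~7MB versus ~496kbps and ~3.6GB.
   print(shares(10e6, [("attended", 80, 0.10, 2), ("unattended", 20, 1.0, 100)]))

   # Table 4: bottleneck upgraded to 40Mbps, attended activity falls to 4%
   #   -> ~40kbps and ~11MB versus ~2Mbps and ~14GB.
   print(shares(40e6, [("attended", 80, 0.04, 2), ("unattended", 20, 1.0, 100)]))

Re-running the last call with 50 attended users at 2.5% activity and 10 unattended users reproduces Table 5 (about 80kbps and 4Mbps respectively).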
977 Satisfying the sixty users who are willing to pay for a 500kbps 978 upgrade will require a 60*500kbps = 30Mbps upgrade to the bottleneck 979 and proportionate upgrades deeper into the network, which will cost 980 the ISP an extra $120 per month (say). The outcome is shown in 981 Table 4. Because the bottleneck has grown from 10Mbps to 40Mbps, the 982 bit rates in the whole scenario essentially scale up by 4x. However, 983 also notice that the total volume sent by the light users has not 984 grown by 4x. Although they can send at 4x the bit rate, which means 985 they get more done and therefore transfer more volume, they don't 986 have 4x more volume to transfer--they let their machines idle for 987 longer between transfers, which is reflected in their activity factor 988 falling from 10% to 4%. More bit rate was what they wanted, not more 989 volume particularly. 991 Let's assume the operator increases the monthly fee of all 100 992 customers by $1.20 to pay for the $120 upgrade. The light users had 993 a 9.9kbps share of the bottleneck. They've all paid their share of 994 the upgrade, but they've only got 30kbps more than they had--nothing 995 like the 500kbps upgrade most of them wanted and thought they were 996 paying for. TCP has caused each heavy user to increase the bit rate 997 of its flows by 4x too, and each has 50x more flows for 25x more of 998 the time, so they use up most of the newly provisioned capacity even 999 though only half of them were willing to pay for it. 1001 But the operator knew from its marketing that 30 of the light users 1002 and 10 of the heavy ones didn't want to pay any more anyway. Over 1003 time, the extra $1.20/month is likely to make them drift away to a 1004 competitor who runs a similar network but who decided not to upgrade 1005 its 10Mbps bottlenecks. Then the cost of the upgrade on our example 1006 network will have to be shared over 60, not 100, customers, requiring 1007 each to pay $2/month extra, rather than $1.20. 1009 +------------+-------+--------+---------------+----------+----------+ 1010 | Type of | No. | Activ- | Ave | Day rate | Day | 1011 | use | of | ity | simultaneous | /user | volume | 1012 | | users | factor | flows /user | (16hr) | /user | 1013 | | | | | | (16hr) | 1014 +------------+-------+--------+---------------+----------+----------+ 1015 | Attended | 80 | 4% | 2 | 40kbps | 11MB | 1016 | Unattended | 20 | 100% | 100 | 2.0Mbps | 14GB | 1017 | | | | | | | 1018 | Aggregate | 100 | | 2006.4 | 40Mbps | 288GB | 1019 +------------+-------+--------+---------------+----------+----------+ 1021 Table 4: Scenario with bottleneck upgraded to 40Mbps, but otherwise 1022 unchanged from compounded scenario. 1024 But perhaps losing a greater proportion of the heavy users will help? 1025 Table 5 shows the resulting shares of the bottleneck once all the 1026 cost-sensitive customers have drifted away. Bit rates have increased 1027 by another 2x, mainly because there are 2x fewer heavy users. But 1028 that still only gives the light users 80kbps when they wanted 1029 500kbps--and, to rub salt in their wounds, their monthly fees have 1030 increased by $2 in all. The remaining 10 heavy users are probably 1031 happy enough though. For the extra $2/month they get to transfer 8x 1032 more volume each (and they still have the night to themselves). 1034 We have shown how the operator might lose those customers who didn't 1035 want to pay.
But it also risks losing all fifty of those valuable 1036 light customers who were willing to pay, and who did pay, but who 1037 hardly got any benefit. In this situation, a rational operator will 1038 eventually have no choice but to stop investing in capacity, 1039 otherwise it will only be left with ten customers. 1041 +------------+-------+--------+---------------+----------+----------+ 1042 | Type of | No. | Activ- | Ave | Day rate | Day | 1043 | use | of | ity | simultaneous | /user | volume | 1044 | | users | factor | flows /user | (16hr) | /user | 1045 | | | | | | (16hr) | 1046 +------------+-------+--------+---------------+----------+----------+ 1047 | Attended | 50 | 2.5% | 2 | 80kbps | 14MB | 1048 | Unattended | 10 | 100% | 100 | 4.0Mbps | 29GB | 1049 | | | | | | | 1050 | Aggregate | 60 | | 1002.5 | 40Mbps | 288GB | 1051 +------------+-------+--------+---------------+----------+----------+ 1053 Table 5: Scenario with bottleneck upgraded to 40Mbps, but having lost 1054 customers due to extra cost; otherwise unchanged from compounded 1055 scenario. 1057 We hope the above examples have clearly illustrated two main points: 1059 o Rate equality at design time doesn't prevent extreme unfairness at 1060 run time; 1062 o If extreme unfairness is not corrected, capacity investment tends 1063 to stop--a concrete consequence of unfairness that affects 1064 everyone. 1066 Finally, note that configuration guidelines for typical p2p 1067 applications (e.g. BitTorrent calculator [az-calc]), advise a 1068 maximum number of open connections that increases roughly linearly 1069 with upstream capacity. 1071 Authors' Addresses 1073 Bob Briscoe 1074 BT & UCL 1075 B54/77, Adastral Park 1076 Martlesham Heath 1077 Ipswich IP5 3RE 1078 UK 1080 Phone: +44 1473 645196 1081 Email: bob.briscoe@bt.com 1082 URI: http://www.cs.ucl.ac.uk/staff/B.Briscoe/ 1084 Toby Moncaster 1085 BT 1086 B54/70, Adastral Park 1087 Martlesham Heath, Ipswich IP5 3RE 1088 UK 1090 Phone: +44 1473 645196 1091 Email: toby.moncaster@bt.com 1092 URI: http://research.bt.com/networks/TobyMoncaster.html 1094 Louise Burness 1095 BT 1096 B54/77, Adastral Park 1097 Martlesham Heath 1098 Ipswich IP5 3RE 1099 UK 1101 Phone: +44 1473 646504 1102 Email: Louise.Burness@bt.com 1103 URI: http://research.bt.com/networks/LouiseBurness.html 1105 Full Copyright Statement 1107 Copyright (C) The IETF Trust (2007). 1109 This document is subject to the rights, licenses and restrictions 1110 contained in BCP 78, and except as set forth therein, the authors 1111 retain all their rights. 1113 This document and the information contained herein are provided on an 1114 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1115 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 1116 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 1117 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 1118 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1119 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1121 Intellectual Property 1123 The IETF takes no position regarding the validity or scope of any 1124 Intellectual Property Rights or other rights that might be claimed to 1125 pertain to the implementation or use of the technology described in 1126 this document or the extent to which any license under such rights 1127 might or might not be available; nor does it represent that it has 1128 made any independent effort to identify any such rights. 
Information 1129 on the procedures with respect to rights in RFC documents can be 1130 found in BCP 78 and BCP 79. 1132 Copies of IPR disclosures made to the IETF Secretariat and any 1133 assurances of licenses to be made available, or the result of an 1134 attempt made to obtain a general license or permission for the use of 1135 such proprietary rights by implementers or users of this 1136 specification can be obtained from the IETF on-line IPR repository at 1137 http://www.ietf.org/ipr. 1139 The IETF invites any interested party to bring to its attention any 1140 copyrights, patents or patent applications, or other proprietary 1141 rights that may cover technology that may be required to implement 1142 this standard. Please address the information to the IETF at 1143 ietf-ipr@ietf.org. 1145 Acknowledgments 1147 Funding for the RFC Editor function is provided by the IETF 1148 Administrative Support Activity (IASA). This document was produced 1149 using xml2rfc v1.32 (of http://xml.resource.org/) from a source in 1150 RFC-2629 XML format.