Transport Area Working Group                                  B. Briscoe
Internet-Draft                                                  BT & UCL
Intended status: Informational                                A. Jacquet
Expires: September 3, 2009                                  T. Moncaster
                                                                A. Smith
                                                                      BT
                                                           March 2, 2009

 Re-ECN: The Motivation for Adding Congestion Accountability to TCP/IP
                draft-briscoe-tsvwg-re-ecn-tcp-motivation-00

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on September 3, 2009.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.

Abstract

   This document describes the motivation for a new protocol for
   explicit congestion notification (ECN), termed re-ECN, which can be
   deployed incrementally around unmodified routers.
   Re-ECN allows accurate congestion monitoring throughout the network,
   enabling the upstream party at any trust boundary in the
   internetwork to be held responsible for the congestion they cause,
   or allow to be caused.  So networks can introduce straightforward
   accountability for congestion, and policing mechanisms for incoming
   traffic from end-customers or from neighbouring network domains.  As
   well as giving the motivation for re-ECN, this document also gives
   examples of mechanisms that can use the protocol to ensure data
   sources respond correctly to congestion.  And it describes example
   mechanisms that ensure the dominant selfish strategy of both network
   domains and end-points will be to use the protocol honestly.

Authors' Statement: Status (to be removed by the RFC Editor)

   Although the re-ECN protocol is intended to make a simple but far-
   reaching change to the Internet architecture, the most immediate
   priority for the authors is to delay any move of the ECN nonce to
   Proposed Standard status.  The argument for this position is
   developed in Appendix E.

Table of Contents

   1.  Introduction
       1.1.  Motivation
       1.2.  Re-ECN Protocol in Brief
       1.3.  The Re-ECN Framework
       1.4.  Solving Hard Problems
       1.5.  The Rest of this Document
   2.  Requirements notation
   3.  Motivation
       3.1.  Policing Congestion Response
             3.1.1.  The Policing Problem
             3.1.2.  The Case Against Bottleneck Policing
   4.  Re-ECN Incentive Framework
       4.1.  Revealing Congestion Along the Path
             4.1.1.  Positive and Negative Flows
       4.2.  Incentive Framework Overview
       4.3.  Egress Dropper
       4.4.  Ingress Policing
       4.5.  Inter-domain Policing
       4.6.  Inter-domain Fail-safes
       4.7.  The Case against Classic Feedback
       4.8.  Simulations
   5.  Other Applications of Re-ECN
       5.1.  DDoS Mitigation
       5.2.  End-to-end QoS
       5.3.  Traffic Engineering
       5.4.  Inter-Provider Service Monitoring
   6.  Limitations
   7.  Incremental Deployment
       7.1.  Incremental Deployment Features
       7.2.  Incremental Deployment Incentives
   8.  Architectural Rationale
   9.  Related Work
       9.1.  Policing Rate Response to Congestion
       9.2.  Congestion Notification Integrity
       9.3.  Identifying Upstream and Downstream Congestion
   10. Security Considerations
   11. IANA Considerations
   12. Conclusions
   13. Acknowledgements
   14. Comments Solicited
   15. References
       15.1.  Normative References
       15.2.  Informative References
   Appendix A.  Example Egress Dropper Algorithm
   Appendix B.  Policer Designs to ensure Congestion Responsiveness
       B.1.  Per-user Policing
       B.2.  Per-flow Rate Policing
   Appendix C.  Downstream Congestion Metering Algorithms
       C.1.  Bulk Downstream Congestion Metering Algorithm
       C.2.  Inflation Factor for Persistently Negative Flows
   Appendix D.  Re-TTL
   Appendix E.  Argument for holding back the ECN nonce
   Authors' Addresses

1.  Introduction

   This document aims to:

   o  describe the motivation for wanting to introduce re-ECN;

   o  provide a very brief description of the protocol;

   o  outline the framework within which the protocol sits;

   o  show how a number of hard problems become much easier to solve
      once re-ECN is available in IP.

   This introduction starts with a run-through of these four points.

1.1.  Motivation

   Re-ECN is proposed as a means of allowing accurate monitoring of
   congestion throughout the Internet.  The current Internet relies on
   the vast majority of end-systems running TCP and reacting to
   detected congestion by reducing their sending rates.  Thus
   congestion control is conducted by the collaboration of the majority
   of end-systems.
   In this situation, applications that are unresponsive to congestion
   can take whatever share of bottleneck resources they want from
   responsive flows: the responsive flows reduce their sending rate in
   the face of congestion and effectively get out of the way of the
   unresponsive flows.  An increasing proportion of such applications
   could make congestion collapse more common [RFC3714].  Each network
   has no visibility of whole-path congestion and can only respond to
   congestion on a local basis.

   Using re-ECN will allow any point along a path to calculate
   congestion both upstream and downstream of that point.  As a
   consequence, policing of congestion /could/ be carried out in the
   network if end-systems fail to do so.  Re-ECN enables both flows and
   users to be policed, and allows policing to happen at network
   ingress and at network borders.

1.2.  Re-ECN Protocol in Brief

   In re-ECN each sender makes a prediction of the congestion that each
   flow will cause and signals that prediction within the IP headers of
   that flow.  The prediction is based on, but not limited to, feedback
   received from the receiver.  Sending a prediction of the congestion
   gives network equipment a view of the congestion both downstream and
   upstream.

   In order to explain this mechanism we introduce the notion of IP
   packets carrying different, notional values dependent on the state
   of their header flags:

   o  Negative - are those marked by queues when incipient congestion
      is detected.
      This is exactly the same as ECN [RFC3168];

   o  Positive - are sent by the sender in proportion to the number of
      bytes in packets that have been marked negative, according to
      feedback received from the receiver;

   o  Cautious - are sent whenever the sender cannot be sure of the
      correct amount of positive bytes to inject into the network, for
      example at the start of a flow, to indicate that feedback has not
      been established;

   o  Cancelled - packets sent by the sender as positive that get
      marked negative by queues in the network due to incipient
      congestion;

   o  Neutral - normal IP packets, which show queues that they may be
      marked negative.

   A flow starts to transmit packets.  No feedback has been
   established, so a number of cautious packets are sent (see the
   protocol definition [Re-TCP] for an analysis of how many cautious
   packets should be sent at flow start).  The rest are sent as
   neutral.

   The packets traverse a congested queue.  A fraction are marked
   negative as an indication of incipient congestion.

   The packets arrive at the receiver.  The receiver feeds back to the
   sender a count of the number of packets that have been marked
   negative.  This feedback can be provided either by the transport
   (e.g. TCP) or by higher-layer control messages.

   The sender receives the feedback and then sends a number of positive
   packets in proportion to the bytes represented by the packets that
   were marked negative.  It is important to note that congestion is
   revealed by the fraction of marked packets rather than by a field in
   the IP header.  This is due to the limited codepoints available,
   even including use of the last unallocated bit (sometimes called the
   evil bit [RFC3514]).  Full details of the codepoints used are given
   in [Re-TCP].  This lack of codepoints is a constraint of IPv4; ECN
   is similarly restricted.
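   The walk-through above can be condensed into a toy simulation.  This
   is purely illustrative and not the normative [Re-TCP] behaviour:
   marking is a coin flip, feedback is instantaneous, cautious packets
   at flow start are ignored, and every marked packet is counted as
   negative even though a marked positive packet would really become
   cancelled.

```python
import random

def run_flow(n_packets, mark_prob, seed=1):
    """Simulate one flow: queues mark packets negative with probability
    mark_prob; the receiver feeds each mark back; the sender re-echoes
    each fed-back mark as one positive packet."""
    rng = random.Random(seed)
    negative = 0   # packets marked by congested queues
    positive = 0   # packets the sender marked positive
    owed = 0       # feedback received but not yet re-echoed
    for _ in range(n_packets):
        if owed > 0:                    # sender: re-echo outstanding feedback
            positive += 1
            owed -= 1
        if rng.random() < mark_prob:    # network: congestion marking
            negative += 1
            owed += 1                   # receiver: feed the mark back

    return positive, negative
```

   With 3% marking, roughly 3% of packets end up negative and the same
   number positive, so the flow finishes with an (almost) zero balance
   of positive against negative packets, as described next.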
   The number of bytes inside the negative packets and the positive
   packets should therefore be approximately equal at the termination
   point of the flow.  To put it another way, the balance of negative
   and positive should be zero.

1.3.  The Re-ECN Framework

   The introduction of the protocol enables three things:

   o  a view of whole-path congestion;

   o  policing of flows;

   o  monitoring by networks of the flow of congestion across their
      borders.

   At any point in the network a device can calculate the upstream
   congestion from the fraction of bytes in negative packets relative
   to total packets.  It could already do this with plain ECN, by
   calculating the fraction of packets marked Congestion Experienced.

   Using re-ECN, a device in the network can calculate downstream
   congestion by subtracting the fraction of negative packets from the
   fraction of positive packets.

   A user can be restricted to causing only a certain amount of
   congestion.  A policer could be introduced at the ingress of a
   network that counts the number of positive packets being sent and
   limits the sender if that sender tries to transmit more positive
   packets than their allowance.

   A user could deliberately ignore some or all of the feedback and
   transmit packets with a zero or much lower proportion of positive
   packets than negative packets.  To solve this a dropper is proposed.
   This would be placed at the egress of a network.  If the number of
   negative packets exceeds the number of positive packets then the
   flow could be dropped or some other sanction enacted.

   Policers and droppers could be used between networks in order to
   police bulk traffic.  A whole network harbouring users causing
   congestion in downstream networks can be held responsible, or
   policed, by its downstream neighbour.

1.4.
 Solving Hard Problems

   We have already shown that, by making flows declare the level of
   congestion they are causing, they can be policed.  More
   specifically, these are the kinds of problem that can be solved:

   o  mitigating distributed denial of service (DDoS);

   o  simplifying differentiation of quality of service (QoS);

   o  policing compliance to congestion control;

   o  inter-provider service monitoring;

   o  etc.

   Uniquely, re-ECN manages to enable solutions to these problems
   without unduly stifling innovative new ways to use the Internet.
   This was a hard balance to strike, given it could be argued that
   DDoS is an innovative way to use the Internet.  The most valuable
   insight was to allow each network to choose the level of constraint
   it wishes to impose.  Also, re-ECN has been carefully designed so
   that networks that choose to use it conservatively can protect
   themselves against the congestion caused in their network by users
   on other networks with more liberal policies.

   For instance, some network owners want to block applications like
   voice and video unless their network is compensated for the extra
   share of bottleneck bandwidth taken.  These real-time applications
   tend to be unresponsive when congestion arises, whereas elastic
   TCP-based applications back away quickly, ending up taking a much
   smaller share of congested capacity for themselves.  Other network
   owners want to invest in large amounts of capacity and make their
   gains from simplicity of operation and economies of scale.

   While we have designed re-ECN so that networks can choose to deploy
   stringent policing, this does not imply we advocate that every
   network should introduce tight controls on those that cause
   congestion.  Re-ECN has been specifically designed to allow
   different networks to choose how conservative or liberal they wish
   to be with respect to policing congestion.
   But those that choose to be conservative can protect themselves from
   the excesses that liberal networks allow their users.

   Re-ECN allows the more conservative networks to police out flows
   that have not asked to be unresponsive to congestion: not because
   they are voice or video, but simply because they don't respond to
   congestion.  But it also allows other networks to choose not to
   police.  Crucially, when flows from liberal networks cross into a
   conservative network, re-ECN enables the conservative network to
   apply penalties to its neighbouring networks for the congestion they
   allow to be caused.  And these penalties can be applied to bulk
   data, without regard to flows.

   Then, if unresponsive applications become so dominant that some of
   the more liberal networks experience congestion collapse [RFC3714],
   they can change their minds and use re-ECN to apply tighter controls
   in order to bring congestion back under control.

   Re-ECN reduces the need for complex network equipment to perform
   these functions.

1.5.  The Rest of this Document

   This document is structured as follows.  First the motivation for
   the new protocol is given (Section 3), followed by the incentive
   framework that the protocol makes possible (Section 4).  Section 5
   then describes other important applications of re-ECN, such as
   policing DDoS, QoS and congestion control.  Although these
   applications do not require standardisation themselves, they are
   described in a fair degree of detail in order to explain how re-ECN
   can be used.  Given re-ECN proposes to use the last undefined bit in
   the IPv4 header, we felt it necessary to outline the potential that
   re-ECN could release in return for being given that bit.
   Deployment issues discussed throughout the document are brought
   together in Section 7, which is followed by a brief section
   explaining the somewhat subtle rationale for the design from an
   architectural perspective (Section 8).  We end by describing related
   work (Section 9), listing security considerations (Section 10) and
   finally drawing conclusions (Section 12).

2.  Requirements notation

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

   This document first specifies a protocol, then describes a framework
   that creates the right incentives to ensure compliance with the
   protocol.  This could cause confusion, because the second part of
   the document considers many cases where malicious nodes may not
   comply with the protocol.  When such contingencies are described, if
   any of the above keywords are not capitalised, that is deliberate.
   So, for instance, the following two apparently contradictory
   sentences would be perfectly consistent: i) x MUST do this; ii) x
   may not do this.

3.  Motivation

3.1.  Policing Congestion Response

3.1.1.  The Policing Problem

   The current Internet architecture trusts hosts to respond
   voluntarily to congestion.  Limited evidence shows that the large
   majority of end-points on the Internet comply with a TCP-friendly
   response to congestion.  But telephony (and increasingly video)
   services over the best-effort Internet are attracting the interest
   of major commercial operations.  Most of these applications do not
   respond to congestion at all; those that can merely switch to
   lower-rate codecs.

   Of course, the Internet is intended to support many different
   application behaviours.  But the problem is that this freedom can be
   exercised irresponsibly.
   The greater problem is that we will never be able to agree on where
   the boundary lies between responsible and irresponsible.  Therefore
   re-ECN is designed to allow different networks to set their own view
   of the limit to irresponsibility, and to allow networks that choose
   a more conservative limit to push back against congestion caused in
   more liberal networks.

   As an example of the impossibility of setting a standard for
   fairness, mandating TCP-friendliness would set the bar too high for
   unresponsive streaming media, but still some would say the bar was
   too low [relax-fairness].  Even though all known peer-to-peer
   filesharing applications are TCP-compatible, they can cause a
   disproportionate amount of congestion, simply by using multiple
   flows and by transferring data continuously relative to other
   short-lived sessions.  On the other hand, if we swung the other way
   and set the bar low enough to allow streaming media to be
   unresponsive, we would also allow denial-of-service attacks, which
   are typically unresponsive to congestion and consist of multiple
   continuous flows.

   Applications that need (or choose) to be unresponsive to congestion
   can effectively take (some would say steal) whatever share of
   bottleneck resources they want from responsive flows.  Whether or
   not such free-riding is common, the inability to prevent it
   increases the risk of poor returns for investors in network
   infrastructure, leading to under-investment.  An increasing
   proportion of unresponsive or free-riding demand coupled with
   persistent under-supply is a broken economic cycle.  Therefore, if
   the current, largely co-operative consensus continues to erode,
   congestion collapse could become more common in more areas of the
   Internet [RFC3714].
3.1.2.  The Case Against Bottleneck Policing

   The state of the art in rate policing is the bottleneck policer,
   which is intended to be deployed at any forwarding resource that may
   become congested.  Its aim is to detect flows that cause
   significantly more local congestion than others.  Although operators
   might solve their immediate problems by deploying bottleneck
   policers, we are concerned that widespread deployment would make it
   extremely hard to evolve new application behaviours.  We believe the
   IETF should offer re-ECN as the preferred protocol on which to base
   solutions to the policing problems of operators, because it would
   not harm evolvability and, frankly, it would be far more effective
   (see later for why).

   Approaches like [XCHOKe] and [pBox] are attractive ways of rate-
   policing traffic without the benefit of whole-path information (such
   as could be provided by re-ECN).  But they must be deployed at
   bottlenecks in order to work.  Unfortunately, a large proportion of
   traffic traverses at least two bottlenecks (one in each access
   network), particularly with the current traffic mix where peer-to-
   peer file-sharing is prevalent.  If ECN were deployed, we believe it
   would be likely that these bottleneck policers would be adapted to
   combine ECN congestion marking from the upstream path with local
   congestion knowledge.
   But then the only useful placement for such policers would be close
   to the egress of the internetwork.

   But then, if these bottleneck policers were widely deployed (which
   would require them to be more effective than they are now), the
   Internet would find itself with one universal rate adaptation policy
   (probably TCP-friendliness) embedded throughout the network.  Given
   TCP's congestion control algorithm is already known to be hitting
   its scalability limits, and new algorithms are being developed for
   high-speed congestion control, embedding TCP policing into the
   Internet would make evolution to new algorithms extremely painful.
   If a source wanted to use a different algorithm, it would have to
   first discover then negotiate with all the policers on its path,
   particularly those in the far access network.  The IETF has already
   travelled that path with the Intserv architecture and found it
   constrains scalability [RFC2208].

   In any case, if bottleneck policers were ever widely deployed, they
   would be likely to be bypassed by determined attackers.  They
   inherently have to police fairness per flow or per source-
   destination pair.  Therefore they can easily be circumvented either
   by opening multiple flows (by varying the end-point port number), or
   by spoofing the source address but arranging with the receiver to
   hide the true return address at a higher layer.

4.  Re-ECN Incentive Framework

   The aim is to create an incentive environment that ensures optimal
   sharing of capacity despite everyone acting selfishly (including
   lying and cheating).  Of course, the mechanisms put in place for
   this can lie dormant wherever co-operation is the norm.

4.1.  Revealing Congestion Along the Path

   Throughout this document we focus on path congestion.  But some
   forms of fairness, particularly TCP's, also depend on round-trip
   time.
   If TCP-fairness is required, we also propose to measure downstream
   path delay using re-feedback.  We give a simple outline of how this
   could work in Appendix D.  However, we do not expect this to be
   necessary, as researchers tend to agree that only congestion control
   dynamics need to depend on RTT, not the rate that the algorithm
   would converge on after a period of stability.

   Recall that re-ECN can be used to measure path congestion at any
   point on the path.  End-systems know the whole-path congestion: the
   receiver knows it from the ratio of negative packets to all other
   packets it observes, and the sender knows the same information via
   the feedback.

      +---+  +----+                +----+  +---+
      | S |--| Q1 |----------------| Q2 |--| R |
      +---+  +----+                +----+  +---+
          .      .                     .      .
        ^ .      .                     .      .
        | .      .                     .      .
        | .      .  positive fraction  .      .
     3% |------------------------------+=======
        | .      .                     |      .
     2% | .      .                     |      .
        | .      .  negative fraction  |      .
     1% | .      +---------------------+      .
        | .      |                     .      .
     0% +------------------------------------->
          ^      ^                     ^
          L      M                     N    Observation points

            Figure 1: A 2-Queue Example (Imprecise)

   Figure 1 uses a simple network to illustrate how re-ECN allows
   queues to measure downstream congestion.  The receiver counts
   negative packets as 3% of all received packets.  This fraction is
   fed back to the sender.  The sender sets 3% of its packets to be
   positive to match this.  This fraction of positive packets can be
   observed along the path, shown by the horizontal line at 3% in the
   figure.  The negative fraction is shown by the stepped line, which
   rises to meet the positive fraction line with a step at each queue
   where packets are marked negative.  Two queues are shown (Q1 and Q2)
   that are currently congested.  Each time packets pass through one of
   these queues, a fraction are marked negative: 1% at Q1 and 2% at Q2.
   The approximate downstream congestion can be measured at the
   observation points shown along the path by subtracting the negative
   fraction from the positive fraction, as shown in the table below
   ([Re-TCP] derives these approximations from a precise analysis).

      +-------------------+------------------------------+
      | Observation point | Approx downstream congestion |
      +-------------------+------------------------------+
      |         L         |         3% - 0% = 3%         |
      |         M         |         3% - 1% = 2%         |
      |         N         |         3% - 3% = 0%         |
      +-------------------+------------------------------+

      Table 1: Downstream Congestion Measured at Example Observation
                                  Points

   All along the path, whole-path congestion remains unchanged, so it
   can be used as a reference against which to compare upstream
   congestion.  The difference predicts downstream congestion for the
   rest of the path.  Therefore, measuring the fractions of negative
   and positive packets at any point in the Internet will reveal
   upstream, downstream and whole-path congestion.

   Note: to be absolutely clear, these fractions are averages that
   would result from the behaviour of the protocol handler mechanically
   sending positive packets in direct response to incoming feedback;
   we are not saying any protocol handler has to work with these
   average fractions directly.

4.1.1.  Positive and Negative Flows

   In Section 1.2 we introduced the notion of IP packets having
   different values (negative, positive, cautious, cancelled and
   neutral).  Positive and cautious packets have a value of +1,
   negative packets -1, and cancelled and neutral packets have zero
   value.

   In the rest of this document we will loosely talk of positive or
   negative flows.  A negative flow is one where more negative bytes
   than positive bytes arrive at the receiver.  Likewise, positive
   flows are those where more positive bytes arrive than negative
   bytes.
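   This bookkeeping, and the subtraction behind Table 1, can be
   expressed directly.  The sketch below is illustrative arithmetic
   only, not a protocol mechanism:

```python
def downstream_fraction(positive_fraction, negative_fraction):
    # Whole-path congestion (positive fraction) minus upstream
    # congestion (negative fraction): the approximation behind Table 1.
    return positive_fraction - negative_fraction

def flow_sign(positive_bytes, negative_bytes):
    # +1 for a positive flow, -1 for a negative flow, 0 if balanced.
    balance = positive_bytes - negative_bytes
    return (balance > 0) - (balance < 0)

# Observation points from Figure 1: the positive fraction is 3%
# everywhere; the negative fraction is 0% at L, 1% at M and 3% at N.
for point, neg in [("L", 0.00), ("M", 0.01), ("N", 0.03)]:
    print(point, f"{downstream_fraction(0.03, neg):.0%}")
```

   Running the loop reproduces the right-hand column of Table 1: 3% at
   L, 2% at M and 0% at N.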
   Both of these indicate that the wrong amount of positive bytes has
   been sent.

4.2.  Incentive Framework Overview

   Figure 2 sketches the incentive framework that we will describe
   piece by piece throughout this section.  We will do a first pass in
   overview, then return to each piece in detail.  We re-use the
   earlier example of how downstream congestion is derived by
   subtracting upstream congestion from path congestion (Figure 1), but
   depict multiple trust boundaries to turn it into an internetwork.
   For clarity, only downstream congestion is shown (the difference
   between the two earlier plots).  The graph displays the downstream
   path congestion seen in a typical flow as it traverses an example
   path from sender S to receiver R, across networks N1, N2 and N3.
   Everyone is shown using re-ECN correctly, but we intend to show why
   everyone would /choose/ to use it correctly, and honestly.

   Three main types of self-interest can be identified:

   o  Users want to transmit data across the network as fast as
      possible, paying as little as possible for the privilege.  In
      this respect, there is no distinction between senders and
      receivers, but we must be wary of potential malice by one on the
      other;

   o  Network operators want to maximise revenues from the resources
      they invest in.  They compete amongst themselves for the custom
      of users;

   o  Attackers (whether users or networks) want to use any opportunity
      to subvert the new re-ECN system for their own gain or to damage
      the service of their victims, whether targeted or random.

         policer                              dropper
            |                                    |
            |                                    |
      S <-----N1----> <---N2---> <---N3--> R          domain
                     |          |
                     |          |
                   Border Gateways

               Figure 2: Incentive Framework

   Source congestion control: We want to ensure that the sender will
      throttle its rate as downstream congestion increases.
Whatever the agreed congestion response (whether TCP-compatible or some enhanced QoS), to some extent it will always be against the sender's interest to comply.

Ingress policing: But it is in all the network operators' interests to encourage fair congestion response, so that their investments are employed to satisfy the most valuable demand. The re-ECN protocol ensures packets carry the necessary information about their own expected downstream congestion, so that N1 can deploy a policer at its ingress to check that S is complying with whatever congestion control it should be using (Section 4.4). If N1 is extremely conservative it could police each flow, but it is more likely just to police the bulk amount of congestion each customer causes, without regard to flows; if it is extremely liberal it need not police congestion control at all. Whichever it chooses, it is always preferable to police traffic at the very first ingress into an internetwork, before non-compliant traffic can cause any damage.

Edge egress dropper: If the policer ensures the source has less right to a high rate the higher it declares downstream congestion, the source has a clear incentive to understate downstream congestion. But, if flows of packets are understated when they enter the internetwork, they will have become negative by the time they leave. So, we introduce a dropper at the last network egress, which drops packets in flows that persistently declare negative downstream congestion (see Section 4.3 for details).

Inter-domain traffic policing: But next we must ask, if congestion arises downstream (say in N3), what is the ingress network's (N1's) incentive to police its customers' response? If N1 turns a blind eye, its own customers benefit while other networks suffer. This is why all inter-domain QoS architectures (e.g. Intserv, Diffserv) police traffic each time it crosses a trust boundary. We have already shown that re-ECN gives a trustworthy measure of the expected downstream congestion that a flow will cause, by subtracting negative volume from positive at any intermediate point on a path. N3 (say) can use this measure to police all the responses to congestion of all the sources beyond its upstream neighbour (N2), but in bulk, with one very simple passive mechanism, rather than per flow, as we will now explain.

Emulating policing with inter-domain congestion penalties: Between high-speed networks, we would rather avoid per-flow policing, and we would rather avoid holding back traffic while it is policed. Instead, once re-ECN has arranged headers to carry downstream congestion honestly, N2 can contract to pay N3 penalties in proportion to a single bulk count of the congestion metrics crossing their mutual trust boundary (Section 4.5). In this way, N3 puts pressure on N2 to suppress downstream congestion for every flow passing through the border interface, even though they will all start and end in different places, and even though they may all be allowed different responses to congestion. The figure depicts this downward pressure on N2 by the solid downward arrow at the egress of N2. Then N2 has an incentive either to police the congestion response of its own ingress traffic (from N1) or to emulate policing by applying penalties to N1 in turn, on the basis of congestion counted at their mutual boundary. In this recursive way, the incentives for each flow to respond correctly to congestion trace back precisely to each source, despite the mechanism not recognising flows (see Section 5.2).
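By way of illustration only (the contract form is a matter for the two networks), the bulk penalty that N3 might levy on N2 could be sketched as follows, assuming a hypothetical agreed price per byte of net congestion volume:

```python
# Illustrative sketch, not a mandated contract: the downstream network
# counts positive and negative bytes crossing the border in bulk,
# without regard to flows, and levies a penalty on its upstream
# neighbour in proportion to the difference.

def bulk_congestion_volume(packets):
    """packets: iterable of (worth, size_bytes), worth in {+1, 0, -1}."""
    return sum(worth * size for worth, size in packets)

def border_penalty(packets, price_per_byte):
    """Penalty monotonically increases with congestion volume;
    negative penalties are not allowed."""
    return max(0, bulk_congestion_volume(packets)) * price_per_byte
```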
Inter-domain congestion charging diversity: Any two networks are free to agree any of a range of penalty regimes between themselves, but these will only provide the right incentives if they stay within the following reasonable constraints: N2 should expect to have to pay penalties to N3, penalties should monotonically increase with the volume of congestion, and negative penalties are not allowed. For instance, the two networks may agree an SLA with tiered congestion thresholds, where higher penalties apply the higher the threshold that is broken. But the most obvious (and useful) form of penalty is where N3 levies a charge on N2 proportional to the volume of downstream congestion N2 dumps into N3. In the explanation that follows, we assume this specific variant of volume charging between networks -- charging proportionate to the volume of congestion.

We must make clear that we are not advocating that everyone should use this form of contract. We are well aware that the IETF tries to avoid standardising technology that depends on a particular business model. And we strongly share this desire to encourage diversity. But our aim is merely to show that border policing can at least work with this one model; then we can assume that operators might experiment with the metric in other models (see Section 4.5 for examples). Of course, operators are free to complement this usage element of their charges with traditional capacity charging, and we expect they will, as economics predicts.

No congestion charging to users: Bulk congestion penalties at trust boundaries are passive and extremely simple, and lose none of their per-packet precision from one boundary to the next (unlike Diffserv all-address traffic conditioning agreements, which dissipate their effectiveness across long topologies). But at any trust boundary, there is no imperative to use congestion charging. Traditional traffic policing can be used, if its complexity and cost are preferred. In particular, at the boundary with end-customers (e.g. between S and N1), traffic policing will most likely be more appropriate. Policer complexity is less of a concern at the edge of the network. And end-customers are known to be highly averse to the unpredictability of congestion charging.

NOTE WELL: This document neither advocates nor requires congestion charging for end-customers; it advocates, but does not require, inter-domain congestion charging.

Competitive discipline of inter-domain traffic engineering: With inter-domain congestion charging, a domain seems to have a perverse incentive to fake congestion: N2's profit depends on the difference between congestion at its ingress (its revenue) and at its egress (its cost). So, overstating internal congestion seems to increase profit. However, smart border routing [Smart_rtg] by N1 will bias its routing towards the least-cost routes. So, N2 risks losing all its revenue to competitive routes if it overstates congestion (see Section 5.3). In other words, if N2 is the least congested route, its ability to raise excess profits is limited by the congestion on the next least congested route.

Closing the loop: All the above elements conspire to trap everyone between two opposing pressures, ensuring the downstream congestion metric arrives at the destination neither above nor below zero. So, we have arrived back where we started in our argument. The ingress edge network can rely on the downstream congestion declared in the packet headers presented by the sender, so it can police the sender's congestion response accordingly.

Evolvability of congestion control: We have seen that re-ECN enables policing at the very first ingress. We have also seen that, as flows continue on their path through further networks downstream, re-ECN removes the need for further per-domain ingress policing of all the different congestion responses allowed to each different flow. This is why the evolvability of re-ECN policing is so superior to bottleneck policing or to any policing of different QoS for different flows. Even if all access networks choose conservatively to police congestion per flow, each will want to compete with the others to allow new responses to congestion for new types of application. With re-ECN, each can introduce new controls independently, without coordinating with other networks and without having to standardise anything. But, as we have just seen, by making inter-domain penalties proportionate to bulk downstream congestion, downstream networks can be agnostic to the specific congestion response for each flow, yet still apply more penalty the more liberal the ingress access network has been in the congestion response it allowed for each flow.

We now take a second pass over the incentive framework, filling in the detail.

4.3. Egress Dropper

As traffic leaves the last network before the receiver (domain N3 in Figure 2), the fraction of positive octets in a flow should match the fraction of negative octets introduced by congestion marking (red packets), leaving a balance of zero. If it is less (a negative flow), it implies that the source is understating path congestion (which will reduce the penalties that N2 owes N3).

If flows are positive, N3 need take no action---this simply means its upstream neighbour is paying more penalties than it needs to, and the source is going slower than it needs to. But, to protect itself against persistently negative flows, N3 will need to install a dropper at its egress.
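As a toy illustration of what such a dropper might do (this is not the algorithm of Appendix A; the smoothing weight and per-packet worth values are assumptions for the sketch):

```python
# Toy sketch of an egress dropper: it keeps an exponentially weighted
# moving average (EWMA) of the balance of positive minus negative
# bytes, and, while that balance is persistently negative, drops
# negative and neutral packets to remove the negative bias.

ALPHA = 0.01  # assumed EWMA smoothing weight

class EgressDropper:
    def __init__(self):
        self.balance = 0.0  # EWMA of (worth * size) per packet

    def packet(self, worth, size):
        """worth in {+1, 0, -1}; returns True if the packet is forwarded."""
        # Sanction only non-positive packets, and only while the
        # aggregate balance is negative.
        forward = not (self.balance < 0 and worth <= 0)
        self.balance += ALPHA * (worth * size - self.balance)
        return forward
```

A real dropper would also have to meet the robustness criteria listed below, in particular resistance to state exhaustion and identity whitewashing.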
Appendix A gives a suggested algorithm for this dropper. There is no intention that the dropper algorithm needs to be standardised; it is merely provided to show that an efficient, robust algorithm is possible. But whatever algorithm is used must meet the criteria below:

o  It SHOULD introduce minimal false positives for honest flows;

o  It SHOULD quickly detect and sanction dishonest flows (minimal false negatives);

o  It SHOULD be invulnerable to state exhaustion attacks from malicious sources. For instance, if the dropper uses flow state, it should not be possible for a source to send numerous packets, each with a different flow ID, to force the dropper to exhaust its memory capacity. (Rationale for SHOULD: continuously sending keep-alive packets might be perfectly reasonable behaviour, so we cannot distinguish a deliberate attack from reasonable levels of such behaviour. Therefore it is strictly impossible to be invulnerable to such an attack.)

o  It MUST introduce sufficient loss in goodput so that malicious sources cannot play off losses in the egress dropper against higher allowed throughput. Salvatori [CLoop_pol] describes this attack, which involves the source understating path congestion then inserting forward error correction (FEC) packets to compensate for expected losses;

o  It MUST NOT be vulnerable to `identity whitewashing', where a transport can label a flow with a new ID more cheaply than paying the cost of continuing to use its current ID.

Note that the dropper operates on flows, but we would like it not to require per-flow state. This is why we have been careful to ensure that all flows MUST start with a cautious packet. If a flow does not start with a cautious packet, a dropper is likely to treat it unfavourably. This risk makes it worth sending a cautious packet at the start of a flow, even though there is a cost to the sender of doing so (positive `worth'). Indeed, with cautious packets, the rate at which a sender can generate new flows can be limited (Appendix B). In this respect, cautious packets work like Handley's state set-up bit [Steps_DoS].

Appendix A also gives an example dropper implementation that aggregates flow state. Dropper algorithms will often maintain a moving average across flows of the fraction of positive packets. When maintaining an average across flows, a dropper SHOULD only allow flows into the average if they start with a cautious packet, but it SHOULD NOT include cautious packets in the positive packet average. A sender sends cautious packets when it does not have the benefit of feedback from the receiver. So, counting cautious packets would be likely to make the average unnecessarily positive, providing headroom (or should we say footroom?) for dishonest (negative) traffic.

If the dropper detects a persistently negative flow, it SHOULD drop sufficient negative and neutral packets to force the flow to not be negative. Drops SHOULD be focused on just sufficient packets in misbehaving flows to remove the negative bias while doing minimal extra harm.

4.4. Ingress Policing

Access operators who wish to limit the congestion that a sender is able to cause can deploy policers at the very first ingress to the internetwork. Re-ECN has been designed to avoid the need for bottleneck policing, so that we can avoid a future where a single rate adaptation policy is embedded throughout the network. Instead, re-ECN allows the particular rate adaptation policy to be agreed solely and bilaterally between the sender and its ingress access provider ([ref other document] discusses possible ways to signal between them), which allows congestion control to be policed but maintains its evolvability, requiring only a single, local box to be updated.

Appendix B gives examples of per-user policing algorithms. But there is no implication that these algorithms are to be standardised, or that they are ideal. The ingress rate policer is the part of the re-ECN incentive framework that is intended to be the most flexible. Once endpoint protocol handlers for re-ECN and egress droppers are in place, operators can choose exactly which congestion response they want to police, and whether they want to do it per user, per flow or not at all.

The re-ECN protocol allows these ingress policers to perform bulk per-user policing easily (Appendix B.1). This is likely to provide sufficient incentive to the user to respond correctly to congestion, without the policing function needing to be overly complex. If an access operator chose, it could use per-flow policing according to the widely adopted TCP rate adaptation (Appendix B.2), or other alternatives; however, this would introduce extra complexity to the system.

If a per-flow rate policer is used, it should use path (not downstream) congestion as the relevant metric, which is represented by the fraction of octets in packets with positive worth (positive and cautious packets) and in cancelled packets. Of course, re-ECN provides all the information a policer needs directly in the packets being policed. So, even policing TCP's AIMD algorithm is relatively straightforward (Appendix B.2).

Note that we have included cancelled packets in the measure of path congestion.
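For illustration, such a per-flow measure of path congestion might be sketched as follows (the helper is hypothetical; packet kinds are the values introduced in Section 1.2):

```python
# Illustrative sketch: path (not downstream) congestion as seen by an
# ingress policer is the fraction of bytes in positive, cautious and
# cancelled packets, relative to all bytes in the flow.

COUNTED = {"positive", "cautious", "cancelled"}

def path_congestion_fraction(packets):
    """packets: iterable of (kind, size_bytes) where kind is one of
    'positive', 'cautious', 'cancelled', 'neutral', 'negative'."""
    total = counted = 0
    for kind, size in packets:
        total += size
        if kind in COUNTED:
            counted += size
    return counted / total if total else 0.0
```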
Cancelled packets arise when the sender sends a positive packet in response to feedback, but then this positive packet just happens to be congestion marked itself. One would not normally expect many cancelled packets at the first ingress, because one would not normally expect much congestion marking to have been necessary that soon in the path. However, a home network or campus network may well sit between the sending endpoint and the ingress policer, so some congestion may occur upstream of the policer. And if congestion does occur upstream, some cancelled packets should be visible, and should be taken into account in the measure of path congestion.

But a much more important reason for including cancelled packets in the measure of path congestion at an ingress policer is that a sender might otherwise subvert the protocol by sending cancelled packets instead of neutral packets. Like neutral packets, cancelled packets are worth zero, so the sender knows they won't be counted against any quota it might have been allowed. But unlike neutral packets, cancelled packets are immune to congestion marking, because they have already been congestion marked. So, it is both correct and useful for cancelled packets to be included in a policer's measure of path congestion, as this removes the incentive the sender would otherwise have to mark more packets as cancelled than it should.

An ingress policer should also ensure that flows are not already negative when they enter the access network. As with cancelled packets, the presence of negative packets will typically be unusual. Therefore it will be easy to detect negative flows at the ingress by just detecting negative packets, then monitoring the flows they belong to.

Of course, even if the sender does operate its own network, it may arrange not to congestion mark traffic. Whether the sender does this or not is of no concern to anyone else except the sender. Such a sender will not be policed against its own network's contribution to congestion, but the only resulting problem would be overload in the sender's own network.

Finally, we must not forget that an easy way to circumvent re-ECN's defences is for the source to turn off re-ECN support, by setting the Not-RECT codepoint, implying RFC3168-compliant traffic. Therefore an ingress policer should put a general rate-limit on Not-RECT traffic, which SHOULD be lax during early, patchy deployment, but will have to become stricter as deployment widens. Similarly, flows starting without a cautious packet can be confined by a strict rate-limit used for the remainder of flows that haven't proved they are well-behaved by starting correctly (therefore they need not consume any flow state---they are just confined to the `misbehaving' bin if they carry an unrecognised flow ID).

4.5. Inter-domain Policing

One of the main design goals of re-ECN is for border security mechanisms to be as simple as possible, otherwise they will become the pinch-points that limit the scalability of the whole internetwork. We want to avoid per-flow processing at borders and to keep to passive mechanisms that can monitor traffic in parallel to forwarding, rather than having to filter traffic inline---in series with forwarding. Such passive, off-line mechanisms are essential for future high-speed all-optical border interconnection, where packets cannot be buffered while they are checked for policy compliance.

So far, we have been able to keep the border mechanisms simple, despite having had to harden them against some subtle attacks on the re-ECN design. The mechanisms are still passive and avoid per-flow processing.
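As a preview of the basic accounting mechanism described next (the naming here is our own; the draft's own pseudo-code is in Appendix C.1), a minimal sketch:

```python
# Sketch of bulk border accounting: accumulate the volume of bytes with
# positive worth (positive and cautious packets) and subtract bytes
# with negative worth (red packets), with no per-flow state at all.

class BorderMeter:
    def __init__(self):
        self.volume = 0  # net downstream congestion volume in bytes

    def observe(self, kind, size):
        if kind in ("positive", "cautious"):
            self.volume += size
        elif kind == "negative":  # congestion marked (red)
            self.volume -= size
        # neutral and cancelled packets carry zero worth

# At the end of an accounting period (say a month), the accumulated
# volume feeds whatever congestion-related penalty the two networks
# have agreed.
```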
The basic accounting mechanism at each border interface simply involves accumulating the volume of packets with positive worth (positive and cautious packets), and subtracting the volume of those with negative worth (red packets). Even though this mechanism takes no regard of flows, over an accounting period (say a month) this subtraction will account for the downstream congestion caused by all the flows traversing the interface, wherever they come from, and wherever they go to. The two networks can agree to use this metric however they wish to determine some congestion-related penalty against the upstream network. Although the algorithm could hardly be simpler, it is spelled out in pseudo-code in Appendix C.1.

Various attempts to subvert the re-ECN design have been made. In all cases their root cause is persistently negative flows. But, after describing these attacks, we will show that we do not actually have to get rid of all persistently negative flows in order to thwart the attacks.

In honest flows, downstream congestion is measured as positive minus negative volume. So if all flows are honest (i.e. not persistently negative), adding all positive volume and all negative volume without regard to flows will give an aggregate measure of downstream congestion. But such simple aggregation is only possible if no flows are persistently negative. Unless persistently negative flows are completely removed, they will reduce the aggregate measure of congestion. The aggregate may still be positive overall, but not as positive as it would have been had the negative flows been removed.

In Section 4.3 we discussed how to sanction traffic to remove, or at least to identify, persistently negative flows. But, even if the sanction for negative traffic is to discard it, unless it is discarded at the exact point it goes negative, it will wrongly subtract from aggregate downstream congestion, at least at any borders it crosses after it has gone negative but before it is discarded.

We rely on sanctions to deter dishonest understatement of congestion. But even the ultimate sanction of discard can only be effective if the sender cares whether the data gets through to its destination. A number of attacks have been identified where a sender gains from sending dummy traffic, or can attack someone or something using dummy traffic, even though it isn't communicating any information to anyone:

o  A host can send traffic with no positive packets towards its intended destination, aiming to transmit as much traffic as any dropper will allow [Bauer06]. It may add forward error correction (FEC) to repair as much drop as it experiences.

o  A host can send dummy traffic into the network with no positive packets and with no intention of communicating with anyone, merely to cause higher levels of congestion for others who do want to communicate (DoS). So, to ride over the extra congestion, everyone else has to spend more of whatever rights to cause congestion they have been allowed.

o  A network can simply create its own dummy traffic to congest another network, perhaps causing it to lose business at no cost to the attacking network. This is a form of denial of service perpetrated by one network on another. The preferential drop measures in [ref other document] provide crude protection against such attacks, but we are not overly worried about more accurate prevention measures, because it is already possible for networks to DoS other networks on the general Internet, yet they generally don't, because of the grave consequences of being found out.
We are only concerned if re-ECN increases the motivation for such an attack, as in the next example.

o  A network can just generate negative traffic and send it over its border with a neighbour to reduce the overall penalties that it should pay to that neighbour. It could even initialise the TTL so that it expired shortly after entering the neighbouring network, reducing the chance of detection further downstream. This attack need not be motivated by a desire to deny service, and indeed need not cause denial of service. A network's main motivator would most likely be to reduce the penalties it pays to a neighbour. But the prospect of financial gain might tempt the network into mounting a DoS attack on the other network as well, given the gain would offset some of the risk of being detected.

The first step towards a solution to all these problems with negative flows is to be able to estimate the contribution they make to downstream congestion at a border, and to correct the measure accordingly. Although ideally we want to remove negative flows themselves, perhaps surprisingly, the most effective first step is to cancel out the polluting effect negative flows have on the measure of downstream congestion at a border. It is more important to get an unbiased estimate of their effect than to try to remove them all. A suggested algorithm to give an unbiased estimate of the contribution from negative flows to the downstream congestion measure is given in Appendix C.2.

Although making an accurate assessment of the contribution from negative flows may not be easy, just the single step of neutralising their polluting effect on congestion metrics removes all the gains networks could otherwise make from mounting dummy traffic attacks on each other. This puts all networks on the same side (only with respect to negative flows, of course), rather than pitching them against each other. The network where a flow goes negative, as well as all the networks downstream, lose out by not being reimbursed for any congestion the flow causes, so they all have an interest in getting rid of negative flows. Networks forwarding a flow before it goes negative aren't strictly on the same side, but they are disinterested bystanders---they don't care that the flow goes negative downstream, but at least they can't actively gain from making it go negative. The problem becomes localised: once a flow goes negative, each of the networks from that point onwards downstream has a small problem, each can detect that it has a problem, and each can get rid of the problem if it chooses to. But negative flows can no longer be used for any new attacks.

Once an unbiased estimate of the effect of negative flows can be made, the problem reduces to detecting, and preferably removing, flows that have gone negative as soon as possible. But, importantly, complete eradication of negative flows is no longer critical---best endeavours will be sufficient.

For instance, let us consider the case where a source sends traffic with no positive packets at all, hoping at least to get as much traffic delivered as network-based droppers will allow. The flow is likely to go at least slightly negative in the first network on the path (N1, if we use the example network layout in Figure 2). If all networks use the algorithm in Appendix C.2 to inflate penalties at their border with an upstream network, they will remove the effect of negative flows. So, for instance, N2 will not be paying a penalty to N1 for this flow. Further, because the flow contributes no positive packets at all, a dropper at the egress will completely remove it.

The remaining problem is that every network is carrying a flow that is causing congestion to others but not being held to account for the congestion it is causing. Whenever the fail-safe border algorithm (Section 4.6) or the border algorithm that compensates for negative flows (Appendix C.2) detects a negative flow, it can instantiate a focused dropper for that flow locally. It may be some time before the flow is detected, but the more strongly negative the flow is, the more quickly it will be detected by the fail-safe algorithm. But, in the meantime, it will not be distorting border incentives. Until it is detected, if it contributes to drop anywhere, its packets will tend to be dropped before others if queues use the preferential drop rules in [ref other document], which discriminate against non-positive packets. All networks from the point where a flow goes negative onwards (N1, N2 and N3 in this case) have an incentive to remove this flow, but the queue where it first goes negative (in N1) can of course remove the problem for everyone downstream.

In the case of DDoS attacks, Section 5.1 describes how re-ECN mitigates their force.

4.6. Inter-domain Fail-safes

The mechanisms described so far create incentives for rational network operators to behave. That is, one operator aims to make another behave responsibly by applying penalties, and expects a rational response (i.e. one that trades off costs against benefits). It is usually reasonable to assume that other network operators will behave rationally (policy routing can avoid those that might not). But this approach does not protect against the misconfigurations and accidents of other operators.
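To give a flavour of the kind of border fail-safe proposed next, here is a sampling sketch; the sampling probability, byte threshold and alarm fraction are illustrative assumptions (to be determined by operational practice), and the class itself is hypothetical:

```python
# Illustrative sketch of a border fail-safe monitor: sample a small
# fraction of positive packets, then watch subsequent packets matching
# the same source, destination and DSCP for an excessive positive
# fraction. All numeric values are assumed, not specified.

import random

SAMPLE_PROB = 0.001       # probability of picking a positive packet
MIN_BYTES = 100_000       # only judge flows after enough evidence
ALARM_THRESHOLD = 0.5     # alarm if positive fraction is well above this

class FailSafeMonitor:
    def __init__(self):
        self.watched = {}  # (src, dst, dscp) -> [positive_bytes, total_bytes]

    def packet(self, src, dst, dscp, kind, size):
        key = (src, dst, dscp)
        if key in self.watched:
            counts = self.watched[key]
            counts[0] += size if kind == "positive" else 0
            counts[1] += size
            if counts[1] > MIN_BYTES and counts[0] / counts[1] > ALARM_THRESHOLD:
                return "alarm"  # raise a management alarm; MAY focus drop
        elif kind == "positive" and random.random() < SAMPLE_PROB:
            self.watched[key] = [size, size]  # start watching this flow
        return "ok"
```

The persistently-negative-flow check described below is the mirror image: sample red packets and watch for a persistently negative byte balance.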
Therefore, we propose the following two mechanisms at a network's borders to provide "defence in depth". Both are similar:

Highly positive flows: A small sample of positive packets should be picked randomly as they cross a border interface. Then subsequent packets matching the same source and destination address and DSCP should be monitored. If the fraction of positive packets is well above a threshold (to be determined by operational practice), a management alarm SHOULD be raised, and the flow MAY be automatically subject to focused drop.

Persistently negative flows: A small sample of congestion marked (red) packets should be picked randomly as they cross a border interface. Then subsequent packets matching the same source and destination address and DSCP should be monitored. If the balance of positive packets minus negative packets (measured in bytes) is persistently negative, a management alarm SHOULD be raised, and the flow MAY be automatically subject to focused drop.

Both these mechanisms rely on the fact that highly positive (or negative) flows will appear more quickly in the sample if selection is made randomly solely from positive (or negative) packets.

4.7. The Case against Classic Feedback

A system that produces an optimal outcome as a result of everyone's selfish actions is extremely powerful, especially one that enables evolvability of congestion control. But why do we have to change to re-ECN to achieve it? Can't classic congestion feedback (as used already by standard ECN) be arranged to provide similar incentives and similar evolvability? Superficially it can. Kelly's seminal work showed how we can allow everyone the freedom to evolve whatever congestion control behaviour is in their application's best interest, but still optimise the whole system of networks and users, by placing a price on congestion to ensure responsible use of this freedom [Evol_cc]. Kelly used ECN with its classic congestion feedback model as the mechanism to convey congestion price information. The mechanism could be thought of as volume charging, except only the volume of packets marked with congestion experienced (CE) was counted.

However, below we explain why relying on classic feedback /required/ congestion charging to be used, while re-ECN achieves the same powerful outcome (given it is built on Kelly's foundations) but does not /require/ congestion charging. In brief, the problem with classic feedback is that the incentives have to trace the indirect path back to the sender---the long way round the feedback loop. For example, if classic feedback were used in Figure 2, N2 would have to influence N1 via all of N3, R & S, rather than directly.

Inability to agree what is happening downstream: In order to police its upstream neighbour's congestion response, the two neighbours should be able to agree on the congestion to be responded to. Whatever the feedback regime, as packets change hands at each trust boundary, any path metrics they carry are verifiable by both neighbours. But, with a classic path metric, they can only agree on the /upstream/ path congestion.

Inaccessible back-channel: The network needs a whole-path congestion metric if it wants to control the source. Classically, whole-path congestion emerges at the destination, to be fed back from receiver to sender in a back-channel.
But, in any data network, 1150 back-channels need not be visible to relays, as they are 1151 essentially communications between the end-points. They may be 1152 encrypted, asymmetrically routed or simply omitted, so no network 1153 element can reliably intercept them. The congestion charging 1154 literature solves this problem by charging the receiver and 1155 assuming this will cause the receiver to refer the charges to the 1156 sender. But, of course, this creates unintended side-effects... 1158 `Receiver pays' unacceptable: In connectionless datagram networks, 1159 receivers and receiving networks cannot prevent reception from 1160 malicious senders, so `receiver pays' opens them to `denial of 1161 funds' attacks. 1163 End-user congestion charging unacceptable in many societies: Even if 1164 `denial of funds' were not a problem, we know that end-users are 1165 highly averse to the unpredictability of congestion charging and, 1166 anyway, we want to avoid restricting network operators to just one 1167 retail tariff. But with classic feedback only an upstream metric 1168 is available, so we cannot avoid having to wrap the `receiver 1169 pays' money flow around the feedback loop, necessarily forcing 1170 end-users to be subjected to congestion charging. 1172 To summarise so far, with classic feedback, policing congestion 1173 response without losing evolvability /requires/ congestion charging 1174 of end-users and a `receiver pays' model, whereas, with re-ECN, it is 1175 still possible to influence incentives using congestion charging, but 1176 with the safer `sender pays' model. However, congestion charging is 1177 only likely to be appropriate between domains. So, without losing 1178 evolvability, re-ECN enables technical policing mechanisms that are 1179 more appropriate for end users than congestion pricing. 1181 4.8.
Simulations 1183 Simulations of policer and dropper performance done for the multi-bit 1184 version of re-feedback have been included in section 5 "Dropper 1185 Performance" of [Re-fb]. Simulations of policer and dropper for the 1186 re-ECN version described in this document are work in progress. 1188 5. Other Applications of Re-ECN 1190 5.1. DDoS Mitigation 1192 A flooding attack is inherently about congestion of a resource. 1193 Because re-ECN ensures the sources causing network congestion 1194 experience the cost of their own actions, it acts as a first line of 1195 defence against DDoS. As load focuses on a victim, upstream queues 1196 grow, requiring honest sources to pre-load their traffic with a higher 1197 fraction of positive packets. Once downstream queues are so 1198 congested that they are dropping traffic, they will be marking 100% of 1199 the traffic they do forward negative. Honest sources will 1200 therefore be sending 100% positive packets (and so being 1201 severely rate-limited at the ingress). 1203 Senders under malicious control can either do the same as honest 1204 sources, and be rate-limited at ingress, or they can understate 1205 congestion by sending more neutral RECT packets than they should. If 1206 sources understate congestion (i.e. do not re-echo sufficient 1207 positive packets) and the preferential drop ranking is implemented on 1208 queues ([ref other document]), these queues will preserve positive 1209 traffic until last. So, the neutral traffic from malicious sources 1210 will all be automatically dropped first. Either way, the malicious 1211 sources cannot send more than honest sources. 1213 Further, hosts under malicious control will tend to be re-used for 1214 many different attacks. They will therefore build up a long term 1215 history of causing congestion.
Therefore, as long as the population 1216 of potentially compromisable hosts around the Internet is limited, 1217 the per-user policing algorithms in Appendix B.1 will gradually 1218 throttle down zombies and other launchpads for attacks. So, 1219 widespread deployment of re-ECN could considerably dampen the force 1220 of DDoS. Admittedly, zombie armies could hold their fire for long 1221 enough to be able to build up enough credit in the per-user policers 1222 to launch an attack. But they would then still be limited to no more 1223 throughput than other, honest users. 1225 Inter-domain traffic policing (see Section 4.5) ensures that any 1226 network that harbours compromised `zombie' hosts will have to bear 1227 the cost of the congestion caused by traffic from zombies in 1228 downstream networks. Such networks will be incentivised to deploy 1229 per-user policers that rate-limit hosts that are unresponsive to 1230 congestion so they can only send very slowly into congested paths. 1231 As well as protecting other networks, the extremely poor performance 1232 at any sign of congestion will incentivise the zombie's owner to 1233 clean it up. However, the host should behave normally when using 1234 uncongested paths. 1236 Uniquely, re-ECN handles DDoS traffic without relying on the validity 1237 of identifiers in packets. Certainly the egress dropper relies on 1238 uniqueness of flow identifiers, but not their validity. So if a 1239 source spoofs another address, re-ECN works just as well, as long as 1240 the attacker cannot imitate all the flow identifiers of another 1241 active flow passing through the same dropper (see Section 6). 1242 Similarly, the ingress policer relies on uniqueness of flow IDs, not 1243 their validity. A new flow will only be allowed any rate at 1244 all if it starts with a cautious packet, and the more cautious 1245 packets there are starting new flows, the more they will be limited.
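The preferential drop ranking relied on in Section 5.1 can be illustrated with a toy queue model. This is only a sketch of the principle that positive traffic is preserved until last; the marking names, packet representation and queue discipline below are our own illustrative assumptions, not specified re-ECN forwarding behaviour.

```python
from collections import deque

# Illustrative drop precedence: the lowest value is discarded first
# under congestion.  These marking names are assumptions of this sketch.
DROP_PRECEDENCE = {"negative": 0, "neutral": 1, "positive": 2}

class PreferentialDropQueue:
    """Toy FIFO that, when full, discards the packet with the lowest
    drop precedence (queued or arriving), preserving positive traffic."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet):
        """packet is a dict with 'marking' and 'id' keys.
        Returns the dropped packet, or None if nothing was dropped."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return None
        # Queue full: pick the least-protected packet as the victim.
        candidates = list(self.queue) + [packet]
        victim = min(candidates, key=lambda p: DROP_PRECEDENCE[p["marking"]])
        if victim is not packet:
            self.queue.remove(victim)
            self.queue.append(packet)
        return victim

# A congested queue sheds the neutral traffic of a congestion-
# understating source before any positive (re-echoed) traffic.
q = PreferentialDropQueue(capacity=2)
q.enqueue({"marking": "positive", "id": 1})  # honest source
q.enqueue({"marking": "neutral", "id": 2})   # understating source
dropped = q.enqueue({"marking": "positive", "id": 3})
```

Under this discipline a source that understates congestion gains nothing from its neutral packets: wherever the preferential drop ranking is deployed, they are the first casualties.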
1246 Essentially a re-ECN policer limits the bulk of all congestion 1247 entering the network through a physical interface; limiting the 1248 congestion caused by each flow is merely an optional extra. 1250 5.2. End-to-end QoS 1252 {ToDo: (Section 3.3.2 of [Re-fb] entitled `Edge QoS' gives an outline 1253 of the text that will be added here).} 1255 5.3. Traffic Engineering 1257 {ToDo: } 1259 5.4. Inter-Provider Service Monitoring 1261 {ToDo: } 1263 6. Limitations 1265 The known limitations of the re-ECN approach are: 1267 o We still cannot defend against the attack described in Section 10 1268 where a malicious source sends negative traffic through the same 1269 egress dropper as another flow and imitates its flow identifiers, 1270 causing an innocent flow to 1271 experience heavy drop. 1273 o Re-feedback for TTL (re-TTL) would also be desirable at the same 1274 time as re-ECN. Unfortunately this requires a further standards 1275 action for the mechanisms briefly described in Appendix D. 1277 o Traffic must be ECN-capable for re-ECN to be effective. The only 1278 defence against malicious users who turn off ECN capability is that 1279 networks are expected to rate limit Not-ECT traffic and to apply 1280 higher drop preference to it during congestion. Although these 1281 are blunt instruments, they at least represent a feasible scenario 1282 for the future Internet where Not-ECT traffic co-exists with re- 1283 ECN traffic, but as a severely hobbled under-class. We recommend 1284 (Section 7.1) that while accommodating a smooth initial transition 1285 to re-ECN, policing policies should gradually be tightened to rate 1286 limit Not-ECT traffic more strictly in the longer term. 1288 o When checking whether a flow is balancing positive packets with 1289 negative packets (measured in bytes), re-ECN can only account for 1290 congestion marking, not drops.
So, whenever a sender experiences 1291 drop, it does not have to re-echo the congestion event by sending 1292 positive packet(s). Nonetheless, it is hardly any advantage to be 1293 able to send faster than other flows only if your traffic is 1294 dropped and the other traffic isn't. 1296 o We are considering whether it would be useful to 1297 truncate rather than drop packets that appear to be malicious, so 1298 that the feedback loop is not broken but useful data can be 1299 removed. 1301 7. Incremental Deployment 1303 7.1. Incremental Deployment Features 1305 The design of the re-ECN protocol started from the fact that the 1306 current ECN marking behaviour of queues was sufficient and that re- 1307 feedback could be introduced around these queues by changing the 1308 sender behaviour but not the routers. Otherwise, if we had required 1309 routers to be changed, the chance of encountering a path that had 1310 every router upgraded would be vanishingly small during early 1311 deployment, giving no incentive to start deployment. Also, as there 1312 is no new forwarding behaviour, routers and hosts do not have to 1313 signal or negotiate anything. 1315 However, networks that choose to protect themselves using re-ECN do 1316 have to add new security functions at their trust boundaries with 1317 others. They distinguish legacy traffic by its ECN field. Traffic 1318 from Not-ECT transports is distinguishable by its Not-ECT marking. 1319 Traffic from RFC3168 compliant ECN transports is distinguished from 1320 re-ECN by which of ECT(0) or ECT(1) is used. We chose to use ECT(1) 1321 for re-ECN traffic deliberately. Existing ECN sources set ECT(0) on 1322 either 50% (the nonce) or 100% (the default) of packets, whereas re- 1323 ECN does not use ECT(0) at all.
We can use this distinguishing 1324 feature of RFC3168 compliant ECN traffic to separate it out for 1325 different treatment at the various border security functions: egress 1326 dropping, ingress policing and border policing. 1328 The general principle we adopt is that an egress dropper will not 1329 drop any legacy traffic, but ingress and border policers will limit 1330 the bulk rate of legacy traffic (Not-ECT, ECT(0) and those marked 1331 with the unused codepoint as defined in [Re-TCP]) that can enter each 1332 network. Then, during early re-ECN deployment, operators can set 1333 very permissive (or non-existent) rate-limits on legacy traffic, but 1334 once re-ECN implementations are generally available, legacy traffic 1335 can be rate-limited increasingly harshly. Ultimately, an operator 1336 might choose to block all legacy traffic entering its network, or at 1337 least only allow through a trickle. 1339 Then, the more strictly the limits are set, the more RFC3168 ECN 1340 sources will gain by upgrading to re-ECN. Thus, towards the end of 1341 the voluntary incremental deployment period, RFC3168 compliant 1342 transports can be given progressively stronger encouragement to 1343 upgrade.
We focus 1357 on commercial deployment incentives, although some of the arguments 1358 apply equally to academic or government sectors. 1360 ECN deployment: 1362 ECN is largely implemented in commercial routers, but generally 1363 not as a supported feature, and it has largely not been deployed 1364 by commercial network operators. ECN has been implemented in most 1365 Unix-based operating systems for some time. Microsoft first 1366 implemented ECN in Windows Vista, but it is only on by default for 1367 the server end of a TCP connection. Unfortunately the client end 1368 had to be turned off by default, because a non-zero ECN field 1369 triggers a bug in a legacy home gateway which makes it crash. For 1370 detailed deployment status, see [ECN-Deploy]. We believe the 1371 reason ECN deployment has not happened is twofold: 1373 * ECN requires changes to both routers and hosts. If someone 1374 wanted to sell the improvement that ECN offers, they would have 1375 to co-ordinate deployment of their product with others. An ECN 1376 server only gives any improvement on an ECN network. An ECN 1377 network only gives any improvement if used by ECN devices. 1378 Deployment that requires co-ordination adds cost and delay and 1379 tends to dilute any competitive advantage that might be gained. 1381 * ECN `only' gives a performance improvement. Making a product a 1382 bit faster (whether the product is a device or a network) 1383 isn't usually a sufficient selling point to be worth the cost 1384 of co-ordinating across the industry to deploy it. Network 1385 operators tend to avoid re-configuring a working network unless 1386 launching a new product. 1388 ECN and Re-ECN for Edge-to-edge Assured QoS: 1390 We believe the proposal to provide assured QoS sessions using a 1391 form of ECN called pre-congestion notification (PCN) [PCN-arch] is 1392 most likely to break the deadlock in ECN deployment first.
It 1393 only requires edge-to-edge deployment, so it does not need 1394 endpoint support. It can be deployed in a single network, then 1395 grow incrementally to interconnected networks. And it provides a 1396 different `product' (internetworked assured QoS), rather than 1397 merely making an existing product a bit faster. 1399 Not only could this assured QoS application kick-start ECN 1400 deployment, it could also carry re-ECN deployment with it, because 1401 re-ECN can enable the assured QoS region to expand to a large 1402 internetwork where neighbouring networks do not trust each other. 1403 [Re-PCN] argues that re-ECN security should be built into the QoS 1404 system from the start, explaining why and how. 1406 If ECN and re-ECN were deployed edge-to-edge for assured QoS, 1407 operators would gain valuable experience. They would also clear 1408 away many technical obstacles such as firewall configurations that 1409 block all but the RFC3168 settings of the ECN field and the RE 1410 flag. 1412 ECN in Access Networks: 1414 The next obstacle to ECN deployment would be extension to access 1415 and backhaul networks, where considerable link layer differences 1416 make implementation non-trivial, particularly on congested 1417 wireless links. ECN and re-ECN work fine during partial 1418 deployment, but they will not be very useful if the most congested 1419 elements in networks are the last to support them. Access network 1420 support is one of the weakest parts of this deployment story. All 1421 we can hope is that, once the benefits of ECN are better 1422 understood by operators, they will push for the necessary link 1423 layer implementations as deployment proceeds. 1425 Policing Unresponsive Flows: 1427 Re-ECN allows a network to offer differentiated quality of service 1428 as explained in Section 5.2. But we do not believe this will 1429 motivate initial deployment of re-ECN, because the industry is 1430 already set on alternative ways of doing QoS.
Despite being much 1431 more complicated and expensive, the alternative approaches are 1432 here and now. 1434 But re-ECN is critical to QoS deployment in another respect. It 1435 can be used to prevent applications from taking whatever bandwidth 1436 they choose without asking. 1438 Currently, applications that remain resolute in their lack of 1439 response to congestion are rewarded with the bandwidth that other 1440 TCP applications give up. In other words, TCP is naively friendly, 1441 in that it reduces its rate in response to congestion whether it is 1442 competing with friends (other TCPs) or with enemies (unresponsive 1443 applications). 1444 Therefore, those network owners that want to sell QoS will be keen 1445 to ensure that their users can't help themselves to QoS for free. 1446 Given the very large revenues at stake, we believe effective 1447 policing of congestion response will become highly sought after by 1448 network owners. 1450 But this does not necessarily argue for re-ECN deployment. 1451 Network owners might choose to deploy bottleneck policers rather 1452 than re-ECN-based policing. However, under Related Work 1453 (Section 9) we argue that bottleneck policers are inherently 1454 vulnerable to circumvention. 1456 Therefore we believe there will be a strong demand from network 1457 owners for re-ECN deployment so they can police flows that do not 1458 ask to be unresponsive to congestion, in order to protect their 1459 revenues from flows that do ask (QoS). In particular, we suspect 1460 that the operators of cellular networks will want to prevent VoIP 1461 and video applications from being used freely on their networks as a 1462 more open market develops in GPRS and 3G devices. 1464 Initial deployments are likely to be isolated to single cellular 1465 networks. Cellular operators would first place requirements on 1466 device manufacturers to include re-ECN in the standards for mobile 1467 devices. In parallel, they would put out tenders for ingress and 1468 egress policers.
Then, after a while they would start to tighten 1469 rate limits on Not-ECT traffic from non-standard devices and they 1470 would start policing whatever non-accredited applications people 1471 might install on mobile devices with re-ECN support in the 1472 operating system. This would force even independent mobile device 1473 manufacturers to provide re-ECN support. Early standardisation 1474 across the cellular operators is likely, including interconnection 1475 agreements with penalties for excess downstream congestion. 1477 We suspect some fixed broadband networks (whether cable or DSL) 1478 would follow a similar path. However, we also believe that larger 1479 parts of the fixed Internet would not choose to police on a per- 1480 flow basis. Some might choose to police congestion on a per-user 1481 basis in order to manage heavy peer-to-peer file-sharing, but it 1482 seems likely that a sizeable majority would not deploy any form of 1483 policing. 1485 This hybrid situation raises the question, "How does re-ECN work for 1486 networks that choose to use policing if they connect with others 1487 that don't?" Traffic from non-ECN capable sources will arrive 1488 from other networks and cause congestion within the policed, ECN- 1489 capable networks. So networks that choose to police congestion 1490 would rate-limit Not-ECT traffic throughout their network, 1491 particularly at their borders. They would probably also set 1492 higher usage prices in their interconnection contracts for 1493 incoming Not-ECT and Not-RECT traffic. We assume that 1494 interconnection contracts between networks in the same tier will 1495 include congestion penalties before contracts with provider 1496 backbones do. 1498 A hybrid situation could remain for all time. As was explained in 1499 the introduction, we believe in healthy competition between 1500 policing and not policing, with no imperative to convert the whole 1501 world to the religion of policing.
Networks that chose not to 1502 deploy egress droppers would leave themselves open to being 1503 congested by senders in other networks. But that would be their 1504 choice. 1506 The important aspect of the egress dropper, though, is that it chiefly 1507 protects the network that deploys it. If a network does not 1508 deploy an egress dropper, sources sending into it from other 1509 networks will be able to understate the congestion they are 1510 causing. By contrast, if a network deploys an egress dropper, it can 1511 know how much congestion other networks are dumping into it, and 1512 apply penalties or charges accordingly. So, whether or not a 1513 network polices its own sources at ingress, it is in its interests 1514 to deploy an egress dropper. 1516 Host support: 1518 In the above deployment scenario, host operating system support 1519 for re-ECN came about through the cellular operators demanding it 1520 in device standards (i.e. 3GPP). Of course, increasingly, mobile 1521 devices are being built to support multiple wireless technologies. 1522 So, if re-ECN were stipulated for cellular devices, it would 1523 automatically appear in those devices connected to the wireless 1524 fringes of fixed networks if they coupled cellular with WiFi or 1525 Bluetooth technology, for instance. Also, once implemented in the 1526 operating system of one mobile device, it would tend to be found 1527 in other devices using the same family of operating system. 1529 Therefore, whether or not a fixed network deployed ECN, or 1530 deployed re-ECN policers and droppers, many of its hosts might 1531 well be using re-ECN over it. Indeed, they would be at an 1532 advantage when communicating with hosts across re-ECN policed 1533 networks that rate limited Not-RECT traffic. 1535 Other possible scenarios: 1537 The above is thankfully not the only plausible scenario we can 1538 think of.
One of the many clubs of operators that meet regularly 1539 around the world might decide to act together to persuade a major 1540 operating system manufacturer to implement re-ECN. And they may 1541 agree between them on an interconnection model that includes 1542 congestion penalties. 1544 Re-ECN provides an interesting opportunity for device 1545 manufacturers as well as network operators. Policers can be 1546 configured loosely when first deployed. Then, as re-ECN take-up 1547 increases, they can be tightened up, so that a network with re-ECN 1548 deployed can gradually squeeze down the service provided to 1549 RFC3168 compliant devices that have not upgraded to re-ECN. Many 1550 device vendors rely on replacement sales. And operating system 1551 companies rely heavily on new release sales. Also, support 1552 services would like to be able to force stragglers to upgrade. 1553 So, the ability to throttle service to RFC3168 compliant operating 1554 systems is quite valuable. 1556 Also, policing unresponsive sources may not be the only or even 1557 the first application that drives deployment. It may be policing 1558 the causes of heavy congestion (e.g. peer-to-peer file-sharing). Or 1559 it may be mitigation of denial of service. Or we may be wrong in 1560 thinking simpler QoS will not be the initial motivation for re-ECN 1561 deployment. Indeed, the combined pressure for all these may be 1562 the motivator, but it seems optimistic to expect such a level of 1563 joined-up thinking from today's communications industry. We 1564 believe a single application alone must be a sufficient motivator. 1566 In short, everyone gains from adding accountability to TCP/IP, 1567 except the selfish or malicious. So, deployment incentives tend 1568 to be strong. 1570 8. Architectural Rationale 1572 In the Internet's technical community, the danger of not responding 1573 to congestion is well-understood, as well as its attendant risk of 1574 congestion collapse [RFC3714].
However, one side of the Internet's 1575 commercial community considers that the very essence of IP is to 1576 provide open access to the internetwork for all applications. They 1577 see congestion as a symptom of over-conservative investment, and rely 1578 on revising application designs to find novel ways to keep 1579 applications working despite congestion. They argue that the 1580 Internet was never intended to be solely for TCP-friendly 1581 applications. Meanwhile, another side of the Internet's commercial 1582 community believes that it is worthwhile providing a network for 1583 novel applications only if it has sufficient capacity, which can 1584 happen only if a greater share of application revenues can be 1585 /assured/ for the infrastructure provider. Otherwise the major 1586 investments required would carry too much risk and wouldn't happen. 1588 The lesson articulated in [Tussle] is that we shouldn't embed our 1589 view on these arguments into the Internet at design time. Instead we 1590 should design the Internet so that the outcome of these arguments can 1591 get decided at run-time. Re-ECN is designed in that spirit. Once 1592 the protocol is available, different network operators can choose how 1593 liberal they want to be in holding people accountable for the 1594 congestion they cause. Some might boldly invest in capacity and not 1595 police its use at all, hoping that novel applications will result. 1596 Others might use re-ECN for fine-grained flow policing, expecting to 1597 make money selling vertically integrated services. Yet others might 1598 sit somewhere half-way, perhaps doing coarse, per-user policing. All 1599 might change their minds later. But re-ECN always allows them to 1600 interconnect so that the careful ones can protect themselves from the 1601 liberal ones. 
1603 The incentive-based approach used for re-ECN is based on Gibbens and 1604 Kelly's arguments [Evol_cc] on allowing endpoints the freedom to 1605 evolve new congestion control algorithms for new applications. They 1606 ensured responsible behaviour despite everyone's self-interest by 1607 applying pricing to ECN marking, and Kelly had proved stability and 1608 optimality in an earlier paper. 1610 Re-ECN keeps all the underlying economic incentives, but rearranges 1611 the feedback. The idea is to allow a network operator (if it 1612 chooses) to deploy engineering mechanisms like policers at the front 1613 of the network which can be designed to behave /as if/ they are 1614 responding to congestion prices. Rather than having to subject users 1615 to congestion pricing, networks can then use more traditional 1616 charging regimes (or novel ones). But the engineering can constrain 1617 the overall amount of congestion a user can cause. This provides a 1618 buffer against completely outrageous congestion control, but still 1619 makes it easy for novel applications to evolve if they need different 1620 congestion control to the norms. It also allows novel charging 1621 regimes to evolve. 1623 Despite being achieved with a relatively minor protocol change, re- 1624 ECN is an architectural change. Previously, Internet congestion 1625 could only be controlled by the data sender, because it was the only 1626 one both in a position to control the load and in a position to see 1627 information on congestion. Re-ECN levels the playing field. It 1628 recognises that the network also has a role to play in moderating 1629 (policing) congestion control. But policing is only truly effective 1630 at the first ingress into an internetwork, whereas path congestion 1631 was previously only visible at the last egress. So, re-ECN 1632 democratises congestion information. 
Then the choice over who 1633 actually controls congestion can be made at run-time, not design 1634 time---a bit like an aircraft with dual controls. And different 1635 operators can make different choices. We believe non-architectural 1636 approaches to this problem are unlikely to offer more than partial 1637 solutions (see Section 9). 1639 Importantly, re-ECN does not require assumptions about specific 1640 congestion responses to be embedded in any network elements, except 1641 at the first ingress to the internetwork if that level of control is 1642 desired by the ingress operator. But such tight policing will be a 1643 matter of agreement between the source and its access network 1644 operator. The ingress operator need not police congestion response 1645 at flow granularity; it can simply hold a source responsible for the 1646 aggregate congestion it causes, perhaps keeping it within a monthly 1647 congestion quota. Or if the ingress network trusts the source, it 1648 can do nothing. 1650 Therefore, the aim of the re-ECN protocol is NOT solely to police 1651 TCP-friendliness. Re-ECN preserves IP as a generic network layer for 1652 all sorts of responses to congestion, for all sorts of transports. 1653 Re-ECN merely ensures truthful downstream congestion information is 1654 available in the network layer for all sorts of accountability 1655 applications. 1657 The end to end design principle does not say that all functions 1658 should be moved out of the lower layers---only those functions that 1659 are not generic to all higher layers. Re-ECN adds a function to the 1660 network layer that is generic, but was omitted: accountability for 1661 causing congestion. Accountability is not something that an end-user 1662 can provide to themselves. We believe re-ECN adds no more than is 1663 sufficient to hold each flow accountable, even if it consists of a 1664 single datagram. 
1666 "Accountability" implies being able to identify who is responsible 1667 for causing congestion. However, at the network layer it would NOT 1668 be useful to identify the cause of congestion by adding individual or 1669 organisational identity information, NOR by using source IP 1670 addresses. Rather than bringing identity information to the point of 1671 congestion, we bring downstream congestion information to the point 1672 where the cause can be most easily identified and dealt with. That 1673 is, at any trust boundary congestion can be associated with the 1674 physically connected upstream neighbour that is directly responsible 1675 for causing it (whether intentionally or not). A trust boundary 1676 interface is exactly the place to police or throttle in order to 1677 directly mitigate congestion, rather than having to trace the 1678 (ir)responsible party in order to shut them down. 1680 Some considered that ECN itself was a layering violation. The 1681 reasoning went that the interface to a layer should provide a service 1682 to the higher layer and hide how the lower layer does it. However, 1683 ECN reveals the state of the network layer and below to the transport 1684 layer. A more positive way to describe ECN is that it is like the 1685 return value of a function call to the network layer. It explicitly 1686 returns the status of the request to deliver a packet, by returning a 1687 value representing the current risk that a packet will not be served. 1688 Re-ECN has similar semantics, except the transport layer must try to 1689 guess the return value, then it can use the actual return value from 1690 the network layer to modify the next guess. 1692 The guiding principle behind all the discussion in Section 4.5 on 1693 Policing is that any gain from subverting the protocol should be 1694 precisely neutralised, rather than punished. 
If a gain is punished 1695 to a greater extent than is sufficient to neutralise it, it will most 1696 likely open up a new vulnerability, where the amplifying effect of 1697 the punishment mechanism can be turned on others. 1699 For instance, if possible, flows should be removed as soon as they go 1700 negative, but we do NOT RECOMMEND any attempts to discard such flows 1701 further upstream while they are still positive. Such over-zealous 1702 push-back is unnecessary and potentially dangerous. These flows have 1703 paid their `fare' up to the point they go negative, so there is no 1704 harm in delivering them that far. If someone downstream asks for a 1705 flow to be dropped as near to the source as possible, because they 1706 say it is going to become negative later, an upstream node cannot 1707 test the truth of this assertion. Rather than have to authenticate 1708 such messages, re-ECN has been designed so that flows can be dropped 1709 solely based on locally measurable evidence. A message hinting that 1710 a flow should be watched closely to test for negativity is fine. But 1711 not a message that claims that a positive flow will go negative 1712 later, so it should be dropped. 1714 9. Related Work 1716 {Due to lack of time, this section is incomplete. The reader is 1717 referred to the Related Work section of [Re-fb] for a brief selection 1718 of related ideas.} 1720 9.1. Policing Rate Response to Congestion 1722 ATM network elements send congestion back-pressure 1723 messages [ITU-T.I.371] along each connection, duplicating any end to 1724 end feedback because they don't trust it. On the other hand, re-ECN 1725 ensures information in forwarded packets can be used for congestion 1726 management without requiring a connection-oriented architecture, 1727 while re-using the overhead of fields that are already set aside for end to 1728 end congestion control (and routing loop detection in the case of re- 1729 TTL in Appendix D).
1731 We borrowed ideas from policers in the literature [pBox],[XCHOKe], 1732 AFD etc. for our rate equation policer. However, without the benefit 1733 of re-ECN they don't police the correct rate for the condition of 1734 their path. They detect unusually high /absolute/ rates, but only 1735 while the policer itself is congested, because they work by detecting 1736 prevalent flows in the discards from the local RED queue. These 1737 policers must sit at every potential bottleneck, whereas our policer 1738 need only be located at each ingress to the internetwork. As Floyd & 1739 Fall explain [pBox], the limitation of their approach is that a high 1740 sending rate might be perfectly legitimate, if the rest of the path 1741 is uncongested or the round trip time is short. Commercially 1742 available rate policers cap the rate of any one flow. Or they 1743 enforce monthly volume caps in an attempt to control high volume 1744 file-sharing. They limit the value a customer derives. They might 1745 also limit the congestion customers can cause, but only as an 1746 accidental side-effect. They actually punish traffic that fills 1747 troughs as much as traffic that causes peaks in utilisation. In 1748 practice network operators need to be able to allocate service by 1749 cost during congestion, and by value at other times. 1751 9.2. Congestion Notification Integrity 1753 The choice of two ECT code-points in the ECN field [RFC3168] 1754 permitted future flexibility, optionally allowing the sender to 1755 encode the experimental ECN nonce [RFC3540] in the packet stream. 1756 This mechanism has since been included in the specifications of DCCP 1757 [RFC4340]. 1759 The ECN nonce is an elegant scheme that allows the sender to detect 1760 if someone in the feedback loop - the receiver especially - tries to 1761 claim no congestion was experienced when in fact congestion led to 1762 packet drops or ECN marks. 
For each packet it sends, the sender 1763 chooses between the two ECT codepoints in a pseudo-random sequence. 1764 Then, whenever the network marks a packet with CE, if the receiver 1765 wants to deny congestion happened, she has to guess which ECT 1766 codepoint was overwritten. She has only a 50:50 chance of being 1767 correct each time she denies a congestion mark or a drop, which 1768 ultimately will give her away. 1770 The purpose of a network-layer nonce should primarily be protection 1771 of the network, while a transport-layer nonce would be better used to 1772 protect the sender from cheating receivers. Now, the assumption 1773 behind the ECN nonce is that a sender will want to detect whether a 1774 receiver is suppressing congestion feedback. This is only true if 1775 the sender's interests are aligned with the network's, or with the 1776 community of users as a whole. This may be true for certain large 1777 senders, who are under close scrutiny and have a reputation to 1778 maintain. But we have to deal with a more hostile world, where 1779 traffic may be dominated by peer-to-peer transfers, rather than 1780 downloads from a few popular sites. Often the `natural' self- 1781 interest of a sender is not aligned with the interests of other 1782 users. A sender often wants to transfer data to the receiver just as 1783 quickly as the receiver wants to receive it. 1785 In contrast, the re-ECN protocol enables policing of an agreed rate- 1786 response to congestion (e.g. TCP-friendliness) at the sender's 1787 interface with the internetwork. It also ensures downstream networks 1788 can police their upstream neighbours, to encourage them to police 1789 their users in turn. But most importantly, it requires the sender to 1790 declare path congestion to the network, and the network can remove traffic at 1791 the egress if this declaration is dishonest.
So it can police 1792 correctly, irrespective of whether the receiver tries to suppress 1793 congestion feedback or whether the sender ignores genuine congestion 1794 feedback. Therefore the re-ECN protocol addresses a much wider range 1795 of cheating problems, which includes the one addressed by the ECN 1796 nonce. 1798 9.3. Identifying Upstream and Downstream Congestion 1800 Purple [Purple] proposes that queues should use the CWR flag in the 1801 TCP header of ECN-capable flows to work out path congestion and 1802 therefore downstream congestion in a similar way to re-ECN. However, 1803 because CWR is in the transport layer, it is not always visible to 1804 network layer routers and policers. Purple's motivation was to 1805 improve AQM, not policing. But, of course, nodes trying to avoid a 1806 policer would not be expected to allow CWR to be visible. 1808 10. Security Considerations 1810 Security concerns are discussed in the protocol document. What goes 1811 here? 1813 11. IANA Considerations 1815 This memo includes no request to IANA (yet). See protocol document 1816 for discussion on possible IANA considerations. 1818 12. Conclusions 1820 {ToDo:} 1822 13. Acknowledgements 1824 Sebastien Cazalet and Andrea Soppera contributed to the idea of re- 1825 feedback. 
All the following have given helpful comments: Andrea 1826 Soppera, David Songhurst, Peter Hovell, Louise Burness, Phil Eardley, 1827 Steve Rudkin, Marc Wennink, Fabrice Saffre, Cefn Hoile, Steve Wright, 1828 John Davey, Martin Koyabe, Carla Di Cairano-Gilfedder, Alexandru 1829 Murgu, Nigel Geffen, Pete Willis, John Adams (BT), Sally Floyd 1830 (ICIR), Joe Babiarz, Kwok Ho-Chan (Nortel), Stephen Hailes, Mark 1831 Handley (who developed the attack with cancelled packets), Adam 1832 Greenhalgh (who developed the attack on DNS) (UCL), Jon Crowcroft 1833 (Uni Cam), David Clark, Bill Lehr, Sharon Gillett, Steve Bauer (who 1834 complemented our own dummy traffic attacks with others), Liz Maida 1835 (MIT), and comments from participants in the CRN/CFP Broadband and 1836 DoS-resistant Internet working groups. A special thank you to 1837 Alessandro Salvatori for coming up with fiendish attacks on re-ECN. 1839 14. Comments Solicited 1841 Comments and questions are encouraged and very welcome. They can be 1842 addressed to the IETF Transport Area working group's mailing list, 1843 and/or to the authors. 1845 15. References 1847 15.1. Normative References 1849 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1850 Requirement Levels", BCP 14, RFC 2119, March 1997. 1852 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition 1853 of Explicit Congestion Notification (ECN) to IP", 1854 RFC 3168, September 2001. 1856 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1857 Congestion Control Protocol (DCCP)", RFC 4340, March 2006. 1859 [RFC4341] Floyd, S. and E. Kohler, "Profile for Datagram Congestion 1860 Control Protocol (DCCP) Congestion Control ID 2: TCP-like 1861 Congestion Control", RFC 4341, March 2006. 1863 [RFC4342] Floyd, S., Kohler, E., and J. Padhye, "Profile for 1864 Datagram Congestion Control Protocol (DCCP) Congestion 1865 Control ID 3: TCP-Friendly Rate Control (TFRC)", RFC 4342, 1866 March 2006. 1868 15.2.
Informative References 1870 [Bauer06] Bauer, S., Faratin, P., and R. Beverly, "Assessing the 1871 assumptions underlying mechanism design for the Internet", 1872 Proc. Workshop on the Economics of Networked Systems 1873 (NetEcon06) , June 2006, . 1876 [CLoop_pol] 1877 Salvatori, A., "Closed Loop Traffic Policing", Politecnico 1878 Torino and Institut Eurecom Masters Thesis , 1879 September 2005. 1881 [ECN-Deploy] 1882 Floyd, S., "ECN (Explicit Congestion Notification) in 1883 TCP/IP; Implementation and Deployment of ECN", Web-page , 1884 May 2004, 1885 . 1887 [Evol_cc] Gibbens, R. and F. Kelly, "Resource pricing and the 1888 evolution of congestion control", Automatica 35(12)1969-- 1889 1985, December 1999, 1890 . 1892 [ITU-T.I.371] 1893 ITU-T, "Traffic Control and Congestion Control in 1894 {B-ISDN}", ITU-T Rec. I.371 (03/04), March 2004. 1896 [Jiang02] Jiang, H. and D. Dovrolis, "The Macroscopic Behavior of 1897 the TCP Congestion Avoidance Algorithm", ACM SIGCOMM 1898 CCR 32(3)75-88, July 2002, 1899 . 1901 [Mathis97] 1902 Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The 1903 Macroscopic Behavior of the TCP Congestion Avoidance 1904 Algorithm", ACM SIGCOMM CCR 27(3)67--82, July 1997, 1905 . 1907 [PCN-arch] 1908 Eardley, P., Babiarz, J., Chan, K., Charny, A., Geib, R., 1909 Karagiannis, G., Menth, M., and T. Tsou, "Pre-Congestion 1910 Notification Architecture", draft-ietf-pcn-architecture-09 1911 (work in progress), February 2008. 1913 [Purple] Pletka, R., Waldvogel, M., and S. Mannal, "PURPLE: 1914 Predictive Active Queue Management Utilizing Congestion 1915 Information", Proc. Local Computer Networks (LCN 2003) , 1916 October 2003. 1918 [RFC2208] Mankin, A., Baker, F., Braden, B., Bradner, S., O'Dell, 1919 M., Romanow, A., Weinrib, A., and L. Zhang, "Resource 1920 ReSerVation Protocol (RSVP) Version 1 Applicability 1921 Statement Some Guidelines on Deployment", RFC 2208, 1922 September 1997. 
1924 [RFC3514] Bellovin, S., "The Security Flag in the IPv4 Header", 1925 RFC 3514, April 2003. 1927 [RFC3540] Spring, N., Wetherall, D., and D. Ely, "Robust Explicit 1928 Congestion Notification (ECN) Signaling with Nonces", 1929 RFC 3540, June 2003. 1931 [RFC3714] Floyd, S. and J. Kempf, "IAB Concerns Regarding Congestion 1932 Control for Voice Traffic in the Internet", RFC 3714, 1933 March 2004. 1935 [Re-PCN] Briscoe, B., "Emulating Border Flow Policing using Re-ECN 1936 on Bulk Data", draft-briscoe-re-pcn-border-cheat-02 (work 1937 in progress), February 2008. 1939 [Re-TCP] Briscoe, B., Jacquet, A., Moncaster, T., and A. Smith, 1940 "Re-ECN: Adding Accountability for Causing Congestion to 1941 TCP/IP", draft-briscoe-tsvwg-re-ecn-tcp-06 (work in 1942 progress), July 2007. 1944 [Re-fb] Briscoe, B., Jacquet, A., Di Cairano-Gilfedder, C., 1945 Salvatori, A., Soppera, A., and M. Koyabe, "Policing 1946 Congestion Response in an Internetwork Using Re-Feedback", 1947 ACM SIGCOMM CCR 35(4)277--288, August 2005, . 1951 [Savage99] 1952 Savage, S., Cardwell, N., Wetherall, D., and T. Anderson, 1953 "TCP congestion control with a misbehaving receiver", ACM 1954 SIGCOMM CCR 29(5), October 1999, 1955 . 1957 [Smart_rtg] 1958 Goldenberg, D., Qiu, L., Xie, H., Yang, Y., and Y. Zhang, 1959 "Optimizing Cost and Performance for Multihoming", ACM 1960 SIGCOMM CCR 34(4)79--92, October 2004, 1961 . 1963 [Steps_DoS] 1964 Handley, M. and A. Greenhalgh, "Steps towards a DoS- 1965 resistant Internet Architecture", Proc. ACM SIGCOMM 1966 workshop on Future directions in network architecture 1967 (FDNA'04) pp 49--56, August 2004. 1969 [Tussle] Clark, D., Sollins, K., Wroclawski, J., and R. Braden, 1970 "Tussle in Cyberspace: Defining Tomorrow's Internet", ACM 1971 SIGCOMM CCR 32(4)347--356, October 2002, 1972 . 1975 [XCHOKe] Chhabra, P., Chuig, S., Goel, A., John, A., Kumar, A., 1976 Saran, H., and R. 
Shorey, "XCHOKe: Malicious Source 1977 Control for Congestion Avoidance at Internet Gateways", 1978 Proceedings of IEEE International Conference on Network 1979 Protocols (ICNP-02), November 2002. 1982 [pBox] Floyd, S. and K. Fall, "Promoting the Use of End-to-End 1983 Congestion Control in the Internet", IEEE/ACM Transactions 1984 on Networking 7(4) 458--472, August 1999. 1987 [relax-fairness] 1988 Briscoe, B., "Transport Protocols Don't Have To Do 1989 Fairness", draft-briscoe-tsvwg-relax-fairness-01 (work in 1990 progress), July 2008. 1992 Appendix A. Example Egress Dropper Algorithm 1994 {ToDo: Write up the basic algorithm with flow state, then the 1995 aggregated one.} 1997 Appendix B. Policer Designs to ensure Congestion Responsiveness 1999 B.1. Per-user Policing 2001 User policing requires a policer on the ingress interface of the 2002 access router associated with the user. At that point, the traffic 2003 of the user hasn't diverged on different routes yet; nor has it mixed 2004 with traffic from other sources. 2006 In order to ensure that a user doesn't generate more congestion in 2007 the network than her due share, a modified bulk token-bucket is 2008 maintained with the following parameters: 2010 o b_0 the initial token level 2012 o r the filling rate 2014 o b_max the bucket depth 2016 The same token bucket algorithm is used as in many areas of 2017 networking, but how it is used is very different: 2019 o all traffic from a user over the lifetime of their subscription is 2020 policed in the same token bucket. 2022 o only positive, cautious and 2023 cancelled packets consume tokens 2025 Such a policer will allow network operators to throttle the 2026 contribution of their users to network congestion. This will require 2027 the appropriate contractual terms to be in place between operators 2028 and users.
For instance: a condition for a user to subscribe to a 2029 given network service may be that she should not cause more than a 2030 volume C_user of congestion over a reference period T_user, although 2031 she may carry forward up to N_user times her allowance at the end of 2032 each period. These terms directly set the parameters of the user 2033 policer: 2035 o b_0 = C_user 2037 o r = C_user/T_user 2039 o b_max = b_0 * (N_user + 1) 2041 Besides the congestion budget policer above, another user policer may 2042 be necessary to further rate-limit cautious packets, if they are to 2043 be marked rather than dropped (see discussion in [ref other 2044 document]). Rate-limiting cautious packets will prevent high bursts 2045 of new flow arrivals, which is a very useful feature in DoS 2046 prevention. A condition to subscribe to a given network service 2047 would have to be that a user should not generate more than C_cautious 2048 cautious packets, over a reference period T_cautious, with no option 2049 to carry forward any of the allowance at the end of each period. 2050 These terms directly set the parameters of the cautious packet 2051 policer: 2053 o b_0 = C_cautious 2055 o r = C_cautious/T_cautious 2057 o b_max = b_0 2059 T_cautious should be a much shorter period than T_user: for instance 2060 T_cautious could be in the order of minutes while T_user could be in 2061 the order of weeks. 2063 B.2. Per-flow Rate Policing 2065 Whilst we believe that simple per-user policing would be sufficient 2066 to ensure senders comply with congestion control, some operators may 2067 wish to police the rate response of each flow to congestion as well. 2068 Although we do not believe this will be necessary, we include this 2069 section to show how one could perform per-flow policing using 2070 enforcement of TCP-fairness as an example. Per-flow policing aims to 2071 enforce congestion responsiveness on the shortest information 2072 timescale on a network path: packet roundtrips.
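Before turning to the per-flow design, the per-user policer of Appendix B.1 can be sketched in a few lines. This is an illustrative sketch only: the class, method and marking names are ours, not part of any specification; it simply maps the contractual terms (C_user, T_user, N_user) onto the bucket parameters (b_0, r, b_max) as listed above.

```python
import time

class UserCongestionPolicer:
    """Sketch of the per-user bulk token bucket of Appendix B.1.

    Tokens measure a congestion-volume allowance: only positive,
    cautious and cancelled packets consume tokens, at a cost equal
    to their size in bytes.  All of a user's traffic shares this
    one bucket for the lifetime of the subscription."""

    def __init__(self, c_user, t_user, n_user, now=None):
        # Contractual terms map directly onto the bucket parameters:
        self.b_0 = c_user                      # initial token level
        self.r = c_user / t_user               # filling rate
        self.b_max = self.b_0 * (n_user + 1)   # bucket depth
        self.level = self.b_0
        self.last = now if now is not None else time.time()

    def _refill(self, now):
        self.level = min(self.b_max,
                         self.level + self.r * (now - self.last))
        self.last = now

    def forward(self, size, marking, now=None):
        """Account for one packet; return True while the user is
        within their congestion allowance.  `marking` is one of
        'positive', 'cautious', 'cancelled', 'neutral', 'negative'
        (illustrative labels)."""
        now = now if now is not None else time.time()
        self._refill(now)
        if marking in ('positive', 'cautious', 'cancelled'):
            self.level -= size
        # An empty bucket indicates the allowance is exhausted;
        # the operator may then throttle or drop.
        return self.level >= 0
```

Passing `now` explicitly keeps the sketch deterministic for testing; a real policer would use the wall clock.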
2074 This again requires that the appropriate terms be agreed between a 2075 network operator and its users, where a congestion responsiveness 2076 policy might be required for the use of a given network service 2077 (perhaps unless the user specifically requests otherwise). 2079 As an example, we describe below how a rate adaptation policer can be 2080 designed when the applicable rate adaptation policy is TCP- 2081 compliance. In that context, the average throughput of a flow will 2082 be expected to be bounded by the value of the TCP throughput during 2083 congestion avoidance, given in Mathis' formula [Mathis97] 2085 x_TCP = k * s / ( T * sqrt(m) ) 2087 where: 2089 o x_TCP is the throughput of the TCP flow in bytes per second, 2091 o k is a constant upper-bounded by sqrt(3/2), 2093 o s is the average packet size of the flow, 2095 o T is the roundtrip time of the flow, 2097 o m is the congestion level experienced by the flow. 2099 We define the marking period N=1/m which represents the average 2100 number of packets between two positive or cancelled packets. Mathis' 2101 formula can be re-written as: 2103 x_TCP = k*s*sqrt(N)/T 2105 We can then get the average inter-mark time in a compliant TCP flow, 2106 dt_TCP, by solving (x_TCP/s)*dt_TCP = N, which gives 2108 dt_TCP = sqrt(N)*T/k 2110 We rely on this equation for the design of a rate-adaptation policer 2111 as a variation of a token bucket. In that case a policer has to be 2112 set up for each policed flow. This may be triggered by cautious 2113 packets, with the remainder of flows being all rate limited together 2114 if they do not start with a cautious packet. 2116 Where maintaining per flow state is not a problem, for instance on 2117 some access routers, systematic per-flow policing may be considered. 2118 Should per-flow state be more constrained, rate adaptation policing 2119 could be limited to a random sample of flows exhibiting positive or 2120 cancelled packets.
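To put illustrative numbers to the two formulae above, they can be evaluated directly. The path values below (1500 byte packets, 100 ms roundtrip, 1% marking) are our own illustrative choices, with k taken at its upper bound:

```python
from math import sqrt

k = sqrt(3 / 2)   # upper bound on the Mathis constant k
s = 1500.0        # average packet size in bytes (illustrative)
T = 0.1           # roundtrip time in seconds (illustrative)
m = 0.01          # congestion level: 1% of packets marked

# Mathis' formula: bound on average compliant TCP throughput
x_tcp = k * s / (T * sqrt(m))        # bytes per second

# Re-written with the marking period N = 1/m
N = 1 / m
assert abs(x_tcp - k * s * sqrt(N) / T) < 1e-6

# Average inter-mark time, from solving (x_TCP/s)*dt_TCP = N
dt_tcp = sqrt(N) * T / k             # seconds between marks
```

With these values the bound works out to roughly 180 kB/s, with a congestion mark expected about every 0.8 seconds in a compliant flow.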
2122 As in the case of user policing, only positive or cancelled packets 2123 will consume tokens; however, the amount of tokens consumed will 2124 depend on the congestion signal. 2126 When a new rate adaptation policer is set up for flow j, the 2127 following state is created: 2129 o a token bucket b_j of depth b_max starting at level b_0 2131 o a timestamp t_j = timenow() 2133 o a counter N_j = 0 2135 o a roundtrip estimate T_j 2137 o a filling rate r 2139 When the policing node forwards a packet of flow j carrying neither a 2140 positive nor a cancelled marking: 2142 o the counter is incremented: N_j += 1 2144 When the policing node forwards a packet of flow j carrying a 2145 positive or cancelled marking: 2147 o the counter is incremented: N_j += 1 2148 o the token level is adjusted: b_j += r*(timenow()-t_j) - sqrt(N_j)* 2149 T_j/k 2151 o the counter is reset: N_j = 0 2153 o the timer is reset: t_j = timenow() 2155 An implementation example will be given in a later draft that avoids 2156 having to extract the square root. 2158 Analysis: For a TCP flow, for r = 1 token/sec, on average, 2160 r*(timenow()-t_j)-sqrt(N_j)* T_j/k = dt_TCP - sqrt(N)*T/k = 0 2162 This means that the token level will fluctuate around its initial 2163 level. The depth b_max of the bucket sets the timescale on which the 2164 rate adaptation policy is performed while the filling rate r sets the 2165 trade-off between responsiveness and robustness: 2167 o the higher b_max, the longer it will take to catch greedy flows 2169 o the higher r, the fewer false positives (greedy verdict on 2170 compliant flows) but the more false negatives (compliant verdict 2171 on greedy flows) 2173 This rate adaptation policer requires the availability of a roundtrip 2174 estimate which may be obtained for instance from the application of 2175 re-feedback to the downstream delay (Appendix D) or passive estimation 2176 [Jiang02].
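The per-flow state and update rules above can be sketched as follows. This is an illustrative sketch only (names are ours): tokens accrue at rate r, and on each congestion-marked packet the level is adjusted by the stated b_j += r*(timenow()-t_j) - sqrt(N_j)*T_j/k rule, so that a compliant TCP flow leaves the level roughly constant.

```python
import time
from math import sqrt

K = sqrt(3 / 2)  # upper bound on the Mathis constant k

class FlowRatePolicer:
    """Sketch of the per-flow rate adaptation policer of
    Appendix B.2; the token adjustment is applied when a
    congestion-marked packet is forwarded."""

    def __init__(self, b_0, b_max, r, rtt_estimate, now=None):
        self.b = b_0                 # token level, starts at b_0
        self.b_max = b_max           # bucket depth
        self.r = r                   # filling rate (tokens/s)
        self.T = rtt_estimate        # roundtrip estimate T_j
        self.N = 0                   # counter N_j: packets since last mark
        self.t = now if now is not None else time.time()  # timestamp t_j

    def forward(self, marked, now=None):
        """Account for one forwarded packet of this flow and
        return the resulting token level."""
        now = now if now is not None else time.time()
        self.N += 1
        if marked:
            # b_j += r*(timenow()-t_j) - sqrt(N_j)*T_j/k
            self.b += self.r * (now - self.t) - sqrt(self.N) * self.T / K
            self.b = min(self.b, self.b_max)
            self.N = 0               # counter reset
            self.t = now             # timer reset
        return self.b
```

For a compliant flow the credit r*(now-t_j) and the debit sqrt(N_j)*T_j/k cancel on average, so a steadily falling level indicates a flow responding to congestion more aggressively than the policy allows.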
2178 When the bucket of a policer located at the access router (whether it 2179 is a per-user policer or a per-flow policer) becomes empty, the 2180 access router SHOULD drop at least all packets causing the token 2181 level to become negative. The network operator MAY take further 2182 sanctions if the token level of the per-flow policers associated with 2183 a user becomes negative. 2185 Appendix C. Downstream Congestion Metering Algorithms 2187 C.1. Bulk Downstream Congestion Metering Algorithm 2189 To meter the bulk amount of downstream congestion in traffic crossing 2190 an inter-domain border an algorithm is needed that accumulates the 2191 size of positive packets and subtracts the size of negative packets. 2192 We maintain two counters: 2194 V_b: accumulated congestion volume 2196 B: total data volume (in case it is needed) 2198 A suitable pseudo-code algorithm for a border router is as follows: 2200 ==================================================================== 2201 V_b = 0 2202 B = 0 2203 for each Re-ECN-capable packet { 2204 b = readLength(packet) /* set b to packet size */ 2205 B += b /* accumulate total volume */ 2206 if readEECN(packet) == (positive || cautious) { 2207 V_b += b /* increment... */ 2208 } elseif readEECN(packet) == negative { 2209 V_b -= b /* ...or decrement V_b... */ 2210 } /*...depending on EECN field */ 2211 } 2212 ==================================================================== 2214 At the end of an accounting period this counter V_b represents the 2215 congestion volume that penalties could be applied to, as described in 2216 Section 4.5. 2218 For instance, accumulated volume of congestion through a border 2219 interface over a month might be V_b = 5PB (petabyte = 10^15 byte). 2220 This might have resulted from an average downstream congestion level 2221 of 1% on an accumulated total data volume of B = 500PB. 2223 C.2.
Inflation Factor for Persistently Negative Flows 2225 The following process is suggested to complement the simple algorithm 2226 above in order to protect against the various attacks from 2227 persistently negative flows described in Section 4.5. As explained 2228 in that section, the most important and first step is to estimate the 2229 contribution of persistently negative flows to the bulk volume of 2230 downstream congestion and to inflate this bulk volume as if these 2231 flows weren't there. The process below has been designed to give an 2232 unbiased estimate, but it may be possible to define other processes 2233 that achieve similar ends. 2235 While the above simple metering algorithm is counting the bulk of 2236 traffic over an accounting period, the meter should also select a 2237 subset of the whole flow ID space that is small enough to measure 2238 realistically but large enough to give a representative sample. 2239 Many different samples of different subsets of the ID space should be 2240 taken at different times during the accounting period, preferably 2241 covering the whole ID space. During each sample, the meter should 2242 count the volume of positive packets and subtract the volume of 2243 negative, maintaining a separate account for each flow in the sample. 2244 Each sample should run for much longer than the large majority of flows, to avoid 2245 a bias from missing the starts and ends of flows, which tend to be 2246 positive and negative respectively. 2248 Once the accounting period finishes, the meter should calculate the 2249 total of the accounts V_{bI} for the subset of flows I in the sample, 2250 and the total of the accounts V_{fI} excluding flows with a negative 2251 account from the subset I. Then the weighted mean of all these 2252 samples should be taken: a_S = sum_{forall I} V_{fI} / sum_{forall I} 2253 V_{bI}.
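A small worked example of this estimator, with made-up per-flow accounts for two samples, shows why a_S comes out at or above 1 whenever some flows end the sample with a negative account:

```python
# Illustrative per-flow accounts (volume of positive packets minus
# volume of negative packets) for two samples I of the flow ID
# space; flow names and values are made up.
samples = [
    {"flow_a": 120, "flow_b": 40, "flow_c": -60},   # sample I1
    {"flow_d": 200, "flow_e": -20},                 # sample I2
]

# V_{bI}: total of all accounts in each sample I
v_b = [sum(acc.values()) for acc in samples]

# V_{fI}: the same total, excluding flows whose account is negative
v_f = [sum(v for v in acc.values() if v >= 0) for acc in samples]

# Weighted mean over all samples:
#   a_S = sum_I V_{fI} / sum_I V_{bI}
a_s = sum(v_f) / sum(v_b)

# The bulk counter V_b of Appendix C.1 is then inflated to a_S * V_b,
# discounting the effect of the persistently negative flows.
```

Here the two samples give V_{bI} totals of 100 and 180 but V_{fI} totals of 160 and 200, so a_S = 360/280, an inflation of roughly 29%.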
2255 If V_b is the result of the bulk accounting algorithm over the 2256 accounting period (Appendix C.1) it can be inflated by this factor 2257 a_S to get a good unbiased estimate of the volume of downstream 2258 congestion over the accounting period a_S.V_b, without being polluted 2259 by the effect of persistently negative flows. 2261 Appendix D. Re-TTL 2263 This Appendix gives an overview of a proposal to be able to overload 2264 the TTL field in the IP header to monitor downstream propagation 2265 delay. This is included to show that it would be possible to take 2266 account of RTT if it was deemed desirable. 2268 Delay re-feedback can be achieved by overloading the TTL field, 2269 without changing IP or router TTL processing. A target value for TTL 2270 at the destination would need standardising, say 16. If the path hop 2271 count increased by more than 16 during a routing change, it would 2272 temporarily be mistaken for a routing loop, so this target would need 2273 to be chosen to exceed typical hop count increases. The TCP wire 2274 protocol and handlers would need modifying to feed back the 2275 destination TTL and initialise it. It would be necessary to 2276 standardise the unit of TTL in terms of real time (as was the 2277 original intent in the early days of the Internet). 2279 In the longer term, precision could be improved if routers 2280 decremented TTL to represent exact propagation delay to the next 2281 router. That is, for a router to decrement TTL by, say, 1.8 time 2282 units it would alternate the decrement of every packet between 1 & 2 2283 at a ratio of 1:4. Although this might sometimes require a seemingly 2284 dangerous null decrement, a packet in a loop would still decrement to 2285 zero after 255 time units on average. 
As more routers were upgraded 2286 to this more accurate TTL decrement, path delay estimates would 2287 become increasingly accurate despite the presence of some RFC3168 2288 compliant routers that continued to always decrement the TTL by 1. 2290 Appendix E. Argument for holding back the ECN nonce 2292 The ECN nonce is a mechanism that allows a /sending/ transport to 2293 detect if drop or ECN marking at a congested router has been 2294 suppressed by a node somewhere in the feedback loop---another router 2295 or the receiver. 2297 Space for the ECN nonce was set aside in [RFC3168] (currently 2298 proposed standard) while the full nonce mechanism is specified in 2299 [RFC3540] (currently experimental). The specification for [RFC4340] 2300 (currently proposed standard) requires that "Each DCCP sender SHOULD 2301 set ECN Nonces on its packets...". It also mandates as a requirement 2302 for all CCID profiles that "Any newly defined acknowledgement 2303 mechanism MUST include a way to transmit ECN Nonce Echoes back to the 2304 sender.", therefore: 2306 o The CCID profile for TCP-like Congestion Control [RFC4341] 2307 (currently proposed standard) says "The sender will use the ECN 2308 Nonce for data packets, and the receiver will echo those nonces in 2309 its Ack Vectors." 2311 o The CCID profile for TCP-Friendly Rate Control (TFRC) [RFC4342] 2312 recommends that "The sender [use] Loss Intervals options' ECN 2313 Nonce Echoes (and possibly any Ack Vectors' ECN Nonce Echoes) to 2314 probabilistically verify that the receiver is correctly reporting 2315 all dropped or marked packets." 2317 The primary function of the ECN nonce is to protect the integrity of 2318 the information about congestion: ECN marks and packet drops.
2319 However, when the nonce is used to protect the integrity of 2320 information about packet drops, rather than ECN marks, a transport 2321 layer nonce will always be sufficient (because a drop loses the 2322 transport header as well as the ECN field in the network header), 2323 which would avoid using scarce IP header codepoint space. Similarly, 2324 a transport layer nonce would protect against a receiver sending 2325 early acknowledgements [Savage99]. 2327 If the ECN nonce reveals integrity problems with the information 2328 about congestion, the sending transport can use that knowledge for 2329 two functions: 2331 o to protect its own resources, by allocating them in proportion to 2332 the rates that each network path can sustain, based on congestion 2333 control, 2335 o and to protect congested routers in the network, by drastically 2336 slowing down its connection to the destination with corrupt 2337 congestion information. 2339 If the sending transport chooses to act in the interests of congested 2340 routers, it can reduce its rate if it detects that some malicious party 2341 in the feedback loop may be suppressing ECN feedback. But it would only 2342 be useful to congested routers when /all/ senders using them are 2343 trusted to act in the interest of the congested routers. 2345 In the end, the only essential use of a network layer nonce is when 2346 sending transports (e.g. large servers) want to allocate their /own/ 2347 resources in proportion to the rates that each network path can 2348 sustain, based on congestion control. In that case, the nonce allows 2349 senders to be assured that they aren't being duped into giving more 2350 of their own resources to a particular flow. And if congestion 2351 suppression is detected, the sending transport can rate limit the 2352 offending connection to protect its own resources.
Certainly, this 2353 is a useful function, but the IETF should carefully decide whether 2354 such a single, very specific case warrants IP header space. 2356 In contrast, Re-ECN allows all routers to fully protect themselves 2357 from such attacks, without having to trust anyone - senders, 2358 receivers, neighbouring networks. Re-ECN is therefore proposed in 2359 preference to the ECN nonce on the basis that it addresses the 2360 generic problem of accountability for congestion of a network's 2361 resources at the IP layer. 2363 Delaying the ECN nonce is justified because the applicability of the 2364 ECN nonce seems too limited for it to consume a two-bit codepoint in 2365 the IP header. It therefore seems prudent to give time for an 2366 alternative way to be found to do the one function the nonce is 2367 essential for. 2369 Moreover, while we have re-designed the Re-ECN codepoints so that 2370 they do not prevent the ECN nonce progressing, the same is not true 2371 the other way round. If the ECN nonce started to see some deployment 2372 (perhaps because it was blessed with proposed standard status), 2373 incremental deployment of Re-ECN would effectively be impossible, 2374 because Re-ECN marking fractions at inter-domain borders would be 2375 polluted by unknown levels of nonce traffic. 2377 The authors are aware that Re-ECN must prove it has the potential it 2378 claims if it is to displace the nonce. Therefore, every effort has 2379 been made to complete a comprehensive specification of Re-ECN so that 2380 its potential can be assessed. We therefore seek the opinion of the 2381 Internet community on whether the Re-ECN protocol is sufficiently 2382 useful to warrant standards action. 
2384 Authors' Addresses 2386 Bob Briscoe 2387 BT & UCL 2388 B54/77, Adastral Park 2389 Martlesham Heath 2390 Ipswich IP5 3RE 2391 UK 2393 Phone: +44 1473 645196 2394 Email: bob.briscoe@bt.com 2395 URI: http://www.cs.ucl.ac.uk/staff/B.Briscoe/ 2397 Arnaud Jacquet 2398 BT 2399 B54/70, Adastral Park 2400 Martlesham Heath 2401 Ipswich IP5 3RE 2402 UK 2404 Phone: +44 1473 647284 2405 Email: arnaud.jacquet@bt.com 2406 URI: 2408 Toby Moncaster 2409 BT 2410 B54/70, Adastral Park 2411 Martlesham Heath 2412 Ipswich IP5 3RE 2413 UK 2415 Phone: +44 1473 648734 2416 Email: toby.moncaster@bt.com 2418 Alan Smith 2419 BT 2420 B54/76, Adastral Park 2421 Martlesham Heath 2422 Ipswich IP5 3RE 2423 UK 2425 Phone: +44 1473 640404 2426 Email: alan.p.smith@bt.com