Transport Area Working Group                           B. Briscoe, Ed.
Internet-Draft                                               A. Jacquet
Intended status: Informational                                       BT
Expires: April 28, 2011                                    T. Moncaster
                                                          Moncaster.com
                                                                A. Smith
                                                                      BT
                                                        October 25, 2010

  Re-ECN: A Framework for adding Congestion Accountability to TCP/IP
            draft-briscoe-tsvwg-re-ecn-tcp-motivation-02

Abstract

   This document describes the framework to support a new protocol for explicit congestion notification (ECN), termed re-ECN, which can be deployed incrementally around unmodified routers.  Re-ECN allows accurate congestion monitoring throughout the network, thus enabling the upstream party at any trust boundary in the internetwork to be held responsible for the congestion they cause, or allow to be caused.  So, networks can introduce straightforward accountability for congestion and policing mechanisms for incoming traffic from end-customers or from neighbouring network domains.  As well as giving the motivation for re-ECN, this document also gives examples of mechanisms that can use the protocol to ensure data sources respond correctly to congestion.  And it describes example mechanisms that ensure the dominant selfish strategy of both network domains and end-points will be to use the protocol honestly.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 28, 2011.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the document authors.
All rights reserved. 52 This document is subject to BCP 78 and the IETF Trust's Legal 53 Provisions Relating to IETF Documents 54 (http://trustee.ietf.org/license-info) in effect on the date of 55 publication of this document. Please review these documents 56 carefully, as they describe your rights and restrictions with respect 57 to this document. Code Components extracted from this document must 58 include Simplified BSD License text as described in Section 4.e of 59 the Trust Legal Provisions and are provided without warranty as 60 described in the Simplified BSD License. 62 Table of Contents 64 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 65 1.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . . 4 66 1.2. Re-ECN Protocol in Brief . . . . . . . . . . . . . . . . . 5 67 1.3. The Re-ECN Framework . . . . . . . . . . . . . . . . . . . 6 68 1.4. Solving Hard Problems . . . . . . . . . . . . . . . . . . 7 69 1.5. The Rest of this Document . . . . . . . . . . . . . . . . 8 70 2. Requirements notation . . . . . . . . . . . . . . . . . . . . 8 71 3. Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . 9 72 3.1. Policing Congestion Response . . . . . . . . . . . . . . . 9 73 3.1.1. The Policing Problem . . . . . . . . . . . . . . . . . 9 74 3.1.2. The Case Against Bottleneck Policing . . . . . . . . . 10 75 4. Re-ECN Incentive Framework . . . . . . . . . . . . . . . . . . 11 76 4.1. Revealing Congestion Along the Path . . . . . . . . . . . 11 77 4.1.1. Positive and Negative Flows . . . . . . . . . . . . . 13 78 4.2. Incentive Framework Overview . . . . . . . . . . . . . . . 13 79 4.3. Egress Dropper . . . . . . . . . . . . . . . . . . . . . . 17 80 4.4. Ingress Policing . . . . . . . . . . . . . . . . . . . . . 19 81 4.5. Inter-domain Policing . . . . . . . . . . . . . . . . . . 21 82 4.6. Inter-domain Fail-safes . . . . . . . . . . . . . . . . . 24 83 4.7. The Case against Classic Feedback . . . . . . . . . . . . 25 84 4.8. Simulations . . . . . . . . . . . . . . . . . . . . . . . 26 85 5. Other Applications of Re-ECN . . . . . . . . . . . . . . . . . 26 86 5.1. DDoS Mitigation . . . . . . . . . . . . . . . . . . . . . 26 87 5.2. End-to-end QoS . . . . . . . . . . . . . . . . . . . . . . 28 88 5.3. Traffic Engineering . . . . . . . . . . . . . . . . . . . 28 89 5.4. Inter-Provider Service Monitoring . . . . . . . . . . . . 28 90 6. Limitations . . . . . . . . . . . . . . . . . . . . . . . . . 28 91 7. Incremental Deployment . . . . . . . . . . . . . . . . . . . . 29 92 7.1. Incremental Deployment Features . . . . . . . . . . . . . 29 93 7.2. Incremental Deployment Incentives . . . . . . . . . . . . 30 94 8. Architectural Rationale . . . . . . . . . . . . . . . . . . . 34 95 9. Related Work . . . . . . . . . . . . . . . . . . . . . . . . . 37 96 9.1. Policing Rate Response to Congestion . . . . . . . . . . . 37 97 9.2. Congestion Notification Integrity . . . . . . . . . . . . 38 98 9.3. Identifying Upstream and Downstream Congestion . . . . . . 39 99 10. Security Considerations . . . . . . . . . . . . . . . . . . . 39 100 11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 39 101 12. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 39 102 13. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 39 103 14. Comments Solicited . . . . . . . . . . . . . . . . . . . . . . 40 104 15. References . . . . . . . . . . . . . . . . . . . . . . . . . . 40 105 15.1. Normative References . . . . . . . . . . . . . . . . . . . 
40
   15.2. Informative References . . . . . . . . . . . . . . . . . .  40
   Appendix A. Example Egress Dropper Algorithm  . . . . . . . . . .  43
   Appendix B. Policer Designs to ensure Congestion Responsiveness  .  43
     B.1. Per-user Policing  . . . . . . . . . . . . . . . . . . . .  43
     B.2. Per-flow Rate Policing . . . . . . . . . . . . . . . . . .  45
   Appendix C. Downstream Congestion Metering Algorithms  . . . . . .  47
     C.1. Bulk Downstream Congestion Metering Algorithm  . . . . . .  47
     C.2. Inflation Factor for Persistently Negative Flows . . . . .  48
   Appendix D. Re-TTL  . . . . . . . . . . . . . . . . . . . . . . .  49
   Appendix E. Argument for holding back the ECN nonce . . . . . . .  49

Authors' Statement: Status (to be removed by the RFC Editor)

   Although the re-ECN protocol is intended to make a simple but far-reaching change to the Internet architecture, the most immediate priority for the authors is to delay any move of the ECN nonce to Proposed Standard status.  The argument for this position is developed in Appendix E.

1.  Introduction

   This document aims to:

   o  describe the motivation for introducing re-ECN;

   o  provide a very brief description of the protocol;

   o  set out the framework within which the protocol sits;

   o  show how a number of hard problems become much easier to solve once re-ECN is available in IP.

   This introduction starts with a run through of these four points.

1.1.  Motivation

   Re-ECN is proposed as a means of allowing accurate monitoring of congestion throughout the Internet.  The current Internet relies on the vast majority of end-systems running TCP and reacting to detected congestion by reducing their sending rates.  Thus congestion control is conducted through the collaboration of the majority of end-systems.

   In this situation it is possible for applications that are unresponsive to congestion to take whatever share of bottleneck resources they want from responsive flows: the responsive flows reduce their sending rate in the face of congestion and effectively get out of the way of the unresponsive flows.  An increasing proportion of such applications could make congestion collapse more common [RFC3714].  Each network has no visibility of whole-path congestion and can only respond to congestion on a local basis.

   Using re-ECN will allow any point along a path to calculate congestion both upstream and downstream of that point.  As a consequence, policing of congestion /could/ be carried out in the network if end-systems fail to do so.  Re-ECN enables both flows and users to be policed, and allows policing to happen at network ingress and at network borders.
1.2.  Re-ECN Protocol in Brief

   In re-ECN each sender makes a prediction of the congestion that each flow will cause and signals that prediction within the IP headers of that flow.  The prediction is based on, but not limited to, feedback received from the receiver.  Sending a prediction of the congestion gives network equipment a view of the congestion both downstream and upstream.

   In order to explain this mechanism we introduce the notion of IP packets carrying different, notional values dependent on the state of their header flags:

   o  Negative - packets marked by queues when incipient congestion is detected.  This is exactly the same as ECN [RFC3168];

   o  Positive - packets sent by the sender in proportion to the number of bytes in packets that have been marked negative, according to feedback received from the receiver;

   o  Cautious - packets sent whenever the sender cannot be sure of the correct amount of positive bytes to inject into the network, for example at the start of a flow, to indicate that feedback has not yet been established;

   o  Cancelled - packets sent by the sender as positive that get marked as negative by queues in the network due to incipient congestion;

   o  Neutral - normal IP packets, which merely show queues that they can be marked negative.

   A flow starts to transmit packets.  No feedback has been established, so a number of cautious packets are sent (see the protocol definition [Re-TCP] for an analysis of how many cautious packets should be sent at flow start).  The rest are sent as neutral.

   The packets traverse a congested queue.  A fraction are marked negative as an indication of incipient congestion.

   The packets are received by the receiver.  The receiver feeds back to the sender a count of the number of packets that have been marked negative.  This feedback can be provided either by the transport (e.g. TCP) or by higher-layer control messages.

   The sender receives the feedback and then sends a number of positive packets in proportion to the bytes represented by packets that have been marked negative.  It is important to note that congestion is revealed by the fraction of marked packets rather than by a field in the IP header.  This is due to the limited codepoints available, and it includes use of the last unallocated bit (sometimes called the evil bit [RFC3514]).  Full details of the codepoints used are given in [Re-TCP].  This lack of codepoints is, however, a constraint of IPv4; ECN is similarly restricted.

   The number of bytes inside the negative packets and positive packets should therefore be approximately equal at the termination point of the flow.  To put it another way, the balance of negative and positive should be zero.
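   As a minimal illustration of this bookkeeping (not part of the protocol specification, and with class and method names that are ours rather than taken from [Re-TCP]), the sketch below shows a sender re-echoing the negatively marked bytes reported in feedback as positive bytes, so that the flow's balance ends near zero:

   <CODE BEGINS>
   # Illustrative sketch only: sender-side re-echoing of congestion
   # feedback as positive bytes.  Names are hypothetical, not taken
   # from [Re-TCP].

   class ReEcnSenderSketch:
       def __init__(self):
           self.positive_credit = 0   # bytes still owed as positive

       def on_feedback(self, negatively_marked_bytes):
           # The receiver reports how many bytes arrived marked negative;
           # the sender owes the network the same volume of positive bytes.
           self.positive_credit += negatively_marked_bytes

       def mark_outgoing(self, packet_len, feedback_established=True):
           # Decide the notional value to send the next packet with.
           if not feedback_established:
               return "CAUTIOUS"      # no feedback yet (e.g. flow start)
           if self.positive_credit >= packet_len:
               self.positive_credit -= packet_len
               return "POSITIVE"      # re-echo earlier negative marks
           return "NEUTRAL"           # ordinary, still markable packet

   # At the end of the path the flow's balance should be roughly zero:
   #   balance = positive_bytes_received - negative_bytes_received ~= 0
   <CODE ENDS>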
1.3.  The Re-ECN Framework

   The introduction of the protocol enables three things:

   o  it gives a view of whole-path congestion;

   o  it enables policing of flows;

   o  it allows networks to monitor the flow of congestion across their borders.

   At any point in the network a device can calculate the upstream congestion from the fraction of bytes in negative packets relative to all packets.  It could already do this with ECN, by calculating the fraction of packets marked Congestion Experienced.

   Using re-ECN, a device in the network can calculate downstream congestion by subtracting the fraction of negative packets from the fraction of positive packets.

   A user can be restricted to causing only a certain amount of congestion.  A Policer could be introduced at the ingress of a network that counts the number of positive packets being sent and limits the sender if that sender tries to transmit more positive packets than their allowance.

   A user could deliberately ignore some or all of the feedback and transmit packets with a zero or much lower proportion of positive packets than negative packets.  To solve this a Dropper is proposed, placed at the egress of a network.  If the number of negative packets exceeds the number of positive packets, the flow could be dropped or some other sanction enacted.

   Policers and droppers could be used between networks in order to police bulk traffic.  A whole network harbouring users that cause congestion in downstream networks can be held responsible, or policed, by its downstream neighbour.

1.4.  Solving Hard Problems

   We have already shown that flows can be policed by making them declare the level of congestion they are causing.  More specifically, these are the kinds of problem that can be solved:

   o  mitigating distributed denial of service (DDoS);

   o  simplifying differentiation of quality of service (QoS);

   o  policing compliance to congestion control;

   o  inter-provider service monitoring;

   o  etc.

   Uniquely, re-ECN manages to enable solutions to these problems without unduly stifling innovative new ways to use the Internet.  This was a hard balance to strike, given it could be argued that DDoS is an innovative way to use the Internet.  The most valuable insight was to allow each network to choose the level of constraint it wishes to impose.  Also, re-ECN has been carefully designed so that networks that choose to use it conservatively can protect themselves against the congestion caused in their network by users on other networks with more liberal policies.

   For instance, some network owners want to block applications like voice and video unless their network is compensated for the extra share of bottleneck bandwidth taken.  These real-time applications tend to be unresponsive when congestion arises, whereas elastic TCP-based applications back away quickly, ending up taking a much smaller share of congested capacity for themselves.  Other network owners want to invest in large amounts of capacity and make their gains from simplicity of operation and economies of scale.

   While we have designed re-ECN so that networks can choose to deploy stringent policing, this does not imply we advocate that every network should introduce tight controls on those that cause congestion.  Re-ECN has been specifically designed to allow different networks to choose how conservative or liberal they wish to be with respect to policing congestion.  But those that choose to be conservative can protect themselves from the excesses that liberal networks allow their users.

   Re-ECN allows the more conservative networks to police out flows that are unresponsive to congestion---not because they are voice or video, but simply because they don't respond to congestion.  But it also allows other networks to choose not to police.

   Crucially, when flows from liberal networks cross into a conservative network, re-ECN enables the conservative network to apply penalties to its neighbouring networks for the congestion they allow to be caused.  And these penalties can be applied to bulk data, without regard to flows.

   Then, if unresponsive applications become so dominant that some of the more liberal networks experience congestion collapse [RFC3714], they can change their minds and use re-ECN to apply tighter controls in order to bring congestion back under control.

   Re-ECN reduces the need for complex network equipment to perform these functions.

1.5.  The Rest of this Document

   This document is structured as follows.
First the motivation for the 328 new protocol is given (Section 3) followed by the incentive framework 329 that is possible with the protocol Section 4. Section 5 then 330 describes other important applications re-ECN, such as policing DDoS, 331 QoS and congestion control. Although these applications do not 332 require standardisation themselves, they are described in a fair 333 degree of detail in order to explain how re-ECN can be used. Given 334 re-ECN proposes to use the last undefined bit in the IPv4 header, we 335 felt it necessary to outline the potential that re-ECN could release 336 in return for being given that bit. 338 Deployment issues discussed throughout the document are brought 339 together in Section 7, which is followed by a brief section 340 explaining the somewhat subtle rationale for the design from an 341 architectural perspective (Section 8). We end by describing related 342 work (Section 9), listing security considerations (Section 10) and 343 finally drawing conclusions (Section 12). 345 2. Requirements notation 347 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 348 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 349 document are to be interpreted as described in [RFC2119]. 351 This document first specifies a protocol, then describes a framework 352 that creates the right incentives to ensure compliance to the 353 protocol. This could cause confusion because the second part of the 354 document considers many cases where malicious nodes may not comply 355 with the protocol. When such contingencies are described, if any of 356 the above keywords are not capitalised, that is deliberate. So, for 357 instance, the following two apparently contradictory sentences would 358 be perfectly consistent: i) x MUST do this; ii) x may not do this. 360 3. Motivation 362 3.1. Policing Congestion Response 364 3.1.1. The Policing Problem 366 The current Internet architecture trusts hosts to respond voluntarily 367 to congestion. Limited evidence shows that the large majority of 368 end-points on the Internet comply with a TCP-friendly response to 369 congestion. But telephony (and increasingly video) services over the 370 best effort Internet are attracting the interest of major commercial 371 operations. Most of these applications do not respond to congestion 372 at all. Those that can switch to lower rate codecs. 374 Of course, the Internet is intended to support many different 375 application behaviours. But the problem is that this freedom can be 376 exercised irresponsibly. The greater problem is that we will never 377 be able to agree on where the boundary is between responsible and 378 irresponsible. Therefore re-ECN is designed to allow different 379 networks to set their own view of the limit to irresponsibility, and 380 to allow networks that choose a more conservative limit to push back 381 against congestion caused in more liberal networks. 383 As an example of the impossibility of setting a standard for 384 fairness, mandating TCP-friendliness would set the bar too high for 385 unresponsive streaming media, but still some would say the bar was 386 too low [relax-fairness]. Even though all known peer-to-peer 387 filesharing applications are TCP-compatible, they can cause a 388 disproportionate amount of congestion, simply by using multiple flows 389 and by transferring data continuously relative to other short-lived 390 sessions. 
On the other hand, if we swung the other way and set the 391 bar low enough to allow streaming media to be unresponsive, we would 392 also allow denial of service attacks, which are typically 393 unresponsive to congestion and consist of multiple continuous flows. 395 Applications that need (or choose) to be unresponsive to congestion 396 can effectively take (some would say steal) whatever share of 397 bottleneck resources they want from responsive flows. Whether or not 398 such free-riding is common, inability to prevent it increases the 399 risk of poor returns for investors in network infrastructure, leading 400 to under-investment. An increasing proportion of unresponsive or 401 free-riding demand coupled with persistent under-supply is a broken 402 economic cycle. Therefore, if the current, largely co-operative 403 consensus continues to erode, congestion collapse could become more 404 common in more areas of the Internet [RFC3714]. 406 While we have designed re-ECN so that networks can choose to deploy 407 stringent policing, this does not imply we advocate that every 408 network should introduce tight controls on those that cause 409 congestion. Re-ECN has been specifically designed to allow different 410 networks to choose how conservative or liberal they wish to be with 411 respect to policing congestion. But those that choose to be 412 conservative can protect themselves from the excesses that liberal 413 networks allow their users. 415 3.1.2. The Case Against Bottleneck Policing 417 The state of the art in rate policing is the bottleneck policer, 418 which is intended to be deployed at any forwarding resource that may 419 become congested. Its aim is to detect flows that cause 420 significantly more local congestion than others. Although operators 421 might solve their immediate problems by deploying bottleneck 422 policers, we are concerned that widespread deployment would make it 423 extremely hard to evolve new application behaviours. We believe the 424 IETF should offer re-ECN as the preferred protocol on which to base 425 solutions to the policing problems of operators, because it would not 426 harm evolvability and, frankly, it would be far more effective (see 427 later for why). 429 Approaches like [XCHOKe] & [pBox] are nice approaches for rate 430 policing traffic without the benefit of whole path information (such 431 as could be provided by re-ECN). But they must be deployed at 432 bottlenecks in order to work. Unfortunately, a large proportion of 433 traffic traverses at least two bottlenecks (in two access networks), 434 particularly with the current traffic mix where peer-to-peer file- 435 sharing is prevalent. If ECN were deployed, we believe it would be 436 likely that these bottleneck policers would be adapted to combine ECN 437 congestion marking from the upstream path with local congestion 438 knowledge. But then the only useful placement for such policers 439 would be close to the egress of the internetwork. 441 But then, if these bottleneck policers were widely deployed (which 442 would require them to be more effective than they are now), the 443 Internet would find itself with one universal rate adaptation policy 444 (probably TCP-friendliness) embedded throughout the network. Given 445 TCP's congestion control algorithm is already known to be hitting its 446 scalability limits and new algorithms are being developed for high- 447 speed congestion control, embedding TCP policing into the Internet 448 would make evolution to new algorithms extremely painful. 
If a source wanted to use a different algorithm, it would have to first discover then negotiate with all the policers on its path, particularly those in the far access network.  The IETF has already travelled that path with the Intserv architecture and found it constrains scalability [RFC2208].

   Anyway, if bottleneck policers were ever widely deployed, they would be likely to be bypassed by determined attackers.  They inherently have to police fairness per flow or per source-destination pair.  Therefore they can easily be circumvented either by opening multiple flows (by varying the end-point port number), or by spoofing the source address but arranging with the receiver to hide the true return address at a higher layer.

4.  Re-ECN Incentive Framework

   The aim is to create an incentive environment that ensures optimal sharing of capacity despite everyone acting selfishly (including lying and cheating).  Of course, the mechanisms put in place for this can lie dormant wherever co-operation is the norm.

4.1.  Revealing Congestion Along the Path

   Throughout this document we focus on path congestion.  But some forms of fairness, particularly TCP's, also depend on round trip time.  If TCP-fairness is required, we also propose to measure downstream path delay using re-feedback.  We give a simple outline of how this could work in Appendix D.  However, we do not expect this to be necessary, as researchers tend to agree that only congestion control dynamics need to depend on RTT, not the rate that the algorithm would converge on after a period of stability.

   Recall that re-ECN can be used to measure path congestion at any point on the path.  End-systems know the whole path congestion.  The receiver knows this by the ratio of negative packets to all other packets it observes.  The sender knows this same information via the feedback.

      +---+  +----+                  +----+  +---+
      | S |--| Q1 |------------------| Q2 |--| R |
      +---+  +----+                  +----+  +---+
           .       .                 .        .
    ^      .       .                 .        .
    |      .       .                 .        .
    |      .       .  positive fraction       .
 3% |--------------------------------+========
    |      .       .                 |        .
 2% |      .       .                 |        .
    |      .       . negative fraction        .
 1% |      .       +-----------------+        .
    |      .       |                 .        .
 0% +------------------------------------------>
           ^             ^                 ^
           L             M                 N     Observation points

             Figure 1: A 2-Queue Example (Imprecise)

   Figure 1 uses a simple network to illustrate how re-ECN allows queues to measure downstream congestion.  The receiver counts negative packets as being 3% of all received packets.  This fraction is fed back to the sender.  The sender sets 3% of its packets to be positive to match this.  This fraction of positive packets can be observed along the path, shown by the horizontal line at 3% in the figure.  The negative fraction is shown by the stepped line which rises to meet the positive fraction line, with a step at each queue where packets are marked negative.  Two queues are shown (Q1 and Q2) that are currently congested.  Each time packets pass through one of these queues, a fraction is marked negative (1% at Q1 and 2% at Q2).  The approximate downstream congestion can be measured at the observation points shown along the path by subtracting the negative fraction from the positive fraction, as shown in the table below ([Re-TCP] [ref other document] derives these approximations from a precise analysis).

          +-------------------+------------------------------+
          | Observation point | Approx downstream congestion |
          +-------------------+------------------------------+
          |         L         |         3% - 0% = 3%         |
          |         M         |         3% - 1% = 2%         |
          |         N         |         3% - 3% = 0%         |
          +-------------------+------------------------------+

     Table 1: Downstream Congestion Measured at Example Observation Points

   All along the path, whole-path congestion remains unchanged, so it can be used as a reference against which to compare upstream congestion.  The difference predicts downstream congestion for the rest of the path.  Therefore, measuring the fractions of negative and positive packets at any point in the Internet will reveal upstream, downstream and whole-path congestion.

   Note: to be absolutely clear, these fractions are averages that would result from the behaviour of the protocol handler mechanically sending positive packets in direct response to incoming feedback---we are not saying any protocol handler has to work with these average fractions directly.
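   To make the observation-point arithmetic concrete, the sketch below computes upstream, downstream and whole-path congestion from the fractions of positive and negative bytes seen at a monitoring point, reproducing the figures in Table 1.  It is purely illustrative; the function and variable names are ours, not drawn from [Re-TCP].

   <CODE BEGINS>
   # Illustrative sketch: congestion visible at an observation point.
   # Fractions are of bytes in positive/negative packets relative to
   # all bytes observed.

   def congestion_view(positive_fraction, negative_fraction):
       whole_path = positive_fraction           # declared by the sender
       upstream   = negative_fraction           # marked so far
       downstream = whole_path - upstream       # predicted for the rest
       return upstream, downstream, whole_path

   # The observation points of Figure 1 / Table 1:
   for point, pos, neg in [("L", 0.03, 0.00),
                           ("M", 0.03, 0.01),
                           ("N", 0.03, 0.03)]:
       up, down, whole = congestion_view(pos, neg)
       print(point, "downstream ~= {:.0%}".format(down))
   # Prints: L downstream ~= 3%, M downstream ~= 2%, N downstream ~= 0%
   <CODE ENDS>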
4.1.1.  Positive and Negative Flows

   In Section 1.2 we introduced the notion of IP packets having different values (negative, positive, cautious, cancelled and neutral).  Positive and cautious packets have a value of +1, negative packets -1, and cancelled and neutral packets have zero value.

   In the rest of this document we will loosely talk of positive or negative flows.  A negative flow is one where more negative bytes than positive bytes arrive at the receiver.  Likewise, a positive flow is one where more positive bytes arrive than negative bytes.  Either indicates that the wrong amount of positive bytes has been sent.

4.2.  Incentive Framework Overview

   Figure 2 sketches the incentive framework that we will describe piece by piece throughout this section.  We will do a first pass in overview, then return to each piece in detail.  We re-use the earlier example of how downstream congestion is derived by subtracting upstream congestion from path congestion (Figure 1) but depict multiple trust boundaries to turn it into an internetwork.  For clarity, only downstream congestion is shown (the difference between the two earlier plots).  The graph displays downstream path congestion seen in a typical flow as it traverses an example path from sender S to receiver R, across networks N1, N2 & N3.  Everyone is shown using re-ECN correctly, but we intend to show why everyone would /choose/ to use it correctly, and honestly.

   Three main types of self-interest can be identified:

   o  Users want to transmit data across the network as fast as possible, paying as little as possible for the privilege.  In this respect, there is no distinction between senders and receivers, but we must be wary of potential malice by one on the other;

   o  Network operators want to maximise revenues from the resources they invest in.  They compete amongst themselves for the custom of users.

   o  Attackers (whether users or networks) want to use any opportunity to subvert the new re-ECN system for their own gain or to damage the service of their victims, whether targeted or random.
590 policer dropper 591 | | 592 | | 593 S <-----N1----> <---N2---> <---N3--> R domain 594 | | 595 | | 596 Border Gateways 598 Figure 2: Incentive Framework 600 Source congestion control: We want to ensure that the sender will 601 throttle its rate as downstream congestion increases. Whatever 602 the agreed congestion response (whether TCP-compatible or some 603 enhanced QoS), to some extent it will always be against the 604 sender's interest to comply. 606 Ingress policing: But it is in all the network operators' interests 607 to encourage fair congestion response, so that their investments 608 are employed to satisfy the most valuable demand. The re-ECN 609 protocol ensures packets carry the necessary information about 610 their own expected downstream congestion so that N1 can deploy a 611 policer at its ingress to check that S1 is complying with whatever 612 congestion control it should be using (Section 4.4). If N1 is 613 extremely conservative it could police each flow, but it is likely 614 to just police the bulk amount of congestion each customer causes 615 without regard to flows, or if it is extremely liberal it need not 616 police congestion control at all. Whatever, it is always 617 preferable to police traffic at the very first ingress into an 618 internetwork, before non-compliant traffic can cause any damage. 620 Edge egress dropper: If the policer ensures the source has less 621 right to a high rate the higher it declares downstream congestion, 622 the source has a clear incentive to understate downstream 623 congestion. But, if flows of packets are understated when they 624 enter the internetwork, they will have become negative by the time 625 they leave. So, we introduce a dropper at the last network 626 egress, which drops packets in flows that persistently declare 627 negative downstream congestion (see Section 4.3 for details). 629 Inter-domain traffic policing: But next we must ask, if congestion 630 arises downstream (say in N3), what is the ingress network's 631 (N1's) incentive to police its customers' response? If N1 turns a 632 blind eye, its own customers benefit while other networks suffer. 633 This is why all inter-domain QoS architectures (e.g. Intserv, 634 Diffserv) police traffic each time it crosses a trust boundary. 635 We have already shown that re-ECN gives a trustworthy measure of 636 the expected downstream congestion that a flow will cause by 637 subtracting negative volume from positive at any intermediate 638 point on a path. N3 (say) can use this measure to police all the 639 responses to congestion of all the sources beyond its upstream 640 neighbour (N2), but in bulk with one very simple passive 641 mechanism, rather than per flow, as we will now explain. 643 Emulating policing with inter-domain congestion penalties: Between 644 high-speed networks, we would rather avoid per-flow policing, and 645 we would rather avoid holding back traffic while it is policed. 646 Instead, once re-ECN has arranged headers to carry downstream 647 congestion honestly, N2 can contract to pay N3 penalties in 648 proportion to a single bulk count of the congestion metrics 649 crossing their mutual trust boundary (Section 4.5). In this way, 650 N3 puts pressure on N2 to suppress downstream congestion, for 651 every flow passing through the border interface, even though they 652 will all start and end in different places, and even though they 653 may all be allowed different responses to congestion. 
The figure 654 depicts this downward pressure on N2 by the solid downward arrow 655 at the egress of N2. Then N2 has an incentive either to police 656 the congestion response of its own ingress traffic (from N1) or to 657 emulate policing by applying penalties to N1 in turn on the basis 658 of congestion counted at their mutual boundary. In this recursive 659 way, the incentives for each flow to respond correctly to 660 congestion trace back with each flow precisely to each source, 661 despite the mechanism not recognising flows (see Section 5.2). 663 Inter-domain congestion charging diversity: Any two networks are 664 free to agree any of a range of penalty regimes between themselves 665 but they would only provide the right incentives if they were 666 within the following reasonable constraints. N2 should expect to 667 have to pay penalties to N3 where penalties monotonically increase 668 with the volume of congestion and negative penalties are not 669 allowed. For instance, they may agree an SLA with tiered 670 congestion thresholds, where higher penalties apply the higher the 671 threshold that is broken. But the most obvious (and useful) form 672 of penalty is where N3 levies a charge on N2 proportional to the 673 volume of downstream congestion N2 dumps into N3. In the 674 explanation that follows, we assume this specific variant of 675 volume charging between networks - charging proportionate to the 676 volume of congestion. 678 We must make clear that we are not advocating that everyone should 679 use this form of contract. We are well aware that the IETF tries 680 to avoid standardising technology that depends on a particular 681 business model. And we strongly share this desire to encourage 682 diversity. But our aim is merely to show that border policing can 683 at least work with this one model, then we can assume that 684 operators might experiment with the metric in other models (see 685 Section 4.5 for examples). Of course, operators are free to 686 complement this usage element of their charges with traditional 687 capacity charging, and we expect they will as predicted by 688 economics. 690 No congestion charging to users: Bulk congestion penalties at trust 691 boundaries are passive and extremely simple, and lose none of 692 their per-packet precision from one boundary to the next (unlike 693 Diffserv all-address traffic conditioning agreements, which 694 dissipate their effectiveness across long topologies). But at any 695 trust boundary, there is no imperative to use congestion charging. 696 Traditional traffic policing can be used, if the complexity and 697 cost is preferred. In particular, at the boundary with end 698 customers (e.g. between S and N1), traffic policing will most 699 likely be more appropriate. Policer complexity is less of a 700 concern at the edge of the network. And end-customers are known 701 to be highly averse to the unpredictability of congestion 702 charging. 704 NOTE WELL: This document neither advocates nor requires congestion 705 charging for end customers and advocates but does not require 706 inter-domain congestion charging. 708 Competitive discipline of inter-domain traffic engineering: With 709 inter-domain congestion charging, a domain seems to have a 710 perverse incentive to fake congestion; N2's profit depends on the 711 difference between congestion at its ingress (its revenue) and at 712 its egress (its cost). So, overstating internal congestion seems 713 to increase profit. 
However, smart border routing [Smart_rtg] by N1 will bias its routing towards the least cost routes.  So, N2 risks losing all its revenue to competitive routes if it overstates congestion (see Section 5.3).  In other words, if N2 is the least congested route, its ability to raise excess profits is limited by the congestion on the next least congested route.

   Closing the loop:  All the above elements conspire to trap everyone between two opposing pressures, ensuring the downstream congestion metric arrives at the destination neither above nor below zero.  So, we have arrived back where we started in our argument.  The ingress edge network can rely on downstream congestion declared in the packet headers presented by the sender.  So it can police the sender's congestion response accordingly.

   Evolvability of congestion control:  We have seen that re-ECN enables policing at the very first ingress.  We have also seen that, as flows continue on their path through further networks downstream, re-ECN removes the need for further per-domain ingress policing of all the different congestion responses allowed to each different flow.  This is why the evolvability of re-ECN policing is so superior to bottleneck policing or to any policing of different QoS for different flows.  Even if all access networks choose to conservatively police congestion per flow, each will want to compete with the others to allow new responses to congestion for new types of application.  With re-ECN, each can introduce new controls independently, without coordinating with other networks and without having to standardise anything.  But, as we have just seen, by making inter-domain penalties proportionate to bulk downstream congestion, downstream networks can be agnostic to the specific congestion response for each flow, but they can still apply more penalty the more liberal the ingress access network has been in the response to congestion it allowed for each flow.

   We now take a second pass over the incentive framework, filling in the detail.

4.3.  Egress Dropper

   As traffic leaves the last network before the receiver (domain N3 in Figure 2), the fraction of positive octets in a flow should match the fraction of negative octets introduced by congestion marking (red packets), leaving a balance of zero.  If it is less (a negative flow), it implies that the source is understating path congestion (which will reduce the penalties that N2 owes N3).

   If flows are positive, N3 need take no action---this simply means its upstream neighbour is paying more penalties than it needs to, and the source is going slower than it needs to.  But, to protect itself against persistently negative flows, N3 will need to install a dropper at its egress.  Appendix A gives a suggested algorithm for this dropper.  There is no intention that the dropper algorithm needs to be standardised; it is merely provided to show that an efficient, robust algorithm is possible.  But whatever algorithm is used must meet the criteria below:

   o  It SHOULD introduce minimal false positives for honest flows;

   o  It SHOULD quickly detect and sanction dishonest flows (minimal false negatives);

   o  It SHOULD be invulnerable to state exhaustion attacks from malicious sources.  For instance, if the dropper uses flow-state, it should not be possible for a source to send numerous packets, each with a different flow ID, to force the dropper to exhaust its memory capacity (rationale for SHOULD: continuously sending keep-alive packets might be perfectly reasonable behaviour, so we can't distinguish a deliberate attack from reasonable levels of such behaviour; therefore it is strictly impossible to be invulnerable to such an attack);

   o  It MUST introduce sufficient loss in goodput so that malicious sources cannot play off losses in the egress dropper against higher allowed throughput.  Salvatori [CLoop_pol] describes this attack, which involves the source understating path congestion then inserting forward error correction (FEC) packets to compensate for expected losses;

   o  It MUST NOT be vulnerable to `identity whitewashing', where a transport can label a flow with a new ID more cheaply than paying the cost of continuing to use its current ID.

   Note that the dropper operates on flows but we would like it not to require per-flow state.  This is why we have been careful to ensure that all flows MUST start with a cautious packet.  If a flow does not start with a cautious packet, a dropper is likely to treat it unfavourably.  This risk makes it worth sending a cautious packet at the start of a flow, even though there is a cost to the sender of doing so (positive `worth').  Indeed, with cautious packets, the rate at which a sender can generate new flows can be limited (Appendix B).  In this respect, cautious packets work like Handley's state set-up bit [Steps_DoS].

   Appendix A also gives an example dropper implementation that aggregates flow state.  Dropper algorithms will often maintain a moving average across flows of the fraction of positive packets.  When maintaining an average across flows, a dropper SHOULD only allow flows into the average if they start with a cautious packet, but it SHOULD NOT include cautious packets in the positive packet average.  A sender sends cautious packets when it does not have the benefit of feedback from the receiver.  So, counting cautious packets would be likely to make the average unnecessarily positive, providing headroom (or should we say footroom?) for dishonest (negative) traffic.

   If the dropper detects a persistently negative flow, it SHOULD drop sufficient negative and neutral packets to force the flow to not be negative.  Drops SHOULD be focused on just sufficient packets in misbehaving flows to remove the negative bias while doing minimal extra harm.
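   The criteria above constrain but do not dictate a design.  Purely for illustration, and quite distinct from the algorithm suggested in Appendix A, a naive per-flow dropper might track a smoothed balance of positive minus negative bytes and focus drops on flows whose balance stays persistently negative.  The class name and constants below are invented for this sketch; a real dropper must also bound its flow state, which this sketch ignores.

   <CODE BEGINS>
   # Illustrative sketch only -- NOT the algorithm of Appendix A.
   # Naive egress check: sanction flows whose smoothed byte balance
   # (positive minus negative) stays persistently negative.

   class NaiveEgressDropperSketch:
       def __init__(self, negative_threshold=-8000, decay=0.99):
           self.balance = {}                    # flow_id -> smoothed balance
           self.negative_threshold = negative_threshold
           self.decay = decay                   # gradually forget history

       def on_packet(self, flow_id, length, worth):
           # worth: +1 for positive/cautious, -1 for negative,
           #         0 for neutral/cancelled (see Section 4.1.1)
           bal = self.balance.get(flow_id, 0) * self.decay + worth * length
           self.balance[flow_id] = bal
           if bal < self.negative_threshold and worth <= 0:
               return "DROP"        # focus drops on the misbehaving flow's
                                    # negative and neutral packets
           return "FORWARD"
   <CODE ENDS>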
4.4.  Ingress Policing

   Access operators who wish to limit the congestion that a sender is able to cause can deploy policers at the very first ingress to the internetwork.  Re-ECN has been designed to avoid the need for bottleneck policing so that we can avoid a future where a single rate adaptation policy is embedded throughout the network.  Instead, re-ECN allows the particular rate adaptation policy to be solely agreed bilaterally between the sender and its ingress access provider ([ref other document] discusses possible ways to signal between them), which allows congestion control to be policed but maintains its evolvability, requiring only a single, local box to be updated.

   Appendix B gives examples of per-user policing algorithms.
But there 837 is no implication that these algorithms are to be standardised, or 838 that they are ideal. The ingress rate policer is the part of the re- 839 ECN incentive framework that is intended to be the most flexible. 840 Once endpoint protocol handlers for re-ECN and egress droppers are in 841 place, operators can choose exactly which congestion response they 842 want to police, and whether they want to do it per user, per flow or 843 not at all. 845 The re-ECN protocol allows these ingress policers to easily perform 846 bulk per-user policing (Appendix B.1). This is likely to provide 847 sufficient incentive to the user to correctly respond to congestion 848 without needing the policing function to be overly complex. If an 849 access operator chose they could use per-flow policing according to 850 the widely adopted TCP rate adaptation ( Appendix B.2) or other 851 alternatives, however this would introduce extra complexity to the 852 system. 854 If a per-flow rate policer is used, it should use path (not 855 downstream) congestion as the relevant metric, which is represented 856 by the fraction of octets in packets with positive (positive and 857 cautious packets) and cancelled packets. Of course, re-ECN provides 858 all the information a policer needs directly in the packets being 859 policed. So, even policing TCP's AIMD algorithm is relatively 860 straightforward (Appendix B.2). 862 Note that we have included cancelled packets in the measure of path 863 congestion. cancelled packets arise when the sender sends a positive 864 packet in response to feedback, but then this positive packet just 865 happens to be congestion marked itself. One would not normally 866 expect many cancelled packets at the first ingress because one would 867 not normally expect much congestion marking to have been necessary 868 that soon in the path. However, a home network or campus network may 869 well sit between the sending endpoint and the ingress policer, so 870 some congestion may occur upstream of the policer. And if congestion 871 does occur upstream, some cancelled packets should be visible, and 872 should be taken into account in the measure of path congestion. 874 But a much more important reason for including cancelled packets in 875 the measure of path congestion at an ingress policer is that a sender 876 might otherwise subvert the protocol by sending cancelled packets 877 instead of neutral packets. Like neutral, cancelled packets are 878 worth zero, so the sender knows they won't be counted against any 879 quota it might have been allowed. But unlike neutral packets, 880 cancelled packets are immune to congestion marking, because they have 881 already been congestion marked. So, it is both correct and useful 882 that cancelled packets should be included in a policer's measure of 883 path congestion, as this removes the incentive the sender would 884 otherwise have to mark more packets as cancelled than it should. 886 An ingress policer should also ensure that flows are not already 887 negative when they enter the access network. As with cancelled 888 packets, the presence of negative packets will typically be unusual. 889 Therefore it will be easy to detect negative flows at the ingress by 890 just detecting negative packets then monitoring the flow they belong 891 to. 893 Of course, even if the sender does operate its own network, it may 894 arrange not to congestion mark traffic. Whether the sender does this 895 or not is of no concern to anyone else except the sender. 
Such a 896 sender will not be policed against its own network's contribution to 897 congestion, but the only resulting problem would be overload in the 898 sender's own network. 900 Finally, we must not forget that an easy way to circumvent re-ECN's 901 defences is for the source to turn off re-ECN support, by setting the 902 Not-RECT codepoint, implying RFC3168 compliant traffic. Therefore an 903 ingress policer should put a general rate-limit on Not-RECT traffic, 904 which SHOULD be lax during early, patchy deployment, but will have to 905 become stricter as deployment widens. Similarly, flows starting 906 without a cautious packet can be confined by a strict rate-limit used 907 for the remainder of flows that haven't proved they are well-behaved 908 by starting correctly (therefore they need not consume any flow 909 state---they are just confined to the `misbehaving' bin if they carry 910 an unrecognised flow ID). 912 4.5. Inter-domain Policing 914 One of the main design goals of re-ECN is for border security 915 mechanisms to be as simple as possible, otherwise they will become 916 the pinch-points that limit scalability of the whole internetwork. 917 We want to avoid per-flow processing at borders and to keep to 918 passive mechanisms that can monitor traffic in parallel to 919 forwarding, rather than having to filter traffic inline---in series 920 with forwarding. Such passive, off-line mechanisms are essential for 921 future high-speed all-optical border interconnection where packets 922 cannot be buffered while they are checked for policy compliance. 924 So far, we have been able to keep the border mechanisms simple, 925 despite having had to harden them against some subtle attacks on the 926 re-ECN design. The mechanisms are still passive and avoid per-flow 927 processing. 929 The basic accounting mechanism at each border interface simply 930 involves accumulating the volume of packets with positive worth 931 (positive and cautious packets), and subtracting the volume of those 932 with negative worth (red packets). Even though this mechanism takes 933 no regard of flows, over an accounting period (say a month) this 934 subtraction will account for the downstream congestion caused by all 935 the flows traversing the interface, wherever they come from, and 936 wherever they go to. The two networks can agree to use this metric 937 however they wish to determine some congestion-related penalty 938 against the upstream network. Although the algorithm could hardly be 939 simpler, it is spelled out using pseudo-code in Appendix C.1. 941 Various attempts to subvert the re-ECN design have been made. In all 942 cases their root cause is persistently negative flows. But, after 943 describing these attacks we will show that we don't actually have to 944 get rid of all persistently negative flows in order to thwart the 945 attacks. 947 In honest flows, downstream congestion is measured as positive minus 948 negative volume. So if all flows are honest (i.e. not persistently 949 negative), adding all positive volume and all negative volume without 950 regard to flows will give an aggregate measure of downstream 951 congestion. But such simple aggregation is only possible if no flows 952 are persistently negative. Unless persistently negative flows are 953 completely removed, they will reduce the aggregate measure of 954 congestion. The aggregate may still be positive overall, but not as 955 positive as it would have been had the negative flows been removed. 
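   As a concrete illustration of the bulk border accounting described at the start of this subsection, and of how persistently negative flows depress the aggregate, the following sketch accumulates the worth of packets crossing a border interface over an accounting period.  It is only an illustrative sketch with invented names; the suggested algorithm itself is spelled out in Appendix C.1, and the compensation for negative flows in Appendix C.2.

   <CODE BEGINS>
   # Illustrative sketch only -- the suggested border accounting
   # pseudo-code is in Appendix C.1.  Per-packet work is a single
   # addition, with no per-flow state, so it can run passively at a
   # border interface.

   class BorderCongestionMeterSketch:
       def __init__(self):
           self.downstream_congestion_volume = 0   # bytes, this period

       def on_packet(self, length, worth):
           # worth: +1 for positive/cautious packets, -1 for negative
           # (congestion-marked) packets, 0 for neutral/cancelled.
           self.downstream_congestion_volume += worth * length

       def end_of_period(self):
           # The accumulated volume approximates the downstream
           # congestion caused by all flows that crossed the interface.
           # Persistently negative flows subtract from it, which is why
           # Appendix C.2 compensates for them before penalties are
           # settled.
           total = self.downstream_congestion_volume
           self.downstream_congestion_volume = 0
           return total
   <CODE ENDS>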
957 In Section 4.3 we discussed how to sanction traffic to remove, or at 958 least to identify, persistently negative flows. But, even if the 959 sanction for negative traffic is to discard it, unless it is 960 discarded at the exact point it goes negative, it will wrongly 961 subtract from aggregate downstream congestion, at least at any 962 borders it crosses after it has gone negative but before it is 963 discarded. 965 We rely on sanctions to deter dishonest understatement of congestion. 966 But even the ultimate sanction of discard can only be effective if 967 the sender is bothered about the data getting through to its 968 destination. A number of attacks have been identified where a sender 969 gains from sending dummy traffic or it can attack someone or 970 something using dummy traffic even though it isn't communicating any 971 information to anyone: 973 o A host can send traffic with no positive packets towards its 974 intended destination, aiming to transmit as much traffic as any 975 dropper will allow [Bauer06]. It may add forward error correction 976 (FEC) to repair as much drop as it experiences. 978 o A host can send dummy traffic into the network with no positive 979 packets and with no intention of communicating with anyone, but 980 merely to cause higher levels of congestion for others who do want 981 to communicate (DoS). So, to ride over the extra congestion, 982 everyone else has to spend more of whatever rights to cause 983 congestion they have been allowed. 985 o A network can simply create its own dummy traffic to congest 986 another network, perhaps causing it to lose business at no cost to 987 the attacking network. This is a form of denial of service 988 perpetrated by one network on another. The preferential drop 989 measures in [ref other document] provide crude protection against 990 such attacks, but we are not overly worried about more accurate 991 prevention measures, because it is already possible for networks 992 to DoS other networks on the general Internet, but they generally 993 don't because of the grave consequences of being found out. We 994 are only concerned if re-ECN increases the motivation for such an 995 attack, as in the next example. 997 o A network can just generate negative traffic and send it over its 998 border with a neighbour to reduce the overall penalties that it 999 should pay to that neighbour. It could even initialise the TTL so 1000 it expired shortly after entering the neighbouring network, 1001 reducing the chance of detection further downstream. This attack 1002 need not be motivated by a desire to deny service and indeed need 1003 not cause denial of service. A network's main motivator would 1004 most likely be to reduce the penalties it pays to a neighbour. 1005 But, the prospect of financial gain might tempt the network into 1006 mounting a DoS attack on the other network as well, given the gain 1007 would offset some of the risk of being detected. 1009 The first step towards a solution to all these problems with negative 1010 flows is to be able to estimate the contribution they make to 1011 downstream congestion at a border and to correct the measure 1012 accordingly. Although ideally we want to remove negative flows 1013 themselves, perhaps surprisingly, the most effective first step is to 1014 cancel out the polluting effect negative flows have on the measure of 1015 downstream congestion at a border. It is more important to get an 1016 unbiased estimate of their effect, than to try to remove them all. 
A 1017 suggested algorithm to give an unbiased estimate of the contribution 1018 from negative flows to the downstream congestion measure is given in 1019 Appendix C.2. 1021 Although making an accurate assessment of the contribution from 1022 negative flows may not be easy, just the single step of neutralising 1023 their polluting effect on congestion metrics removes all the gains 1024 networks could otherwise make from mounting dummy traffic attacks on 1025 each other. This puts all networks on the same side (only with 1026 respect to negative flows of course), rather than being pitched 1027 against each other. The network where this flow goes negative as 1028 well as all the networks downstream lose out from not being 1029 reimbursed for any congestion this flow causes. So they all have an 1030 interest in getting rid of these negative flows. Networks forwarding 1031 a flow before it goes negative aren't strictly on the same side, but 1032 they are disinterested bystanders---they don't care that the flow 1033 goes negative downstream, but at least they can't actively gain from 1034 making it go negative. The problem becomes localised so that once a 1035 flow goes negative, all the networks from where it happens and beyond 1036 downstream each have a small problem, each can detect it has a 1037 problem and each can get rid of the problem if it chooses to. But 1038 negative flows can no longer be used for any new attacks. 1040 Once an unbiased estimate of the effect of negative flows can be 1041 made, the problem reduces to detecting and preferably removing flows 1042 that have gone negative as soon as possible. But importantly, 1043 complete eradication of negative flows is no longer critical---best 1044 endeavours will be sufficient. 1046 For instance, let us consider the case where a source sends traffic 1047 with no positive packets at all, hoping to at least get as much 1048 traffic delivered as network-based droppers will allow. The flow is 1049 likely to go at least slightly negative in the first network on the 1050 path (N1 if we use the example network layout in Figure 2). If all 1051 networks use the algorithm in Appendix C.2 to inflate penalties at 1052 their border with an upstream network, they will remove the effect of 1053 negative flows. So, for instance, N2 will not be paying a penalty to 1054 N1 for this flow. Further, because the flow contributes no positive 1055 packets at all, a dropper at the egress will completely remove it. 1057 The remaining problem is that every network is carrying a flow that 1058 is causing congestion to others but not being held to account for the 1059 congestion it is causing. Whenever the fail-safe border algorithm 1060 (Section 4.6) or the border algorithm to compensate for negative 1061 flows (Appendix C.2) detects a negative flow, it can instantiate a 1062 focused dropper for that flow locally. It may be some time before 1063 the flow is detected, but the more strongly negative the flow is, the 1064 more quickly it will be detected by the fail-safe algorithm. But, in 1065 the meantime, it will not be distorting border incentives. Until it 1066 is detected, if it contributes to drop anywhere, its packets will 1067 tend to be dropped before others if queues use the preferential drop 1068 rules in [ref other document], which discriminate against non- 1069 positive packets. 
All networks below the point where a flow goes 1070 negative (N1, N2 and N3 in this case) have an incentive to remove 1071 this flow, but the queue where it first goes negative (in N1) can of 1072 course remove the problem for everyone downstream. 1074 In the case of DDoS attacks, Section 5.1 describes how re-ECN 1075 mitigates their force. 1077 4.6. Inter-domain Fail-safes 1079 The mechanisms described so far create incentives for rational 1080 network operators to behave. That is, one operator aims to make 1081 another behave responsibly by applying penalties and expects a 1082 rational response (i.e. one that trades off costs against benefits). 1083 It is usually reasonable to assume that other network operators will 1084 behave rationally (policy routing can avoid those that might not). 1085 But this approach does not protect against the misconfigurations and 1086 accidents of other operators. 1088 Therefore, we propose the following two mechanisms at a network's 1089 borders to provide "defence in depth". Both are similar: 1091 Highly positive flows: A small sample of positive packets should be 1092 picked randomly as they cross a border interface. Then subsequent 1093 packets matching the same source and destination address and DSCP 1094 should be monitored. If the fraction of positive packets is well 1095 above a threshold (to be determined by operational practice), a 1096 management alarm SHOULD be raised, and the flow MAY be 1097 automatically subject to focused drop. 1099 Persistently negative flows: A small sample of congestion marked 1100 (red) packets should be picked randomly as they cross a border 1101 interface. Then subsequent packets matching the same source and 1102 destination address and DSCP should be monitored. If the balance 1103 of positive packets minus negative packets (measured in bytes) is 1104 persistently negative, a management alarm SHOULD be raised, and 1105 the flow MAY be automatically subject to focused drop. 1107 Both these mechanisms rely on the fact that highly positive (or 1108 negative) flows will appear more quickly in the sample by selecting 1109 randomly solely from positive (or negative) packets. 1111 4.7. The Case against Classic Feedback 1113 A system that produces an optimal outcome as a result of everyone's 1114 selfish actions is extremely powerful. Especially one that enables 1115 evolvability of congestion control. But why do we have to change to 1116 re-ECN to achieve it? Can't classic congestion feedback (as used 1117 already by standard ECN) be arranged to provide similar incentives 1118 and similar evolvability? Superficially it can. Kelly's seminal 1119 work showed how we can allow everyone the freedom to evolve whatever 1120 congestion control behaviour is in their application's best interest 1121 but still optimise the whole system of networks and users by placing 1122 a price on congestion to ensure responsible use of this 1123 freedom [Evol_cc]). Kelly used ECN with its classic congestion 1124 feedback model as the mechanism to convey congestion price 1125 information. The mechanism could be thought of as volume charging; 1126 except only the volume of packets marked with congestion experienced 1127 (CE) was counted. 1129 However, below we explain why relying on classic feedback /required/ 1130 congestion charging to be used, while re-ECN achieves the same 1131 powerful outcome (given it is built on Kelly's foundations), but does 1132 not /require/ congestion charging. 
In brief, the problem with 1133 classic feedback is that the incentives have to trace the indirect 1134 path back to the sender---the long way round the feedback loop. For 1135 example, if classic feedback were used in Figure 2, N2 would have had 1136 to influence N1 via all of N3, R & S rather than directly. 1138 Inability to agree what is happening downstream: In order to police 1139 its upstream neighbour's congestion response, the neighbours 1140 should be able to agree on the congestion to be responded to. 1141 Whatever the feedback regime, as packets change hands at each 1142 trust boundary, any path metrics they carry are verifiable by both 1143 neighbours. But, with a classic path metric, they can only agree 1144 on the /upstream/ path congestion. 1146 Inaccessible back-channel: The network needs a whole-path congestion 1147 metric if it wants to control the source. Classically, whole path 1148 congestion emerges at the destination, to be fed back from 1149 receiver to sender in a back-channel. But, in any data network, 1150 back-channels need not be visible to relays, as they are 1151 essentially communications between the end-points. They may be 1152 encrypted, asymmetrically routed or simply omitted, so no network 1153 element can reliably intercept them. The congestion charging 1154 literature solves this problem by charging the receiver and 1155 assuming this will cause the receiver to refer the charges to the 1156 sender. But, of course, this creates unintended side-effects... 1158 `Receiver pays' unacceptable: In connectionless datagram networks, 1159 receivers and receiving networks cannot prevent reception from 1160 malicious senders, so `receiver pays' opens them to `denial of 1161 funds' attacks. 1163 End-user congestion charging unacceptable in many societies: Even if 1164 'denial of funds' were not a problem, we know that end-users are 1165 highly averse to the unpredictability of congestion charging and 1166 anyway, we want to avoid restricting network operators to just one 1167 retail tariff. But with classic feedback only an upstream metric 1168 is available, so we cannot avoid having to wrap the `receiver 1169 pays' money flow around the feedback loop, necessarily forcing 1170 end-users to be subjected to congestion charging. 1172 To summarise so far, with classic feedback, policing congestion 1173 response without losing evolvability /requires/ congestion charging 1174 of end-users and a `receiver pays' model, whereas, with re-ECN, it is 1175 still possible to influence incentives using congestion charging but 1176 using the safer `sender pays' model. However, congestion charging is 1177 only likely to be appropriate between domains. So, without losing 1178 evolvability, re-ECN enables technical policing mechanisms that are 1179 more appropriate for end users than congestion pricing. 1181 4.8. Simulations 1183 Simulations of policer and dropper performance done for the multi-bit 1184 version of re-feedback have been included in section 5 "Dropper 1185 Performance" of [Re-fb]. Simulations of policer and dropper for the 1186 re-ECN version described in this document are work in progress. 1188 5. Other Applications of Re-ECN 1190 5.1. DDoS Mitigation 1192 A flooding attack is inherently about congestion of a resource. 1193 Because re-ECN ensures the sources causing network congestion 1194 experience the cost of their own actions, it acts as a first line of 1195 defence against DDoS. 
As load focuses on a victim, upstream queues 1196 grow, requiring honest sources to pre-load packets with a higher 1197 fraction of positive packets. Once downstream queues are so 1198 congested that they are dropping traffic, they will be marking to 1199 negative 100% of the traffic they do forward. Honest sources will 1200 therefore be sending 100% positive packets (and therefore being 1201 severely rate-limited at the ingress).
1203 Senders under malicious control can either do the same as honest 1204 sources, and be rate-limited at ingress, or they can understate 1205 congestion by sending more neutral RECT packets than they should. If 1206 sources understate congestion (i.e. do not re-echo sufficient 1207 positive packets) and the preferential drop ranking is implemented on 1208 queues ([ref other document]), these queues will preserve positive 1209 traffic until last. So, the neutral traffic from malicious sources 1210 will all be automatically dropped first. Either way, the malicious 1211 sources cannot send more than honest sources.
1213 Further, hosts under malicious control will tend to be re-used for 1214 many different attacks. They will therefore build up a long term 1215 history of causing congestion. Therefore, as long as the population 1216 of potentially compromisable hosts around the Internet is limited, 1217 the per-user policing algorithms in Appendix B.1 will gradually 1218 throttle down zombies and other launchpads for attacks. Thus, 1219 widespread deployment of re-ECN could considerably dampen the force 1220 of DDoS. Certainly, zombie armies could hold their fire for long 1221 enough to be able to build up enough credit in the per-user policers 1222 to launch an attack. But they would then still be limited to no more 1223 throughput than other, honest users.
1225 Inter-domain traffic policing (see Section 4.5) ensures that any 1226 network that harbours compromised `zombie' hosts will have to bear 1227 the cost of the congestion caused by traffic from zombies in 1228 downstream networks. Such networks will be incentivised to deploy 1229 per-user policers that rate-limit hosts that are unresponsive to 1230 congestion so they can only send very slowly into congested paths. 1231 As well as protecting other networks, the extremely poor performance 1232 at any sign of congestion will incentivise the zombie's owner to 1233 clean it up. However, the host should behave normally when using 1234 uncongested paths.
1236 Uniquely, re-ECN handles DDoS traffic without relying on the validity 1237 of identifiers in packets. Certainly the egress dropper relies on 1238 uniqueness of flow identifiers, but not their validity. So if a 1239 source spoofs another address, re-ECN works just as well, as long as 1240 the attacker cannot imitate all the flow identifiers of another 1241 active flow passing through the same dropper (see Section 6). 1242 Similarly, the ingress policer relies on uniqueness of flow IDs, not 1243 their validity, because a new flow will only be allowed any rate at 1244 all if it starts with a cautious packet, and the more cautious 1245 packets there are starting new flows, the more they will be limited. 1246 Essentially a re-ECN policer limits the bulk of all congestion 1247 entering the network through a physical interface; limiting the 1248 congestion caused by each flow is merely an optional extra.
1250 5.2. End-to-end QoS
1252 {ToDo: (Section 3.3.2 of [Re-fb] entitled `Edge QoS' gives an outline 1253 of the text that will be added here).}
1255 5.3. Traffic Engineering
1257 {ToDo: }
1259 5.4. Inter-Provider Service Monitoring
1261 {ToDo: }
1263 6. Limitations
1265 The known limitations of the re-ECN approach are:
1267 o We still cannot defend against the attack described in Section 10 1268 where a malicious source sends negative traffic through the same 1269 egress dropper as another flow and imitates its flow identifiers, 1270 allowing it to cause an innocent flow to 1271 experience heavy drop.
1273 o Re-feedback for TTL (re-TTL) would also be desirable at the same 1274 time as re-ECN. Unfortunately this requires a further standards 1275 action for the mechanisms briefly described in Appendix D.
1277 o Traffic must be ECN-capable for re-ECN to be effective. The only 1278 defence against malicious users who turn off ECN capability is that 1279 networks are expected to rate limit Not-ECT traffic and to apply 1280 higher drop preference to it during congestion. Although these 1281 are blunt instruments, they at least represent a feasible scenario 1282 for the future Internet where Not-ECT traffic co-exists with re- 1283 ECN traffic, but as a severely hobbled under-class. We recommend 1284 (Section 7.1) that while accommodating a smooth initial transition 1285 to re-ECN, policing policies should gradually be tightened to rate 1286 limit Not-ECT traffic more strictly in the longer term.
1288 o When checking whether a flow is balancing positive packets with 1289 negative packets (measured in bytes), re-ECN can only account for 1290 congestion marking, not drops. So, whenever a sender experiences 1291 drop, it does not have to re-echo the congestion event by sending 1292 positive packet(s). Nonetheless, it is hardly any advantage to be 1293 able to send faster than other flows only if your traffic is 1294 dropped and the other traffic isn't.
1296 o We are considering the issue of whether it would be useful to 1297 truncate rather than drop packets that appear to be malicious, so 1298 that the feedback loop is not broken but useful data can be 1299 removed.
1301 7. Incremental Deployment
1303 7.1. Incremental Deployment Features
1305 The design of the re-ECN protocol started from the fact that the 1306 current ECN marking behaviour of queues was sufficient and that re- 1307 feedback could be introduced around these queues by changing the 1308 sender behaviour but not the routers. Otherwise, if we had required 1309 routers to be changed, the chance of encountering a path that had 1310 every router upgraded would be vanishingly small during early 1311 deployment, giving no incentive to start deployment. Also, as there 1312 is no new forwarding behaviour, routers and hosts do not have to 1313 signal or negotiate anything.
1315 However, networks that choose to protect themselves using re-ECN do 1316 have to add new security functions at their trust boundaries with 1317 others. They distinguish legacy traffic by its ECN field. Traffic 1318 from Not-ECT transports is distinguishable by its Not-ECT marking. 1319 Traffic from RFC3168 compliant ECN transports is distinguished from 1320 re-ECN by which of ECT(0) or ECT(1) is used. We chose to use ECT(1) 1321 for re-ECN traffic deliberately. Existing ECN sources set ECT(0) on 1322 either 50% (the nonce) or 100% (the default) of packets, whereas re- 1323 ECN does not use ECT(0) at all.
We can use this distinguishing 1324 feature of RFC3168 compliant ECN traffic to separate it out for 1325 different treatment at the various border security functions: egress 1326 dropping, ingress policing and border policing.
1328 The general principle we adopt is that an egress dropper will not 1329 drop any legacy traffic, but ingress and border policers will limit 1330 the bulk rate of legacy traffic (Not-ECT, ECT(0) and those marked 1331 with the unused codepoint as defined in [Re-TCP]) that can enter each 1332 network. Then, during early re-ECN deployment, operators can set 1333 very permissive (or non-existent) rate-limits on legacy traffic, but 1334 once re-ECN implementations are generally available, legacy traffic 1335 can be rate-limited increasingly harshly. Ultimately, an operator 1336 might choose to block all legacy traffic entering its network, or at 1337 least only allow through a trickle.
1339 Then, the more strictly the limits are set, the more RFC3168 ECN 1340 sources will gain by upgrading to re-ECN. Thus, towards the end of 1341 the voluntary incremental deployment period, RFC3168 compliant 1342 transports can be given progressively stronger encouragement to 1343 upgrade.
1345 7.2. Incremental Deployment Incentives
1347 It would only be worth standardising the re-ECN protocol if there 1348 existed a coherent story for how it might be incrementally deployed. 1349 In order for it to have a chance of deployment, everyone who needs to 1350 act must have a strong incentive to act, and the incentives must 1351 arise in the order that deployment would have to happen. Re-ECN 1352 works around unmodified ECN routers, but we can't just discuss why 1353 and how re-ECN deployment might build on ECN deployment, because 1354 there is precious little to build on in the first place. Instead, we 1355 aim to show that re-ECN deployment could carry ECN with it. We focus 1356 on commercial deployment incentives, although some of the arguments 1357 apply equally to academic or government sectors.
1359 ECN deployment:
1361 ECN is largely implemented in commercial routers, but generally 1362 not as a supported feature, and it has largely not been deployed 1363 by commercial network operators. ECN has been implemented in most 1364 Unix-based operating systems for some time. Microsoft first 1365 implemented ECN in Windows Vista, but it is only on by default for 1366 the server end of a TCP connection. Unfortunately the client end 1367 had to be turned off by default, because a non-zero ECN field 1368 triggers a bug in a legacy home gateway which makes it crash. For 1369 detailed deployment status, see [ECN-Deploy]. We believe the 1370 reason ECN deployment has not happened is twofold:
1372 * ECN requires changes to both routers and hosts. If someone 1373 wanted to sell the improvement that ECN offers, they would have 1374 to co-ordinate deployment of their product with others. An ECN 1375 server only gives any improvement on an ECN network. An ECN 1376 network only gives any improvement if used by ECN devices. 1377 Deployment that requires co-ordination adds cost and delay and 1378 tends to dilute any competitive advantage that might be gained.
1380 * ECN `only' gives a performance improvement. Making a product a 1381 bit faster (whether the product is a device or a network) 1382 isn't usually a sufficient selling point to be worth the cost 1383 of co-ordinating across the industry to deploy it.
Network 1384 operators tend to avoid re-configuring a working network unless 1385 launching a new product. 1387 ECN and Re-ECN for Edge-to-edge Assured QoS: 1389 We believe the proposal to provide assured QoS sessions using a 1390 form of ECN called pre-congestion notification (PCN) [RFC5559] is 1391 most likely to break the deadlock in ECN deployment first. It 1392 only requires edge-to-edge deployment so it does not require 1393 endpoint support. It can be deployed in a single network, then 1394 grow incrementally to interconnected networks. And it provides a 1395 different `product' (internetworked assured QoS), rather than 1396 merely making an existing product a bit faster. 1398 Not only could this assured QoS application kick-start ECN 1399 deployment, it could also carry re-ECN deployment with it; because 1400 re-ECN can enable the assured QoS region to expand to a large 1401 internetwork where neighbouring networks do not trust each other. 1402 [Re-PCN] argues that re-ECN security should be built in to the QoS 1403 system from the start, explaining why and how. 1405 If ECN and re-ECN were deployed edge-to-edge for assured QoS, 1406 operators would gain valuable experience. They would also clear 1407 away many technical obstacles such as firewall configurations that 1408 block all but the RFC3168 settings of the ECN field and the RE 1409 flag. 1411 ECN in Access Networks: 1413 The next obstacle to ECN deployment would be extension to access 1414 and backhaul networks, where considerable link layer differences 1415 makes implementation non-trivial, particularly on congested 1416 wireless links. ECN and re-ECN work fine during partial 1417 deployment, but they will not be very useful if the most congested 1418 elements in networks are the last to support them. Access network 1419 support is one of the weakest parts of this deployment story. All 1420 we can hope is that, once the benefits of ECN are better 1421 understood by operators, they will push for the necessary link 1422 layer implementations as deployment proceeds. 1424 Policing Unresponsive Flows: 1426 Re-ECN allows a network to offer differentiated quality of service 1427 as explained in Section 5.2. But we do not believe this will 1428 motivate initial deployment of re-ECN, because the industry is 1429 already set on alternative ways of doing QoS. Despite being much 1430 more complicated and expensive, the alternative approaches are 1431 here and now. 1433 But re-ECN is critical to QoS deployment in another respect. It 1434 can be used to prevent applications from taking whatever bandwidth 1435 they choose without asking. 1437 Currently, applications that remain resolute in their lack of 1438 response to congestion are rewarded by other TCP applications. In 1439 other words, TCP is naively friendly, in that it reduces its rate 1440 in response to congestion whether it is competing with friends 1441 (other TCPs) or with enemies (unresponsive applications). 1443 Therefore, those network owners that want to sell QoS will be keen 1444 to ensure that their users can't help themselves to QoS for free. 1445 Given the very large revenues at stake, we believe effective 1446 policing of congestion response will become highly sought after by 1447 network owners. 1449 But this does not necessarily argue for re-ECN deployment. 1450 Network owners might choose to deploy bottleneck policers rather 1451 than re-ECN-based policing. 
However, under Related Work 1452 (Section 9) we argue that bottleneck policers are inherently 1453 vulnerable to circumvention.
1455 Therefore we believe there will be a strong demand from network 1456 owners for re-ECN deployment so they can police flows that have not 1457 asked to be unresponsive to congestion, in order to protect their 1458 revenues from flows that do ask (QoS). In particular, we suspect 1459 that the operators of cellular networks will want to prevent VoIP 1460 and video applications being used freely on their networks as a 1461 more open market develops in GPRS and 3G devices.
1463 Initial deployments are likely to be isolated to single cellular 1464 networks. Cellular operators would first place requirements on 1465 device manufacturers to include re-ECN in the standards for mobile 1466 devices. In parallel, they would put out tenders for ingress and 1467 egress policers. Then, after a while they would start to tighten 1468 rate limits on Not-ECT traffic from non-standard devices and they 1469 would start policing whatever non-accredited applications people 1470 might install on mobile devices with re-ECN support in the 1471 operating system. This would force even independent mobile device 1472 manufacturers to provide re-ECN support. Early standardisation 1473 across the cellular operators is likely, including interconnection 1474 agreements with penalties for excess downstream congestion.
1476 We suspect some fixed broadband networks (whether cable or DSL) 1477 would follow a similar path. However, we also believe that larger 1478 parts of the fixed Internet would not choose to police on a per- 1479 flow basis. Some might choose to police congestion on a per-user 1480 basis in order to manage heavy peer-to-peer file-sharing, but it 1481 seems likely that a sizeable majority would not deploy any form of 1482 policing.
1484 This hybrid situation raises the question, "How does re-ECN work for 1485 networks that choose to use policing if they connect with others 1486 that don't?" Traffic from non-ECN capable sources will arrive 1487 from other networks and cause congestion within the policed, ECN- 1488 capable networks. So networks that chose to police congestion 1489 would rate-limit Not-ECT traffic throughout their network, 1490 particularly at their borders. They would probably also set 1491 higher usage prices in their interconnection contracts for 1492 incoming Not-ECT and Not-RECT traffic. We assume that 1493 interconnection contracts between networks in the same tier will 1494 include congestion penalties before contracts with provider 1495 backbones do.
1497 A hybrid situation could remain for all time. As was explained in 1498 the introduction, we believe in healthy competition between 1499 policing and not policing, with no imperative to convert the whole 1500 world to the religion of policing. Networks that chose not to 1501 deploy egress droppers would leave themselves open to being 1502 congested by senders in other networks. But that would be their 1503 choice.
1505 The important aspect of the egress dropper, though, is that it 1506 primarily protects the network that deploys it. If a network does not 1507 deploy an egress dropper, sources sending into it from other 1508 networks will be able to understate the congestion they are 1509 causing. Whereas, if a network deploys an egress dropper, it can 1510 know how much congestion other networks are dumping into it, and 1511 apply penalties or charges accordingly.
So, whether or not a 1512 network polices its own sources at ingress, it is in its interests 1513 to deploy an egress dropper. 1515 Host support: 1517 In the above deployment scenario, host operating system support 1518 for re-ECN came about through the cellular operators demanding it 1519 in device standards (i.e. 3GPP). Of course, increasingly, mobile 1520 devices are being built to support multiple wireless technologies. 1521 So, if re-ECN were stipulated for cellular devices, it would 1522 automatically appear in those devices connected to the wireless 1523 fringes of fixed networks if they coupled cellular with WiFi or 1524 Bluetooth technology, for instance. Also, once implemented in the 1525 operating system of one mobile device, it would tend to be found 1526 in other devices using the same family of operating system. 1528 Therefore, whether or not a fixed network deployed ECN, or 1529 deployed re-ECN policers and droppers, many of its hosts might 1530 well be using re-ECN over it. Indeed, they would be at an 1531 advantage when communicating with hosts across re-ECN policed 1532 networks that rate limited Not-RECT traffic. 1534 Other possible scenarios: 1536 The above is thankfully not the only plausible scenario we can 1537 think of. One of the many clubs of operators that meet regularly 1538 around the world might decide to act together to persuade a major 1539 operating system manufacturer to implement re-ECN. And they may 1540 agree between them on an interconnection model that includes 1541 congestion penalties. 1543 Re-ECN provides an interesting opportunity for device 1544 manufacturers as well as network operators. Policers can be 1545 configured loosely when first deployed. Then as re-ECN take-up 1546 increases, they can be tightened up, so that a network with re-ECN 1547 deployed can gradually squeeze down the service provided to 1548 RFC3168 compliant devices that have not upgraded to re-ECN. Many 1549 device vendors rely on replacement sales. And operating system 1550 companies rely heavily on new release sales. Also support 1551 services would like to be able to force stragglers to upgrade. 1552 So, the ability to throttle service to RFC3168 compliant operating 1553 systems is quite valuable. 1555 Also, policing unresponsive sources may not be the only or even 1556 the first application that drives deployment. It may be policing 1557 causes of heavy congestion (e.g. peer-to-peer file-sharing). Or 1558 it may be mitigation of denial of service. Or we may be wrong in 1559 thinking simpler QoS will not be the initial motivation for re-ECN 1560 deployment. Indeed, the combined pressure for all these may be 1561 the motivator, but it seems optimistic to expect such a level of 1562 joined-up thinking from today's communications industry. We 1563 believe a single application alone must be a sufficient motivator. 1565 In short, everyone gains from adding accountability to TCP/IP, 1566 except the selfish or malicious. So, deployment incentives tend 1567 to be strong. 1569 8. Architectural Rationale 1571 In the Internet's technical community, the danger of not responding 1572 to congestion is well-understood, as well as its attendant risk of 1573 congestion collapse [RFC3714]. However, one side of the Internet's 1574 commercial community considers that the very essence of IP is to 1575 provide open access to the internetwork for all applications. 
They 1576 see congestion as a symptom of over-conservative investment, and rely 1577 on revising application designs to find novel ways to keep 1578 applications working despite congestion. They argue that the 1579 Internet was never intended to be solely for TCP-friendly 1580 applications. Meanwhile, another side of the Internet's commercial 1581 community believes that it is worthwhile providing a network for 1582 novel applications only if it has sufficient capacity, which can 1583 happen only if a greater share of application revenues can be 1584 /assured/ for the infrastructure provider. Otherwise the major 1585 investments required would carry too much risk and wouldn't happen. 1587 The lesson articulated in [Tussle] is that we shouldn't embed our 1588 view on these arguments into the Internet at design time. Instead we 1589 should design the Internet so that the outcome of these arguments can 1590 get decided at run-time. Re-ECN is designed in that spirit. Once 1591 the protocol is available, different network operators can choose how 1592 liberal they want to be in holding people accountable for the 1593 congestion they cause. Some might boldly invest in capacity and not 1594 police its use at all, hoping that novel applications will result. 1595 Others might use re-ECN for fine-grained flow policing, expecting to 1596 make money selling vertically integrated services. Yet others might 1597 sit somewhere half-way, perhaps doing coarse, per-user policing. All 1598 might change their minds later. But re-ECN always allows them to 1599 interconnect so that the careful ones can protect themselves from the 1600 liberal ones. 1602 The incentive-based approach used for re-ECN is based on Gibbens and 1603 Kelly's arguments [Evol_cc] on allowing endpoints the freedom to 1604 evolve new congestion control algorithms for new applications. They 1605 ensured responsible behaviour despite everyone's self-interest by 1606 applying pricing to ECN marking, and Kelly had proved stability and 1607 optimality in an earlier paper. 1609 Re-ECN keeps all the underlying economic incentives, but rearranges 1610 the feedback. The idea is to allow a network operator (if it 1611 chooses) to deploy engineering mechanisms like policers at the front 1612 of the network which can be designed to behave /as if/ they are 1613 responding to congestion prices. Rather than having to subject users 1614 to congestion pricing, networks can then use more traditional 1615 charging regimes (or novel ones). But the engineering can constrain 1616 the overall amount of congestion a user can cause. This provides a 1617 buffer against completely outrageous congestion control, but still 1618 makes it easy for novel applications to evolve if they need different 1619 congestion control to the norms. It also allows novel charging 1620 regimes to evolve. 1622 Despite being achieved with a relatively minor protocol change, re- 1623 ECN is an architectural change. Previously, Internet congestion 1624 could only be controlled by the data sender, because it was the only 1625 one both in a position to control the load and in a position to see 1626 information on congestion. Re-ECN levels the playing field. It 1627 recognises that the network also has a role to play in moderating 1628 (policing) congestion control. But policing is only truly effective 1629 at the first ingress into an internetwork, whereas path congestion 1630 was previously only visible at the last egress. So, re-ECN 1631 democratises congestion information. 
Then the choice over who 1632 actually controls congestion can be made at run-time, not design 1633 time---a bit like an aircraft with dual controls. And different 1634 operators can make different choices. We believe non-architectural 1635 approaches to this problem are unlikely to offer more than partial 1636 solutions (see Section 9). 1638 Importantly, re-ECN does not require assumptions about specific 1639 congestion responses to be embedded in any network elements, except 1640 at the first ingress to the internetwork if that level of control is 1641 desired by the ingress operator. But such tight policing will be a 1642 matter of agreement between the source and its access network 1643 operator. The ingress operator need not police congestion response 1644 at flow granularity; it can simply hold a source responsible for the 1645 aggregate congestion it causes, perhaps keeping it within a monthly 1646 congestion quota. Or if the ingress network trusts the source, it 1647 can do nothing. 1649 Therefore, the aim of the re-ECN protocol is NOT solely to police 1650 TCP-friendliness. Re-ECN preserves IP as a generic network layer for 1651 all sorts of responses to congestion, for all sorts of transports. 1652 Re-ECN merely ensures truthful downstream congestion information is 1653 available in the network layer for all sorts of accountability 1654 applications. 1656 The end to end design principle does not say that all functions 1657 should be moved out of the lower layers---only those functions that 1658 are not generic to all higher layers. Re-ECN adds a function to the 1659 network layer that is generic, but was omitted: accountability for 1660 causing congestion. Accountability is not something that an end-user 1661 can provide to themselves. We believe re-ECN adds no more than is 1662 sufficient to hold each flow accountable, even if it consists of a 1663 single datagram. 1665 "Accountability" implies being able to identify who is responsible 1666 for causing congestion. However, at the network layer it would NOT 1667 be useful to identify the cause of congestion by adding individual or 1668 organisational identity information, NOR by using source IP 1669 addresses. Rather than bringing identity information to the point of 1670 congestion, we bring downstream congestion information to the point 1671 where the cause can be most easily identified and dealt with. That 1672 is, at any trust boundary congestion can be associated with the 1673 physically connected upstream neighbour that is directly responsible 1674 for causing it (whether intentionally or not). A trust boundary 1675 interface is exactly the place to police or throttle in order to 1676 directly mitigate congestion, rather than having to trace the 1677 (ir)responsible party in order to shut them down. 1679 Some considered that ECN itself was a layering violation. The 1680 reasoning went that the interface to a layer should provide a service 1681 to the higher layer and hide how the lower layer does it. However, 1682 ECN reveals the state of the network layer and below to the transport 1683 layer. A more positive way to describe ECN is that it is like the 1684 return value of a function call to the network layer. It explicitly 1685 returns the status of the request to deliver a packet, by returning a 1686 value representing the current risk that a packet will not be served. 
1687 Re-ECN has similar semantics, except the transport layer must try to 1688 guess the return value, then it can use the actual return value from 1689 the network layer to modify the next guess. 1691 The guiding principle behind all the discussion in Section 4.5 on 1692 Policing is that any gain from subverting the protocol should be 1693 precisely neutralised, rather than punished. If a gain is punished 1694 to a greater extent than is sufficient to neutralise it, it will most 1695 likely open up a new vulnerability, where the amplifying effect of 1696 the punishment mechanism can be turned on others. 1698 For instance, if possible, flows should be removed as soon as they go 1699 negative, but we do NOT RECOMMEND any attempts to discard such flows 1700 further upstream while they are still positive. Such over-zealous 1701 push-back is unnecessary and potentially dangerous. These flows have 1702 paid their `fare' up to the point they go negative, so there is no 1703 harm in delivering them that far. If someone downstream asks for a 1704 flow to be dropped as near to the source as possible, because they 1705 say it is going to become negative later, an upstream node cannot 1706 test the truth of this assertion. Rather than have to authenticate 1707 such messages, re-ECN has been designed so that flows can be dropped 1708 solely based on locally measurable evidence. A message hinting that 1709 a flow should be watched closely to test for negativity is fine. But 1710 not a message that claims that a positive flow will go negative 1711 later, so it should be dropped. . 1713 9. Related Work 1715 {Due to lack of time, this section is incomplete. The reader is 1716 referred to the Related Work section of [Re-fb] for a brief selection 1717 of related ideas.} 1719 9.1. Policing Rate Response to Congestion 1721 ATM network elements send congestion back-pressure 1722 messages [ITU-T.I.371] along each connection, duplicating any end to 1723 end feedback because they don't trust it. On the other hand, re-ECN 1724 ensures information in forwarded packets can be used for congestion 1725 management without requiring a connection-oriented architecture and 1726 re-using the overhead of fields that are already set aside for end to 1727 end congestion control (and routing loop detection in the case of re- 1728 TTL in Appendix D). 1730 We borrowed ideas from policers in the literature [pBox],[XCHOKe], 1731 AFD etc. for our rate equation policer. However, without the benefit 1732 of re-ECN they don't police the correct rate for the condition of 1733 their path. They detect unusually high /absolute/ rates, but only 1734 while the policer itself is congested, because they work by detecting 1735 prevalent flows in the discards from the local RED queue. These 1736 policers must sit at every potential bottleneck, whereas our policer 1737 need only be located at each ingress to the internetwork. As Floyd & 1738 Fall explain [pBox], the limitation of their approach is that a high 1739 sending rate might be perfectly legitimate, if the rest of the path 1740 is uncongested or the round trip time is short. Commercially 1741 available rate policers cap the rate of any one flow. Or they 1742 enforce monthly volume caps in an attempt to control high volume 1743 file-sharing. They limit the value a customer derives. They might 1744 also limit the congestion customers can cause, but only as an 1745 accidental side-effect. 
They actually punish traffic that fills 1746 troughs as much as traffic that causes peaks in utilisation. In 1747 practice network operators need to be able to allocate service by 1748 cost during congestion, and by value at other times. 1750 9.2. Congestion Notification Integrity 1752 The choice of two ECT code-points in the ECN field [RFC3168] 1753 permitted future flexibility, optionally allowing the sender to 1754 encode the experimental ECN nonce [RFC3540] in the packet stream. 1755 This mechanism has since been included in the specifications of DCCP 1756 [RFC4340]. 1758 The ECN nonce is an elegant scheme that allows the sender to detect 1759 if someone in the feedback loop - the receiver especially - tries to 1760 claim no congestion was experienced when in fact congestion led to 1761 packet drops or ECN marks. For each packet it sends, the sender 1762 chooses between the two ECT codepoints in a pseudo-random sequence. 1763 Then, whenever the network marks a packet with CE, if the receiver 1764 wants to deny congestion happened, she has to guess which ECT 1765 codepoint was overwritten. She has only a 50:50 chance of being 1766 correct each time she denies a congestion mark or a drop, which 1767 ultimately will give her away. 1769 The purpose of a network-layer nonce should primarily be protection 1770 of the network, while a transport-layer nonce would be better used to 1771 protect the sender from cheating receivers. Now, the assumption 1772 behind the ECN nonce is that a sender will want to detect whether a 1773 receiver is suppressing congestion feedback. This is only true if 1774 the sender's interests are aligned with the network's, or with the 1775 community of users as a whole. This may be true for certain large 1776 senders, who are under close scrutiny and have a reputation to 1777 maintain. But we have to deal with a more hostile world, where 1778 traffic may be dominated by peer-to-peer transfers, rather than 1779 downloads from a few popular sites. Often the `natural' self- 1780 interest of a sender is not aligned with the interests of other 1781 users. It often wishes to transfer data quickly to the receiver as 1782 much as the receiver wants the data quickly. 1784 In contrast, the re-ECN protocol enables policing of an agreed rate- 1785 response to congestion (e.g. TCP-friendliness) at the sender's 1786 interface with the internetwork. It also ensures downstream networks 1787 can police their upstream neighbours, to encourage them to police 1788 their users in turn. But most importantly, it requires the sender to 1789 declare path congestion to the network and it can remove traffic at 1790 the egress if this declaration is dishonest. So it can police 1791 correctly, irrespective of whether the receiver tries to suppress 1792 congestion feedback or whether the sender ignores genuine congestion 1793 feedback. Therefore the re-ECN protocol addresses a much wider range 1794 of cheating problems, which includes the one addressed by the ECN 1795 nonce. 1797 9.3. Identifying Upstream and Downstream Congestion 1799 Purple [Purple] proposes that queues should use the CWR flag in the 1800 TCP header of ECN-capable flows to work out path congestion and 1801 therefore downstream congestion in a similar way to re-ECN. However, 1802 because CWR is in the transport layer, it is not always visible to 1803 network layer routers and policers. Purple's motivation was to 1804 improve AQM, not policing. 
But, of course, nodes trying to avoid a 1805 policer would not be expected to allow CWR to be visible. 1807 10. Security Considerations 1809 Nearly the whole of this document concerns security. 1811 11. IANA Considerations 1813 This memo includes no request to IANA. 1815 12. Conclusions 1817 {ToDo:} 1819 13. Acknowledgements 1821 Sebastien Cazalet and Andrea Soppera contributed to the idea of re- 1822 feedback. All the following have given helpful comments: Andrea 1823 Soppera, David Songhurst, Peter Hovell, Louise Burness, Phil Eardley, 1824 Steve Rudkin, Marc Wennink, Fabrice Saffre, Cefn Hoile, Steve Wright, 1825 John Davey, Martin Koyabe, Carla Di Cairano-Gilfedder, Alexandru 1826 Murgu, Nigel Geffen, Pete Willis, John Adams (BT), Sally Floyd 1827 (ICIR), Joe Babiarz, Kwok Ho-Chan (Nortel), Stephen Hailes, Mark 1828 Handley (who developed the attack with cancelled packets), Adam 1829 Greenhalgh (who developed the attack on DNS) (UCL), Jon Crowcroft 1830 (Uni Cam), David Clark, Bill Lehr, Sharon Gillett, Steve Bauer (who 1831 complemented our own dummy traffic attacks with others), Liz Maida 1832 (MIT), and comments from participants in the CRN/CFP Broadband and 1833 DoS-resistant Internet working groups.A special thank you to 1834 Alessandro Salvatori for coming up with fiendish attacks on re-ECN. 1836 14. Comments Solicited 1838 Comments and questions are encouraged and very welcome. They can be 1839 addressed to the IETF Transport Area working group's mailing list 1840 , and/or to the authors. 1842 15. References 1844 15.1. Normative References 1846 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1847 Requirement Levels", BCP 14, RFC 2119, March 1997. 1849 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The 1850 Addition of Explicit Congestion Notification (ECN) 1851 to IP", RFC 3168, September 2001. 1853 15.2. Informative References 1855 [Bauer06] Bauer, S., Faratin, P., and R. Beverly, "Assessing 1856 the assumptions underlying mechanism design for the 1857 Internet", Proc. Workshop on the Economics of 1858 Networked Systems (NetEcon06) , June 2006, . 1862 [CLoop_pol] Salvatori, A., "Closed Loop Traffic Policing", 1863 Politecnico Torino and Institut Eurecom Masters 1864 Thesis , September 2005. 1866 [ECN-Deploy] Floyd, S., "ECN (Explicit Congestion Notification) 1867 in TCP/IP; Implementation and Deployment of ECN", 1868 Web-page , May 2004, . 1871 [Evol_cc] Gibbens, R. and F. Kelly, "Resource pricing and the 1872 evolution of congestion control", 1873 Automatica 35(12)1969--1985, December 1999, 1874 . 1876 [ITU-T.I.371] ITU-T, "Traffic Control and Congestion Control in 1877 B-ISDN", ITU-T Rec. I.371 (03/04), March 2004. 1879 [Jiang02] Jiang, H. and D. Dovrolis, "The Macroscopic 1880 Behavior of the TCP Congestion Avoidance 1881 Algorithm", ACM SIGCOMM CCR 32(3)75-88, July 2002, 1882 . 1884 [Mathis97] Mathis, M., Semke, J., Mahdavi, J., and T. Ott, 1885 "The Macroscopic Behavior of the TCP Congestion 1886 Avoidance Algorithm", ACM SIGCOMM CCR 27(3)67--82, 1887 July 1997, 1888 . 1890 [Purple] Pletka, R., Waldvogel, M., and S. Mannal, "PURPLE: 1891 Predictive Active Queue Management Utilizing 1892 Congestion Information", Proc. Local Computer 1893 Networks (LCN 2003) , October 2003. 1895 [RFC2208] Mankin, A., Baker, F., Braden, B., Bradner, S., 1896 O'Dell, M., Romanow, A., Weinrib, A., and L. Zhang, 1897 "Resource ReSerVation Protocol (RSVP) Version 1 1898 Applicability Statement Some Guidelines on 1899 Deployment", RFC 2208, September 1997. 
1901 [RFC3514] Bellovin, S., "The Security Flag in the IPv4 1902 Header", RFC 3514, April 2003. 1904 [RFC3540] Spring, N., Wetherall, D., and D. Ely, "Robust 1905 Explicit Congestion Notification (ECN) Signaling 1906 with Nonces", RFC 3540, June 2003. 1908 [RFC3714] Floyd, S. and J. Kempf, "IAB Concerns Regarding 1909 Congestion Control for Voice Traffic in the 1910 Internet", RFC 3714, March 2004. 1912 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1913 Congestion Control Protocol (DCCP)", RFC 4340, 1914 March 2006. 1916 [RFC4341] Floyd, S. and E. Kohler, "Profile for Datagram 1917 Congestion Control Protocol (DCCP) Congestion 1918 Control ID 2: TCP-like Congestion Control", 1919 RFC 4341, March 2006. 1921 [RFC4342] Floyd, S., Kohler, E., and J. Padhye, "Profile for 1922 Datagram Congestion Control Protocol (DCCP) 1923 Congestion Control ID 3: TCP-Friendly Rate Control 1924 (TFRC)", RFC 4342, March 2006. 1926 [RFC5559] Eardley, P., "Pre-Congestion Notification (PCN) 1927 Architecture", RFC 5559, June 2009. 1929 [Re-PCN] Briscoe, B., "Emulating Border Flow Policing using 1930 Re-PCN on Bulk Data", 1931 draft-briscoe-re-pcn-border-cheat-03 (work in 1932 progress), October 2009. 1934 [Re-TCP] Briscoe, B., Jacquet, A., Moncaster, T., and A. 1935 Smith, "Re-ECN: Adding Accountability for Causing 1936 Congestion to TCP/IP", 1937 draft-briscoe-tsvwg-re-ecn-tcp-09 (work in 1938 progress), October 2010. 1940 [Re-fb] Briscoe, B., Jacquet, A., Di Cairano-Gilfedder, C., 1941 Salvatori, A., Soppera, A., and M. Koyabe, 1942 "Policing Congestion Response in an Internetwork 1943 Using Re-Feedback", ACM SIGCOMM CCR 35(4)277--288, 1944 August 2005, . 1947 [Savage99] Savage, S., Cardwell, N., Wetherall, D., and T. 1948 Anderson, "TCP congestion control with a 1949 misbehaving receiver", ACM SIGCOMM CCR 29(5), 1950 October 1999, 1951 . 1953 [Smart_rtg] Goldenberg, D., Qiu, L., Xie, H., Yang, Y., and Y. 1954 Zhang, "Optimizing Cost and Performance for 1955 Multihoming", ACM SIGCOMM CCR 34(4)79--92, 1956 October 2004, 1957 . 1959 [Steps_DoS] Handley, M. and A. Greenhalgh, "Steps towards a 1960 DoS-resistant Internet Architecture", Proc. ACM 1961 SIGCOMM workshop on Future directions in network 1962 architecture (FDNA'04) pp 49--56, August 2004. 1964 [Tussle] Clark, D., Sollins, K., Wroclawski, J., and R. 1965 Braden, "Tussle in Cyberspace: Defining Tomorrow's 1966 Internet", ACM SIGCOMM CCR 32(4)347--356, 1967 October 2002, . 1970 [XCHOKe] Chhabra, P., Chuig, S., Goel, A., John, A., Kumar, 1971 A., Saran, H., and R. Shorey, "XCHOKe: Malicious 1972 Source Control for Congestion Avoidance at Internet 1973 Gateways", Proceedings of IEEE International 1974 Conference on Network Protocols (ICNP-02) , 1975 November 2002, 1976 . 1978 [pBox] Floyd, S. and K. Fall, "Promoting the Use of End- 1979 to-End Congestion Control in the Internet", IEEE/ 1980 ACM Transactions on Networking 7(4) 458--472, 1981 August 1999, 1982 . 1984 [relax-fairness] Briscoe, B., Moncaster, T., and L. Burness, 1985 "Problem Statement: Transport Protocols Don't Have 1986 To Do Fairness", 1987 draft-briscoe-tsvwg-relax-fairness-01 (work in 1988 progress), July 2008. 1990 Appendix A. Example Egress Dropper Algorithm 1992 {ToDo: Write up the basic algorithm with flow state, then the 1993 aggregated one.} 1995 Appendix B. Policer Designs to ensure Congestion Responsiveness 1997 B.1. Per-user Policing 1999 User policing requires a policer on the ingress interface of the 2000 access router associated with the user. 
At that point, the traffic 2001 of the user hasn't diverged on different routes yet; nor has it mixed 2002 with traffic from other sources.
2004 In order to ensure that a user doesn't generate more congestion in 2005 the network than her due share, a modified bulk token-bucket is 2006 maintained with the following parameters:
2008 o b_0 the initial token level
2010 o r the filling rate
2012 o b_max the bucket depth
2014 The same token bucket algorithm is used as in many areas of 2015 networking, but how it is used is very different:
2017 o all traffic from a user over the lifetime of their subscription is 2018 policed in the same token bucket.
2020 o only positive, cautious and 2021 cancelled packets consume tokens.
2023 Such a policer will allow network operators to throttle the 2024 contribution of their users to network congestion. This will require 2025 the appropriate contractual terms to be in place between operators 2026 and users. For instance: a condition for a user to subscribe to a 2027 given network service may be that she should not cause more than a 2028 volume C_user of congestion over a reference period T_user, although 2029 she may carry forward up to N_user times her allowance at the end of 2030 each period. These terms directly set the parameters of the user 2031 policer:
2033 o b_0 = C_user
2035 o r = C_user/T_user
2037 o b_max = b_0 * (N_user + 1)
2039 Besides the congestion budget policer above, another user policer may 2040 be necessary to further rate-limit cautious packets, if they are to 2041 be marked rather than dropped (see discussion in [ref other 2042 document]). Rate-limiting cautious packets will prevent high bursts 2043 of new flow arrivals, which is a very useful feature in DoS 2044 prevention. A condition to subscribe to a given network service 2045 would have to be that a user should not generate more than C_cautious 2046 cautious packets, over a reference period T_cautious, with no option 2047 to carry forward any of the allowance at the end of each period. 2048 These terms directly set the parameters of the cautious packet 2049 policer:
2051 o b_0 = C_cautious
2053 o r = C_cautious/T_cautious
2055 o b_max = b_0
2057 T_cautious should be a much shorter period than T_user: for instance 2058 T_cautious could be in the order of minutes while T_user could be in 2059 the order of weeks.
2061 B.2. Per-flow Rate Policing
2063 Whilst we believe that simple per-user policing would be sufficient 2064 to ensure senders comply with congestion control, some operators may 2065 wish to police the rate response of each flow to congestion as well. 2066 Although we do not believe this will be necessary, we include this 2067 section to show how one could perform per-flow policing using 2068 enforcement of TCP-fairness as an example. Per-flow policing aims to 2069 enforce congestion responsiveness on the shortest information 2070 timescale on a network path: packet roundtrips.
2072 This again requires that the appropriate terms be agreed between a 2073 network operator and its users, where a congestion responsiveness 2074 policy might be required for the use of a given network service 2075 (perhaps unless the user specifically requests otherwise).
2077 As an example, we describe below how a rate adaptation policer can be 2078 designed when the applicable rate adaptation policy is TCP- 2079 compliance.
In that context, the average throughput of a flow will 2080 be expected to be bounded by the value of the TCP throughput during 2081 congestion avoidance, given by Mathis' formula [Mathis97]:
2083 x_TCP = k * s / ( T * sqrt(m) )
2085 where:
2087 o x_TCP is the throughput of the TCP flow in bytes per second,
2089 o k is a constant upper-bounded by sqrt(3/2),
2091 o s is the average packet size of the flow,
2093 o T is the roundtrip time of the flow,
2095 o m is the congestion level experienced by the flow.
2097 We define the marking period N=1/m which represents the average 2098 number of packets between two positive or cancelled packets. Mathis' 2099 formula can be re-written as:
2101 x_TCP = k*s*sqrt(N)/T
2103 We can then get the average inter-mark time in a compliant TCP flow, 2104 dt_TCP, by solving (x_TCP/s)*dt_TCP = N, which gives
2106 dt_TCP = sqrt(N)*T/k
2108 We rely on this equation for the design of a rate-adaptation policer 2109 as a variation of a token bucket. In that case a policer has to be 2110 set up for each policed flow. This may be triggered by cautious 2111 packets, with the remainder of flows being all rate limited together 2112 if they do not start with a cautious packet.
2114 Where maintaining per flow state is not a problem, for instance on 2115 some access routers, systematic per-flow policing may be considered. 2116 Should per-flow state be more constrained, rate adaptation policing 2117 could be limited to a random sample of flows exhibiting positive or 2118 cancelled packets.
2120 As in the case of user policing, only positive or cancelled packets 2121 will consume tokens; however, the amount of tokens consumed will 2122 depend on the congestion signal.
2124 When a new rate adaptation policer is set up for flow j, the 2125 following state is created:
2127 o a token bucket b_j of depth b_max starting at level b_0
2129 o a timestamp t_j = timenow()
2131 o a counter N_j = 0
2133 o a roundtrip estimate T_j
2135 o a filling rate r
2137 When the policing node forwards a packet of flow j that is neither 2138 positive nor cancelled:
2140 o the counter is incremented: N_j += 1
2142 When the policing node forwards a positive or cancelled packet of 2143 flow j:
2145 o the counter is incremented: N_j += 1
2147 o the token level is adjusted: b_j += r*(timenow()-t_j) - sqrt(N_j)* 2148 T_j/k
2150 o the counter is reset: N_j = 0
2152 o the timer is reset: t_j = timenow()
2154 An implementation example will be given in a later draft that avoids 2155 having to extract the square root.
2157 Analysis: For a TCP flow, for r = 1 token/sec, on average,
2159 r*(timenow()-t_j) - sqrt(N_j)*T_j/k = dt_TCP - sqrt(N)*T/k = 0
2161 This means that the token level will fluctuate around its initial 2162 level. The depth b_max of the bucket sets the timescale on which the 2163 rate adaptation policy is performed while the filling rate r sets the 2164 trade-off between responsiveness and robustness:
2166 o the higher b_max, the longer it will take to catch greedy flows
2168 o the higher r, the fewer false positives (greedy verdict on 2169 compliant flows) but the more false negatives (compliant verdict 2170 on greedy flows)
2172 This rate adaptation policer requires the availability of a roundtrip 2173 estimate, which may be obtained for instance from the application of 2174 re-feedback to the downstream delay (Appendix D) or from passive 2175 estimation [Jiang02].
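To make the token bucket arithmetic above concrete, the sketch below restates the per-flow rate-adaptation policer in C, framed in the style of the pseudo-code in Appendix C.1. It is illustrative only, not part of the protocol: the structure layout, the positive_or_cancelled flag supplied by the caller and the constant K are assumptions consistent with the description above, and the value returned merely flags that the token level has gone negative, which the next paragraph uses as the drop criterion.

====================================================================
/* Illustrative sketch only: per-flow rate-adaptation policer.
 * The caller supplies positive_or_cancelled per packet; K and the
 * returned drop indication are assumptions, not protocol.          */

#include <math.h>

#define K 1.2247            /* k, upper-bounded by sqrt(3/2) [Mathis97] */

struct flow_policer {
    double b;               /* token level, initialised to b_0          */
    double b_max;           /* bucket depth                             */
    double t;               /* time of last positive/cancelled packet   */
    double T;               /* roundtrip time estimate T_j              */
    double r;               /* filling rate (tokens per second)         */
    unsigned long N;        /* packets since last positive/cancelled    */
};

/* Called for every forwarded packet of flow j.  Returns 1 if this
 * packet takes the token level negative.                            */
int police_packet(struct flow_policer *p,
                  int positive_or_cancelled, double now)
{
    p->N++;                             /* count every packet         */
    if (!positive_or_cancelled)
        return 0;                       /* only marks consume tokens  */

    /* refill for the elapsed time, then charge the expected          */
    /* inter-mark time of a compliant TCP flow: sqrt(N_j)*T_j/k       */
    p->b += p->r * (now - p->t) - sqrt((double)p->N) * p->T / K;
    if (p->b > p->b_max)
        p->b = p->b_max;
    p->N = 0;
    p->t = now;
    return p->b < 0.0;
}
====================================================================

A policer instance would be created per policed flow, for instance when its cautious packet is first seen, with b set to b_0, t to the current time and N to zero.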
2177 When the bucket of a policer located at the access router (whether it 2178 is a per-user policer or a per-flow policer) becomes empty, the 2179 access router SHOULD drop at least all packets causing the token 2180 level to become negative. The network operator MAY take further 2181 sanctions if the token level of the per-flow policers associated with 2182 a user becomes negative.
2184 Appendix C. Downstream Congestion Metering Algorithms
2186 C.1. Bulk Downstream Congestion Metering Algorithm
2188 To meter the bulk amount of downstream congestion in traffic crossing 2189 an inter-domain border, an algorithm is needed that accumulates the 2190 size of positive packets and subtracts the size of negative packets. 2191 We maintain two counters:
2193 V_b: accumulated congestion volume
2195 B: total data volume (in case it is needed)
2197 A suitable pseudo-code algorithm for a border router is as follows:
2199 ====================================================================
2200 V_b = 0
2201 B = 0
2202 for each Re-ECN-capable packet {
2203 b = readLength(packet) /* set b to packet size */
2204 B += b /* accumulate total volume */
2205 if readEECN(packet) == (positive || cautious) {
2206 V_b += b /* increment... */
2207 } elseif readEECN(packet) == negative {
2208 V_b -= b /* ...or decrement V_b... */
2209 } /*...depending on EECN field */
2210 }
2211 ====================================================================
2213 At the end of an accounting period this counter V_b represents the 2214 congestion volume that penalties could be applied to, as described in 2215 Section 4.5.
2217 For instance, accumulated volume of congestion through a border 2218 interface over a month might be V_b = 5PB (petabyte = 10^15 byte). 2219 This might have resulted from an average downstream congestion level 2220 of 1% on an accumulated total data volume of B = 500PB.
2222 C.2. Inflation Factor for Persistently Negative Flows
2224 The following process is suggested to complement the simple algorithm 2225 above in order to protect against the various attacks from 2226 persistently negative flows described in Section 4.5. As explained 2227 in that section, the most important and first step is to estimate the 2228 contribution of persistently negative flows to the bulk volume of 2229 downstream congestion and to inflate this bulk volume as if these 2230 flows weren't there. The process below has been designed to give an 2231 unbiased estimate, but it may be possible to define other processes 2232 that achieve similar ends.
2234 While the above simple metering algorithm is counting the bulk of 2235 traffic over an accounting period, the meter should also select a 2236 subset of the whole flow ID space that is small enough to be able to 2237 measure realistically but large enough to give a representative 2238 sample. Many different samples of different subsets of the ID space 2239 should be taken at different times during the accounting period, 2240 preferably covering the whole ID space. During each sample, the meter 2241 should count the volume of positive packets and subtract the volume 2242 of negative packets, maintaining a separate account for each flow in 2243 the sample. Each sample should run a lot longer than the large 2244 majority of flows, to avoid a bias from missing the starts and ends 2245 of flows, which tend to be positive and negative respectively.
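Appendix C.1 already gives pseudo-code for the bulk counters, so the sketch below only adds the per-flow sample accounts just described, again in C. readLength() and readEECN() are the helpers assumed in the C.1 pseudo-code; flow_id(), in_sampled_subset() and the table size are further assumptions made purely for illustration, not defined by this document.

====================================================================
/* Illustrative sketch only: per-flow accounts for a sampled subset
 * of the flow ID space.  All extern helpers are assumptions.       */

enum eecn { NEUTRAL, POSITIVE, CAUTIOUS, NEGATIVE };

struct packet;                        /* opaque packet handle         */
extern double    readLength(const struct packet *p);
extern enum eecn readEECN(const struct packet *p);
extern unsigned  flow_id(const struct packet *p);
extern int       in_sampled_subset(const struct packet *p);

#define MAX_SAMPLED_FLOWS 4096

/* one account per flow in the current sample: bytes of positive and
 * cautious packets minus bytes of negative packets                  */
static double acct[MAX_SAMPLED_FLOWS];

void sample_packet(const struct packet *pkt)
{
    if (!in_sampled_subset(pkt))
        return;                       /* flow outside this sample     */

    double b = readLength(pkt);
    unsigned i = flow_id(pkt) % MAX_SAMPLED_FLOWS;

    switch (readEECN(pkt)) {
    case POSITIVE:
    case CAUTIOUS:
        acct[i] += b;
        break;
    case NEGATIVE:
        acct[i] -= b;
        break;
    default:                          /* neutral packets not counted  */
        break;
    }
}
====================================================================

At the end of the accounting period these per-flow accounts feed the totals V_{bI} and V_{fI} used in the calculation described next.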
Once the accounting period finishes, the meter should calculate, for each sample, the total of the accounts V_{bI} over the subset of flows I in the sample, and the total V_{fI} over the same subset but excluding flows with a negative account.  Then the weighted mean over all the samples should be taken:

   a_S = sum_{for all I} V_{fI} / sum_{for all I} V_{bI}

If V_b is the result of the bulk accounting algorithm over the accounting period (Appendix C.1), it can be inflated by this factor to give a good, unbiased estimate, a_S * V_b, of the volume of downstream congestion over the accounting period, without being polluted by the effect of persistently negative flows.

Appendix D.  Re-TTL

This appendix gives an overview of a proposal to overload the TTL field in the IP header in order to monitor downstream propagation delay.  It is included to show that it would be possible to take account of RTT if that were deemed desirable.

Delay re-feedback can be achieved by overloading the TTL field, without changing IP or router TTL processing.  A target value for TTL at the destination would need standardising, say 16.  If the path hop count increased by more than 16 during a routing change, it would temporarily be mistaken for a routing loop, so this target would need to be chosen to exceed typical hop count increases.  The TCP wire protocol and handlers would need modifying to feed back the destination TTL and to initialise it.  It would also be necessary to standardise the unit of TTL in terms of real time (as was the original intent in the early days of the Internet).

In the longer term, precision could be improved if routers decremented TTL to represent the exact propagation delay to the next router.  That is, for a router to decrement TTL by, say, 1.8 time units on average, it would alternate the decrement applied to each packet between 1 and 2 in the ratio 1:4 (one packet decremented by 1 for every four decremented by 2).  Although this might sometimes require a seemingly dangerous decrement of zero, a packet in a loop would still decrement to zero after 255 time units on average.  As more routers were upgraded to this more accurate TTL decrement, path delay estimates would become increasingly accurate, despite the presence of some RFC3168-compliant routers that continued to always decrement the TTL by 1.

Appendix E.  Argument for holding back the ECN nonce

The ECN nonce is a mechanism that allows a /sending/ transport to detect whether drop or ECN marking at a congested router has been suppressed by a node somewhere in the feedback loop (another router or the receiver).

Space for the ECN nonce was set aside in [RFC3168] (currently a proposed standard), while the full nonce mechanism is specified in [RFC3540] (currently experimental).  The DCCP specification [RFC4340] (currently a proposed standard) requires that "Each DCCP sender SHOULD set ECN Nonces on its packets...".  It also mandates, as a requirement for all CCID profiles, that "Any newly defined acknowledgement mechanism MUST include a way to transmit ECN Nonce Echoes back to the sender."  Therefore:

o  The CCID profile for TCP-like Congestion Control [RFC4341] (currently a proposed standard) says "The sender will use the ECN Nonce for data packets, and the receiver will echo those nonces in its Ack Vectors."
o  The CCID profile for TCP-Friendly Rate Control (TFRC) [RFC4342] recommends that "The sender [use] Loss Intervals options' ECN Nonce Echoes (and possibly any Ack Vectors' ECN Nonce Echoes) to probabilistically verify that the receiver is correctly reporting all dropped or marked packets."

The primary function of the ECN nonce is to protect the integrity of the information about congestion: ECN marks and packet drops.  However, when the nonce is used to protect the integrity of information about packet drops, rather than ECN marks, a transport layer nonce will always be sufficient (because a drop loses the transport header as well as the ECN field in the network header), which would avoid using scarce IP header codepoint space.  Similarly, a transport layer nonce would protect against a receiver sending early acknowledgements [Savage99].

If the ECN nonce reveals integrity problems with the information about congestion, the sending transport can use that knowledge for two functions:

o  to protect its own resources, by allocating them in proportion to the rates that each network path can sustain, based on congestion control;

o  to protect congested routers in the network, by drastically slowing down its connection to any destination for which the congestion information appears to have been corrupted.

If the sending transport chooses to act in the interests of congested routers, it can reduce its rate when it detects that some malicious party in the feedback loop may be suppressing ECN feedback.  But this is only useful to congested routers if /all/ the senders using them are trusted to act in those routers' interest.

In the end, the only essential use of a network layer nonce is when sending transports (e.g. large servers) want to allocate their /own/ resources in proportion to the rates that each network path can sustain, based on congestion control.  In that case, the nonce allows senders to be assured that they aren't being duped into giving more of their own resources to a particular flow.  And if suppression of congestion feedback is detected, the sending transport can rate-limit the offending connection to protect its own resources.  Certainly, this is a useful function, but the IETF should carefully decide whether such a single, very specific case warrants IP header space.

In contrast, Re-ECN allows all routers to fully protect themselves from such attacks, without having to trust anyone: senders, receivers or neighbouring networks.  Re-ECN is therefore proposed in preference to the ECN nonce on the basis that it addresses the generic problem of accountability for congestion of a network's resources at the IP layer.

Delaying the ECN nonce is justified because the applicability of the ECN nonce seems too limited to justify it consuming one of the codepoints of the two-bit ECN field in the IP header.  It therefore seems prudent to allow time for an alternative way to be found to perform the one function for which the nonce is essential.

Moreover, while we have re-designed the Re-ECN codepoints so that they do not prevent the ECN nonce progressing, the same is not true the other way round.
If the ECN nonce started to see some deployment (perhaps because it was blessed with proposed standard status), incremental deployment of Re-ECN would effectively be impossible, because Re-ECN marking fractions at inter-domain borders would be polluted by unknown levels of nonce traffic.

The authors are aware that Re-ECN must prove it has the potential it claims if it is to displace the nonce.  Therefore, every effort has been made to complete a comprehensive specification of Re-ECN so that its potential can be assessed.  We therefore seek the opinion of the Internet community on whether the Re-ECN protocol is sufficiently useful to warrant standards action.

Authors' Addresses

   Bob Briscoe (editor)
   BT
   B54/77, Adastral Park
   Martlesham Heath
   Ipswich IP5 3RE
   UK

   Phone: +44 1473 645196
   EMail: bob.briscoe@bt.com
   URI:   http://bobbriscoe.net/

   Arnaud Jacquet
   BT
   B54/70, Adastral Park
   Martlesham Heath
   Ipswich IP5 3RE
   UK

   Phone: +44 1473 647284
   EMail: arnaud.jacquet@bt.com
   URI:

   Toby Moncaster
   Moncaster.com
   Dukes
   Layer Marney
   Colchester CO5 9UZ
   UK

   EMail: toby@moncaster.com

   Alan Smith
   BT
   B54/76, Adastral Park
   Martlesham Heath
   Ipswich IP5 3RE
   UK

   Phone: +44 1473 640404
   EMail: alan.p.smith@bt.com