idnits 2.17.1 draft-ietf-opsawg-large-flow-load-balancing-14.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------

     No issues found here.

  Miscellaneous warnings:
  ----------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line
     does not match the current year

  -- The document date (September 26, 2014) is 3498 days in the past.
     Is this intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------

  == Outdated reference: A later version (-08) exists of
     draft-sridharan-virtualization-nvgre-05

  -- Obsolete informational reference (is this intentional?): RFC 7223
     (Obsoleted by RFC 8343)

  == Outdated reference: A later version (-08) exists of
     draft-davie-stt-06

     Summary: 0 errors (**), 0 flaws (~~), 3 warnings (==), 2 comments (--).

     Run idnits with the --verbose option for more detailed information
     about the items above.
------------------------------------------------------------------------

OPSAWG                                                       R. Krishnan
Internet Draft                                    Brocade Communications
Intended status: Informational                                   L. Yong
Expires: March 13, 2015                                       Huawei USA
                                                             A. Ghanwani
                                                                    Dell
                                                                 Ning So
                                                     Tata Communications
                                                           B. Khasnabish
                                                         ZTE Corporation
                                                      September 26, 2014

    Mechanisms for Optimizing LAG/ECMP Component Link Utilization in
                                Networks

           draft-ietf-opsawg-large-flow-load-balancing-14.txt

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79. This document may not be modified,
   and derivative works of it may not be created, except to publish it
   as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on March 26, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Abstract

   Demands on networking infrastructure are growing exponentially due
   to bandwidth-hungry applications such as rich media applications and
   inter-data-center communications. In this context, it is important
   to optimally use the bandwidth in wired networks that extensively
   use link aggregation groups and equal-cost multi-paths as techniques
   for bandwidth scaling. This draft explores some of the mechanisms
   useful for achieving this.

Table of Contents

   1. Introduction
      1.1. Acronyms
      1.2. Terminology
   2. Flow Categorization
   3. Hash-based Load Distribution in LAG/ECMP
   4. Mechanisms for Optimizing LAG/ECMP Component Link Utilization
      4.1. Differences in LAG vs ECMP
      4.2. Operational Overview
      4.3. Large Flow Recognition
           4.3.1. Flow Identification
           4.3.2. Criteria and Techniques for Large Flow Recognition
           4.3.3. Sampling Techniques
           4.3.4. Inline Data Path Measurement
           4.3.5. Use of Multiple Methods for Large Flow Recognition
      4.4. Load Rebalancing Options
           4.4.1. Alternative Placement of Large Flows
           4.4.2. Redistributing Small Flows
           4.4.3. Component Link Protection Considerations
           4.4.4. Load Rebalancing Algorithms
           4.4.5. Load Rebalancing Example
   5. Information Model for Flow Rebalancing
      5.1. Configuration Parameters for Flow Rebalancing
      5.2. System Configuration and Identification Parameters
      5.3. Information for Alternative Placement of Large Flows
      5.4. Information for Redistribution of Small Flows
      5.5. Export of Flow Information
      5.6. Monitoring Information
           5.6.1. Interface (link) utilization
           5.6.2. Other monitoring information
   6. Operational Considerations
      6.1. Rebalancing Frequency
      6.2. Handling Route Changes
      6.3. Forwarding Resources
   7. IANA Considerations
   8. Security Considerations
   9. Contributing Authors
   10. Acknowledgements
   11. References
      11.1. Normative References
      11.2. Informative References
1. Introduction

   Networks extensively use link aggregation groups (LAG) [802.1AX] and
   equal-cost multi-paths (ECMP) [RFC 2991] as techniques for capacity
   scaling. For the problems addressed by this document, network
   traffic can be predominantly categorized into two traffic types:
   long-lived large flows and other flows. These other flows, which
   include long-lived small flows, short-lived small flows, and
   short-lived large flows, are referred to as "small flows" in this
   document. Long-lived large flows are simply referred to as "large
   flows."

   Stateless hash-based techniques [ITCOM, RFC 2991, RFC 2992, RFC
   6790] are often used to distribute both large flows and small flows
   over the component links in a LAG/ECMP. However, the traffic may not
   be evenly distributed over the component links due to the traffic
   pattern.

   This draft describes mechanisms for optimizing LAG/ECMP component
   link utilization while using hash-based techniques. The mechanisms
   comprise the following steps -- recognizing large flows in a router,
   and assigning the large flows to specific LAG/ECMP component links
   or redistributing the small flows when a component link on the
   router is congested.

   It is useful to keep in mind that in typical use cases for this
   mechanism the large flows are those that consume a significant
   amount of bandwidth on a link, e.g., greater than 5% of link
   bandwidth. The number of such flows would necessarily be fairly
   small, e.g., on the order of tens or hundreds per LAG/ECMP. In other
   words, the number of large flows is NOT expected to be on the order
   of millions of flows. Examples of such large flows would be IPsec
   tunnels in service provider backbone networks or storage backup
   traffic in data center networks.

1.1. Acronyms

   DoS: Denial of Service

   ECMP: Equal-Cost Multi-Path

   GRE: Generic Routing Encapsulation

   LAG: Link Aggregation Group

   MPLS: Multiprotocol Label Switching

   NVGRE: Network Virtualization using Generic Routing Encapsulation

   PBR: Policy-Based Routing

   QoS: Quality of Service

   STT: Stateless Transport Tunneling

   TCAM: Ternary Content Addressable Memory

   VXLAN: Virtual eXtensible LAN

1.2. Terminology

   Central management entity: Refers to an entity that is capable of
   monitoring information about link utilization and flows in routers
   across the network and may be capable of making traffic engineering
   decisions for placement of large flows. It may include the functions
   of a collector if the routers employ a sampling technique
   [RFC 7011].

   ECMP component link: An individual nexthop within an ECMP group. An
   ECMP component link may itself comprise a LAG.

   ECMP table: A table that is used as the nexthop of an ECMP route.
   It comprises the set of ECMP component links and the weights
   associated with each of those ECMP component links. The input for
   looking up the table is the hash value for the packet, and the
   weights are used to determine which values of the hash function map
   to a given ECMP component link.

   LAG component link: An individual link within a LAG. A LAG component
   link is typically a physical link.

   LAG table: A table that is used at an output port that is a LAG.
   It comprises the set of LAG component links and the weights
   associated with each of those component links. The input for looking
   up the table is the hash value for the packet, and the weights are
   used to determine which values of the hash function map to a given
   LAG component link.

   Large flow(s): Refers to long-lived large flow(s).

   Small flow(s): Refers to any of, or a combination of, long-lived
   small flow(s), short-lived small flow(s), and short-lived large
   flow(s).

2. Flow Categorization

   In general, based on its size and duration, a flow can be
   categorized into one of the following four types, as shown in
   Figure 1:

   (a) Short-lived Large Flow (SLLF),
   (b) Short-lived Small Flow (SLSF),
   (c) Long-lived Large Flow (LLLF), and
   (d) Long-lived Small Flow (LLSF).

   Flow Bandwidth
      ^
      |--------------------|--------------------|
      |                    |                    |
Large |       SLLF         |       LLLF         |
Flow  |                    |                    |
      |--------------------|--------------------|
      |                    |                    |
Small |       SLSF         |       LLSF         |
Flow  |                    |                    |
      +--------------------+--------------------+-->Flow Duration
           Short-lived          Long-lived
              Flow                 Flow

                    Figure 1: Flow Categorization

   In this document, as mentioned earlier, we categorize long-lived
   large flows as "large flows", and all of the others -- long-lived
   small flows, short-lived small flows, and short-lived large flows --
   as "small flows".
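   To make the quadrants of Figure 1 concrete, the following Python
   sketch classifies a flow given its average rate and duration. The
   threshold values here are hypothetical placeholders chosen purely
   for illustration; as Section 4.3.2 discusses, real deployments
   should make such parameters programmable.

      # Hypothetical thresholds, for illustration only; actual values
      # are deployment specific and should be programmable.
      LARGE_BW_FRACTION = 0.05   # fraction of link bandwidth
      LONG_LIVED_SECS = 1.0      # minimum duration to be long-lived

      def categorize(avg_rate_bps, duration_secs, link_speed_bps):
          """Map a flow onto the four quadrants of Figure 1."""
          size = ("L" if avg_rate_bps >= LARGE_BW_FRACTION * link_speed_bps
                  else "S")
          life = "LL" if duration_secs >= LONG_LIVED_SECS else "SL"
          return {"SLL": "SLLF", "SLS": "SLSF",
                  "LLL": "LLLF", "LLS": "LLSF"}[life + size]

      # A 600 Mbps flow lasting 30 s on a 10 Gbps link is an LLLF,
      # i.e., a "large flow" in the sense of this document.
      print(categorize(600e6, 30, 10e9))   # -> "LLLF"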
3. Hash-based Load Distribution in LAG/ECMP

   Hash-based techniques are often used for traffic load balancing to
   select among multiple available paths within a LAG/ECMP group. The
   advantages of hash-based techniques for load distribution are the
   preservation of the packet sequence in a flow and the real-time
   distribution of traffic without maintaining per-flow state in the
   router. Hash-based techniques use a combination of fields in the
   packet's headers to identify a flow, and the hash function computed
   using these fields is used to generate a unique number that
   identifies a link/path in a LAG/ECMP group. The result of the
   hashing procedure is a many-to-one mapping of flows to component
   links.
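   The following Python sketch illustrates such a stateless,
   weight-aware table lookup. It is a minimal model, not a router
   implementation: hardware typically uses a fixed-function hash rather
   than the MD5 digest assumed here for convenience.

      import hashlib

      def select_component_link(flow_key, weights):
          """Map a flow to a component link.  flow_key is a tuple of
          header fields; weights[i] is the share of the hash space
          given to component link i.  Every packet of a flow carries
          the same fields, so the flow always maps to the same link
          (packet sequence preserved), and many flows share one link
          (many-to-one), all without per-flow state."""
          digest = hashlib.md5(repr(flow_key).encode()).digest()
          h = int.from_bytes(digest[:4], "big") % sum(weights)
          for link, weight in enumerate(weights):
              if h < weight:
                  return link
              h -= weight

      # Three equal-weight component links, as in the LAG of Figure 2.
      key = ("10.0.0.1", "10.0.0.2", 6, 12345, 80)
      print(select_component_link(key, [1, 1, 1]))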
   If the traffic mix constitutes flows such that the result of the
   hash function across these flows is fairly uniform so that a similar
   number of flows is mapped to each component link, if the individual
   flow rates are much smaller than the link capacity, and if the rate
   differences are not dramatic, hash-based techniques produce good
   results with respect to utilization of the individual component
   links. However, if one or more of these conditions are not met,
   hash-based techniques may result in an imbalance in the loads on
   individual component links.

   One example is illustrated in Figure 2. In Figure 2, there are two
   routers, R1 and R2, and there is a LAG between them which has 3
   component links (1), (2), (3). There are a total of 10 flows that
   need to be distributed across the links in this LAG. The result of
   applying the hash-based technique is as follows:

   . Component link (1) has 3 flows -- 2 small flows and 1 large flow
     -- and the link utilization is normal.

   . Component link (2) has 3 flows -- 3 small flows and no large flow
     -- and the link utilization is light.

     o The absence of any large flow leaves this component link
       under-utilized.

   . Component link (3) has 4 flows -- 2 small flows and 2 large flows
     -- and the link capacity is exceeded, resulting in congestion.

     o The presence of 2 large flows causes congestion on this
       component link.

           +-----------+     ->     +-----------+
           |           |     ->     |           |
           |           |    ===>    |           |
           |        (1)|------------|(1)        |
           |           |     ->     |           |
           |           |     ->     |           |
           |   (R1)    |     ->     |    (R2)   |
           |        (2)|------------|(2)        |
           |           |     ->     |           |
           |           |     ->     |           |
           |           |    ===>    |           |
           |           |    ===>    |           |
           |        (3)|------------|(3)        |
           |           |            |           |
           +-----------+            +-----------+

           Where: ->   small flow
                  ===> large flow

             Figure 2: Unevenly Utilized Component Links

   This document presents mechanisms for addressing the imbalance in
   load distribution resulting from commonly used hash-based techniques
   for LAG/ECMP that were shown in the above example. The mechanisms
   use large flow awareness to compensate for the imbalance in load
   distribution.

4. Mechanisms for Optimizing LAG/ECMP Component Link Utilization

   The suggested mechanisms in this draft constitute a local
   optimization solution; they are local in the sense that both the
   identification of large flows and the re-balancing of the load can
   be accomplished completely within individual nodes in the network,
   without the need for interaction with other nodes.

   This approach may not yield a global optimization of the placement
   of large flows across multiple nodes in a network, which may be
   desirable in some networks. On the other hand, a local approach may
   be adequate for some environments for the following reasons:

   1) Different links within a network experience different levels of
   utilization and, thus, a "targeted" solution is needed for those
   hot-spots in the network. An example is the utilization of a LAG
   between two routers that needs to be optimized.

   2) Some networks may lack end-to-end visibility, e.g., when a
   certain network, under the control of a given operator, is a transit
   network for traffic from other networks that are not under the
   control of the same operator.

4.1. Differences in LAG vs ECMP

   While the mechanisms explained herein are applicable to both LAGs
   and ECMP groups, it is useful to note that there are some key
   differences between the two that may impact how effective the
   mechanism is. This relates, in part, to the localized information
   with which the scheme is intended to operate.

   A LAG is usually established across links that are between 2
   adjacent routers. As a result, the scope of the problem of
   optimizing the bandwidth utilization on the component links is
   fairly narrow. It simply involves re-balancing the load across the
   component links between these two routers, and there is no impact
   whatsoever to other parts of the network. The scheme works equally
   well for unicast and multicast flows.

   On the other hand, with ECMP, redistributing the load across
   component links that are part of the ECMP group may impact traffic
   patterns at all of the nodes that are downstream of the given router
   between itself and the destination. The local optimization may
   result in congestion at a downstream node. (In its simplest form, an
   ECMP group may be used to distribute traffic on component links that
   are between two adjacent routers, and in that case, the ECMP group
   is no different than a LAG for the purpose of this discussion. It
   should be noted that an ECMP component link may itself comprise a
   LAG, in which case the scheme may be further applied to the
   component links within the LAG.)
              +-----+        +-----+
              | S1  |        | S2  |
              +-----+        +-----+
               / \ \          / /\
              /   +---------+ /  \
             /   /        \  \/    \
            /   /          \ +------+ \
           /   /            \ /     \  \
          +-----+        +-----+       +-----+
          | L1  |        | L2  |       | L3  |
          +-----+        +-----+       +-----+

                Figure 3: Two-level Clos Network

   To demonstrate the limitations of local optimization, consider the
   two-level Clos network topology shown in Figure 3, with three leaf
   nodes (L1, L2, L3) and two spine nodes (S1, S2). Assume all of the
   links are 10 Gbps.

   Let L1 have two flows of 4 Gbps each towards L3, and let L2 have one
   flow of 7 Gbps, also towards L3. If L1 balances the load optimally
   between S1 and S2, and L2 sends the flow via S1, then the downlink
   from S1 to L3 would get congested, resulting in packet discards. On
   the other hand, if L1 had sent both its flows towards S1 and L2 had
   sent its flow towards S2, there would have been no congestion at
   either S1 or S2.

   The other issue with applying this scheme to ECMP groups is that it
   may not apply equally to unicast and multicast traffic because of
   the way multicast trees are constructed.

   Finally, it is possible for a single physical link to participate as
   a component link in multiple ECMP groups, whereas with LAGs, a link
   can participate as a component link of only one LAG.

4.2. Operational Overview

   The various steps in optimizing LAG/ECMP component link utilization
   in networks are detailed below:

   Step 1) This involves large flow recognition in routers and
   maintaining the mapping of each large flow to the component link
   that it uses. The recognition of large flows is explained in Section
   4.3.

   Step 2) The egress component links are periodically scanned for link
   utilization, and the imbalance for the LAG/ECMP group is monitored.
   If the imbalance exceeds a certain imbalance threshold, then
   re-balancing is triggered. Measurement of the imbalance is discussed
   further in Section 5.1. Additional criteria may also be used to
   determine whether or not to trigger rebalancing, such as the maximum
   utilization of any of the component links, in addition to the
   imbalance. The use of sampling techniques for the measurement of
   egress component link utilization, including the issues of depending
   on ingress sampling for these measurements, is discussed in Section
   4.3.3.

   Step 3) As a part of rebalancing, the operator can choose to
   rebalance the large flows onto lightly loaded component links of the
   LAG/ECMP group, redistribute the small flows on the congested link
   to other component links of the group, or do a combination of both.

   All of the steps identified above can be done locally within the
   router itself or could involve the use of a central management
   entity.

   Providing large flow information to a central management entity
   provides the capability to globally optimize flow distribution as
   described in Section 4.1. Consider the following example. A router
   may have 3 ECMP nexthops that lead down paths P1, P2, and P3. A
   couple of hops downstream on path P1 there may be a congested link,
   while paths P2 and P3 may be under-utilized. This is something that
   the local router does not have visibility into. With the help of a
   central management entity, the operator could redistribute some of
   the flows from P1 to P2 and/or P3, resulting in a more optimized
   flow of traffic.

   The mechanisms described above are especially useful when bundling
   links of different bandwidths, e.g., 10 Gbps and 100 Gbps, as
   described in [ID.ietf-rtgwg-cl-requirement].
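   As an illustration of the trigger condition in Step 2, the Python
   sketch below computes the imbalance measure defined in Section 5.1
   from per-link utilizations and speeds and combines it with an
   optional maximum-utilization criterion. It is a minimal sketch; how
   the utilizations are obtained is covered in Sections 4.3.3 and
   5.6.1.

      def should_rebalance(link_utils, link_speeds_bps,
                           imbalance_threshold, max_util_threshold=1.0):
          """Decide whether to trigger re-balancing (Step 2), using
          the imbalance measure of Section 5.1 (bandwidth-weighted
          mean utilization) plus a per-link maximum-utilization
          criterion."""
          total_bw = sum(link_speeds_bps)
          u_ave = sum(u * b for u, b in
                      zip(link_utils, link_speeds_bps)) / total_bw
          imbalance = max(abs(u - u_ave) for u in link_utils)
          return (imbalance > imbalance_threshold or
                  max(link_utils) > max_util_threshold)

      # Figure 2: link (3) congested, link (2) lightly loaded.
      print(should_rebalance([0.6, 0.3, 1.0], [10e9] * 3, 0.2))  # True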
4.3. Large Flow Recognition

4.3.1. Flow Identification

   A flow (large flow or small flow) can be defined as a sequence of
   packets for which ordered delivery should be maintained. Flows are
   typically identified using one or more fields from the packet
   header, for example:

   . Layer 2: Source MAC address, destination MAC address, VLAN ID.

   . IP header: IP protocol, IP source address, IP destination
     address, flow label (IPv6 only).

   . Transport protocol header: Source port number, destination port
     number. These apply to protocols such as TCP, UDP, and SCTP.

   . MPLS labels.

   For tunneling protocols like Generic Routing Encapsulation (GRE)
   [RFC 2784], Virtual eXtensible Local Area Network (VXLAN)
   [RFC 7348], Network Virtualization using Generic Routing
   Encapsulation (NVGRE) [NVGRE], Stateless Transport Tunneling (STT)
   [STT], Layer 2 Tunneling Protocol (L2TP) [RFC 3931], etc., flow
   identification is possible based on inner and/or outer headers as
   well as fields introduced by the tunnel header, as any or all such
   fields may be used for load balancing decisions [RFC 5640]. The
   above list is not exhaustive. The mechanisms described in this
   document are agnostic to the fields that are used for flow
   identification.

   This method of flow identification is consistent with that of IPFIX
   [RFC 7011].
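   A minimal sketch of flow-key construction is shown below. The
   parsed-packet dictionary and its field names are hypothetical; a
   real implementation would extract these fields in the data path,
   and, as noted above, the mechanisms in this document do not depend
   on the particular field set chosen.

      def flow_key(pkt):
          """Build a flow identifier from packet header fields.
          'pkt' is a hypothetical parsed-packet dict; absent fields
          (e.g., the flow label on IPv4 packets) default to None."""
          return (pkt.get("ip_src"), pkt.get("ip_dst"),
                  pkt.get("ip_proto"), pkt.get("src_port"),
                  pkt.get("dst_port"), pkt.get("ipv6_flow_label"))

      key = flow_key({"ip_src": "192.0.2.1", "ip_dst": "198.51.100.2",
                      "ip_proto": 6, "src_port": 49152,
                      "dst_port": 443})
      print(key)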
4.3.2. Criteria and Techniques for Large Flow Recognition

   From a bandwidth and time duration perspective, in order to
   recognize large flows we define an observation interval and observe
   the bandwidth of the flow over that interval. A flow that exceeds a
   certain minimum bandwidth threshold over that observation interval
   would be considered a large flow.

   The two parameters -- the observation interval and the minimum
   bandwidth threshold over that observation interval -- should be
   programmable to facilitate handling of different use cases and
   traffic characteristics. For example, a flow which is at or above
   10% of link bandwidth for a time period of at least 1 second could
   be declared a large flow [DevoFlow].

   In order to avoid excessive churn in the rebalancing, once a flow
   has been recognized as a large flow, it should continue to be
   recognized as a large flow for as long as the traffic received
   during an observation interval exceeds some fraction of the
   bandwidth threshold, for example 80% of the bandwidth threshold.

   Various techniques to recognize a large flow are described below.

4.3.3. Sampling Techniques

   A number of routers support sampling techniques such as sFlow
   [sFlow-v5, sFlow-LAG], PSAMP [RFC 5475], and NetFlow Sampling
   [RFC 3954]. For the purpose of large flow recognition, sampling
   needs to be enabled on all of the egress ports in the router where
   such measurements are desired.

   Using sFlow as an example, processing in an sFlow collector will
   provide an approximate indication of the large flows mapping to each
   of the component links in each LAG/ECMP group. It is possible to
   implement this part of the collector function in the control plane
   of the router, reducing dependence on an external management
   station, assuming sufficient control plane resources are available.

   If egress sampling is not available, ingress sampling can suffice
   since the central management entity used by the sampling technique
   typically has multi-node visibility and can use the samples from an
   immediately downstream node to make measurements for egress traffic
   at the local node.

   The option of using ingress sampling for this purpose may not be
   available if the downstream device is under the control of a
   different operator, or if the downstream device does not support
   sampling.

   Alternatively, since sampling techniques require that the sample be
   annotated with the packet's egress port information, ingress
   sampling may suffice. However, this means that sampling would have
   to be enabled on all ports, rather than only on those ports where
   such monitoring is desired. There is one situation in which this
   approach may not work. If there are tunnels that originate from the
   given router, and if the resulting tunnel comprises the large flow,
   then this cannot be deduced from ingress sampling at the given
   router. Instead, if egress sampling is unavailable, then ingress
   sampling from the downstream router must be used.

   To illustrate the use of ingress versus egress sampling, we refer to
   Figure 2. Since we are looking at rebalancing flows at R1, we would
   need to enable egress sampling on ports (1), (2), and (3) on R1. If
   egress sampling is not available, and if R2 is also under the
   control of the same administrator, enabling ingress sampling on R2's
   ports (1), (2), and (3) would also work, but it would necessitate
   the involvement of a central management entity in order for R1 to
   obtain large flow information for each of its links. Finally, R1
   can enable ingress sampling on all of its ports (not just the ports
   that are part of the LAG/ECMP group being monitored), and that would
   suffice if the sampling technique annotates the samples with the
   egress port information.

   The advantages and disadvantages of sampling techniques are as
   follows.

   Advantages:

   . Supported in most existing routers.

   . Requires minimal router resources.

   Disadvantages:

   . In order to minimize the error inherent in sampling, there is a
     minimum delay before large flows can be recognized, and a
     corresponding delay before this information can be acted upon.

   With sampling, the detection of large flows can be done on the order
   of one second [DevoFlow]. A discussion on determining the
   appropriate sampling frequency is available in [SAMP-BASIC].
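   The sketch below ties the sampling approach of this section to the
   recognition criteria of Section 4.3.2: flow sizes are estimated by
   scaling 1-in-N packet samples, and hysteresis keeps an
   already-recognized large flow on the list while it stays above a
   fraction of the threshold. This is an illustrative approximation
   only; it ignores the sampling error discussed above.

      from collections import defaultdict

      def recognize_large_flows(samples, sampling_rate_n, link_speed_bps,
                                interval_secs, bw_fraction, known_large,
                                maintain_fraction=0.8):
          """samples: list of (flow_key, packet_bytes) taken over one
          observation interval with 1-in-N sampling; known_large: set
          of flow keys recognized in earlier intervals."""
          est_bytes = defaultdict(int)
          for key, nbytes in samples:
              # Scale each sampled packet up by the sampling rate.
              est_bytes[key] += nbytes * sampling_rate_n
          threshold = bw_fraction * link_speed_bps * interval_secs / 8
          return {key for key, b in est_bytes.items()
                  if b >= threshold
                  or (key in known_large
                      and b >= maintain_fraction * threshold)}

      # 1-in-1000 sampling on a 10 Gbps link, 1 s interval, 10%
      # threshold; flow "f1" was sampled as 200 x 1500-byte packets.
      samples = [("f1", 1500)] * 200 + [("f2", 100)] * 3
      print(recognize_large_flows(samples, 1000, 10e9, 1.0, 0.10,
                                  set()))   # -> {'f1'}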
4.3.4. Inline Data Path Measurement

   Implementations may perform recognition of large flows by performing
   measurements on traffic in the data path of a router. Such an
   approach would be expected to operate at the interface speed on
   every interface, accounting for all packets processed by the data
   path of the router. An example of such an approach is described in
   IPFIX [RFC 5470].

   Using inline data path measurement, a faster and more accurate
   indication of large flows mapped to each of the component links in a
   LAG/ECMP group may be possible (as compared to the sampling-based
   approach).

   The advantages and disadvantages of inline data path measurement
   are:

   Advantages:

   . As link speeds get higher, sampling rates are typically reduced
     to keep the number of samples manageable, which places a lower
     bound on the detection time. With inline data path measurement,
     large flows can be recognized in shorter windows on higher link
     speeds since every packet is accounted for [NDTM].

   . Eliminates the potential dependence on an external management
     station for large flow recognition.

   Disadvantages:

   . It is more resource intensive in terms of the table sizes
     required for monitoring all flows in order to perform the
     measurement.

   As mentioned earlier, the observation interval for determining a
   large flow and the bandwidth threshold for classifying a flow as a
   large flow should be programmable parameters in a router.

   The implementation details of inline data path measurement of large
   flows are vendor dependent and beyond the scope of this document.

4.3.5. Use of Multiple Methods for Large Flow Recognition

   It is possible that a router may have line cards that support a
   sampling technique while other line cards support inline data path
   measurement of large flows. As long as there is a way for the
   router to reliably determine the mapping of large flows to component
   links of a LAG/ECMP group, it is acceptable for the router to use
   more than one method for large flow recognition.

   If both methods are supported, inline data path measurement may be
   preferable because of its speed of detection [FLOW-ACC].

4.4. Load Rebalancing Options

   Below are suggested techniques for load rebalancing. Equipment
   vendors may implement more than one technique, including those not
   described in this document, and allow the operator to choose between
   them.

   Note that regardless of the method used, perfect rebalancing of
   large flows may not be possible since flows arrive and depart at
   different times. Also, any flows that are moved from one component
   link to another may experience momentary packet reordering.

4.4.1. Alternative Placement of Large Flows

   Within a LAG/ECMP group, the member component links with the least
   average port utilization are identified. Some large flow(s) from
   the heavily loaded component links are then moved to those lightly
   loaded member component links using a policy-based routing (PBR)
   rule in the ingress processing element(s) in the routers.

   With this approach, only certain large flows are subjected to
   momentary flow reordering.
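   A minimal placement sketch is given below. It targets the member
   link that would have the least utilization after the move, taking
   the flow's own rate into account (see the discussion of existing and
   future load that follows). Installing the corresponding PBR rule is
   platform specific and is not shown.

      def place_large_flow(flow_rate_bps, link_loads_bps,
                           link_speeds_bps):
          """Return the index of the component link on which to place
          one large flow: the link with the least utilization after
          placement.  link_loads_bps excludes the flow being moved."""
          def util_after(i):
              return ((link_loads_bps[i] + flow_rate_bps) /
                      link_speeds_bps[i])
          return min(range(len(link_loads_bps)), key=util_after)

      # Loads of 6/3/6 Gbps on three 10 Gbps links: a 4 Gbps large
      # flow is steered to link index 1, the lightest after placement.
      print(place_large_flow(4e9, [6e9, 3e9, 6e9], [10e9] * 3))  # -> 1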
   When a large flow is moved, this will increase the utilization of
   the link that it is moved to, potentially creating an imbalance in
   the utilization once again across the component links. Therefore,
   when moving large flows, care must be taken to account for the
   existing load and for what the future load will be after the large
   flow has been moved. Further, the appearance of new large flows may
   require a rearrangement of the placement of existing flows.

   Consider a case where there is a LAG comprising four 10 Gbps
   component links and there are four large flows, each of 1 Gbps.
   These flows are each placed on one of the component links.
   Subsequently, a fifth large flow of 2 Gbps is recognized, and to
   maintain equitable load distribution, it may be necessary to move
   one of the existing 1 Gbps flows to a different component link.
   Even then, there would still be some imbalance in the utilization
   across the component links.

4.4.2. Redistributing Small Flows

   Some large flows may consume the entire bandwidth of the component
   link(s). In this case, it would be desirable for the small flows to
   not use the congested component link(s). This can be accomplished
   in one of the following ways. This method works on some existing
   router hardware. The idea is to prevent, or reduce the probability
   of, the small flows hashing into the congested component link(s).

   . The LAG/ECMP table is modified to include only non-congested
     component link(s). Small flows hash into this table to be mapped
     to a destination component link. Alternatively, if certain
     component links are heavily loaded but not congested, the output
     of the hash function can be adjusted to account for large flow
     loading on each of the component links. (A sketch of this table
     adjustment follows this list.)

   . The PBR rules for large flows (refer to Section 4.4.1) must have
     strict precedence over the LAG/ECMP table lookup result.

   With this approach, the small flows that are moved would be subject
   to reordering.
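   The table adjustment described in the first item above might look as
   follows. This is a simplified sketch: real LAG/ECMP tables are
   hardware structures, and the bucket count (64 here) is an arbitrary
   choice for illustration.

      def rebuild_hash_table(weights, congested, table_size=64):
          """Rebuild a LAG/ECMP hash table, giving congested component
          links weight 0 and spreading the remaining weights over the
          table buckets.  Small flows index the returned table with
          hash(flow_key) % table_size; large-flow PBR rules must still
          take strict precedence over this lookup."""
          w = [0 if i in congested else weights[i]
               for i in range(len(weights))]
          total = sum(w)
          table, acc = [], 0.0
          for link, wi in enumerate(w):
              acc += wi
              while len(table) < round(table_size * acc / total):
                  table.append(link)
          return table

      # Figure 2: exclude congested component link (3) (index 2);
      # small flows now hash only onto links 0 and 1.
      print(rebuild_hash_table([1, 1, 1], congested={2}))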
4.4.3. Component Link Protection Considerations

   If desired, certain component links may be reserved for link
   protection. These reserved component links are not used for any
   flows in the absence of any failures. In the case when the
   component link(s) fail, all the flows on the failed component
   link(s) are moved to the reserved component link(s). The mapping
   table of large flows to component links simply replaces the failed
   component link with the reserved link. Likewise, the LAG/ECMP table
   replaces the failed component link with the reserved link.

4.4.4. Load Rebalancing Algorithms

   Specific algorithms for placement of large flows are out of scope of
   this document. One possibility is to formulate the problem for
   large flow placement as the well-known bin-packing problem and make
   use of the various heuristics that are available for that problem
   [bin-pack].

4.4.5. Load Rebalancing Example

   Optimizing LAG/ECMP component utilization for the use case in Figure
   2 is depicted below in Figure 4. The large flow rebalancing
   explained in Section 4.4 is used. The improved link utilization is
   as follows:

   . Component link (1) has 3 flows -- 2 small flows and 1 large flow
     -- and the link utilization is normal.

   . Component link (2) has 4 flows -- 3 small flows and 1 large flow
     -- and the link utilization is normal now.

   . Component link (3) has 3 flows -- 2 small flows and 1 large flow
     -- and the link utilization is normal now.

           +-----------+     ->     +-----------+
           |           |     ->     |           |
           |           |    ===>    |           |
           |        (1)|------------|(1)        |
           |           |            |           |
           |           |    ===>    |           |
           |           |     ->     |           |
           |           |     ->     |           |
           |   (R1)    |     ->     |    (R2)   |
           |        (2)|------------|(2)        |
           |           |            |           |
           |           |     ->     |           |
           |           |     ->     |           |
           |           |    ===>    |           |
           |        (3)|------------|(3)        |
           |           |            |           |
           +-----------+            +-----------+

           Where: ->   small flow
                  ===> large flow

              Figure 4: Evenly Utilized Component Links

   Basically, the use of the mechanisms described in Section 4.4.1
   resulted in a rebalancing of flows where one of the large flows on
   component link (3), which was previously congested, was moved to
   component link (2), which was previously under-utilized.

5. Information Model for Flow Rebalancing

   In order to support flow rebalancing in a router from an external
   system, the exchange of some information is necessary between the
   router and the external system. This section provides an exemplary
   information model covering the various components needed for the
   purpose. The model is intended to be informational and may be used
   as input for the development of a data model.

5.1. Configuration Parameters for Flow Rebalancing

   The following parameters are required for the configuration of this
   feature:

   . Large flow recognition parameters:

     o Observation interval: The observation interval is the time
       period in seconds over which packet arrivals are observed for
       the purpose of large flow recognition.

     o Minimum bandwidth threshold: The minimum bandwidth threshold
       would be configured as a percentage of link speed and
       translated into a number of bytes over the observation
       interval. A flow for which the number of bytes received, for a
       given observation interval, exceeds this number would be
       recognized as a large flow.

     o Minimum bandwidth threshold for large flow maintenance: The
       minimum bandwidth threshold for large flow maintenance is used
       to provide hysteresis for large flow recognition. Once a flow
       is recognized as a large flow, it continues to be recognized as
       a large flow until it falls below this threshold. This is also
       configured as a percentage of link speed and is typically lower
       than the minimum bandwidth threshold defined above. (A sketch
       of the percentage-to-bytes translation appears at the end of
       this section.)

   . Imbalance threshold: A measure of the deviation of the component
     link utilizations from the utilization of the overall LAG/ECMP
     group. Since component links can be of different speeds, the
     imbalance can be computed as follows. Let the utilization of each
     component link in a LAG/ECMP group with n links of speeds b_1,
     b_2, ..., b_n be u_1, u_2, ..., u_n. The mean utilization is
     computed as

        u_ave = [ (u_1 x b_1) + (u_2 x b_2) + ... + (u_n x b_n) ] /
                [ b_1 + b_2 + ... + b_n ].

     The imbalance is then computed as

        max_{i=1..n} | u_i - u_ave |.

   . Rebalancing interval: The minimum amount of time between
     rebalancing events. This parameter ensures that rebalancing is
     not invoked too frequently, as it impacts packet ordering.

   These parameters may be configured on a system-wide basis, or they
   may apply to an individual LAG. They may also be applied to an ECMP
   group, provided the component links are not shared with any other
   ECMP group.
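   As a small worked example of the recognition parameters above, the
   sketch below translates the percentage-of-link-speed thresholds into
   byte counts per observation interval. The specific numbers are
   illustrative only.

      def thresholds_in_bytes(min_bw_percent, link_speed_bps,
                              interval_secs, maintain_percent=None):
          """Translate the Section 5.1 thresholds, configured as
          percentages of link speed, into byte counts per observation
          interval.  maintain_percent is the lower hysteresis
          threshold for large flow maintenance."""
          def to_bytes(pct):
              return pct / 100.0 * link_speed_bps * interval_secs / 8
          return (to_bytes(min_bw_percent),
                  to_bytes(maintain_percent) if maintain_percent
                  else None)

      # 10% recognition / 8% maintenance on a 10 Gbps link with a
      # 1-second observation interval:
      print(thresholds_in_bytes(10, 10e9, 1.0, maintain_percent=8))
      # -> (125000000.0, 100000000.0), i.e., 125 MB and 100 MB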
5.2. System Configuration and Identification Parameters

   The following parameters are useful for router configuration and
   operation when using the mechanisms in this document.

   . IP address: The IP address of the specific router that the
     feature is being configured on, or that the large flow placement
     is being applied to.

   . LAG ID: Identifies the LAG on a given router. The LAG ID may be
     required when configuring this feature (to apply a specific set
     of large flow identification parameters to the LAG) and will be
     required when specifying flow placement to achieve the desired
     rebalancing.

   . Component Link ID: Identifies the component link within a LAG or
     ECMP group. This is required when specifying flow placement to
     achieve the desired rebalancing.

   . Component Link Weight: The relative weight to be applied to
     traffic for a given component link when using hash-based
     techniques for load distribution.

   . ECMP group: Identifies a particular ECMP group. The ECMP group
     may be required when configuring this feature (to apply a
     specific set of large flow identification parameters to the ECMP
     group) and will be required when specifying flow placement to
     achieve the desired rebalancing. We note that multiple ECMP
     groups can share an overlapping set (or non-overlapping subset)
     of component links. This document does not deal with the
     complexity of addressing such configurations.

   The feature may be configured globally for all LAGs and/or for all
   ECMP groups, or it may be configured specifically for a given LAG or
   ECMP group.

5.3. Information for Alternative Placement of Large Flows

   In cases where large flow recognition is handled by an external
   management station (see Section 4.3.3), an information model for
   flows is required to allow the import of large flow information to
   the router.

   Typical fields used for identifying large flows were discussed in
   Section 4.3.1. The IPFIX information model [RFC 7012] can be
   leveraged for large flow identification.

   Large flow placement is achieved by specifying the relevant flow
   information along with the following:

   . For LAG: The router's IP address, the LAG ID, and the LAG
     component link ID.

   . For ECMP: The router's IP address, the ECMP group, and the ECMP
     component link ID.

   In the case where the ECMP component link itself comprises a LAG, we
   would have to specify the parameters for both the ECMP group as well
   as the LAG to which the large flow is being directed.

5.4. Information for Redistribution of Small Flows

   Redistribution of small flows is done using the following:

   . For LAG: The LAG ID and the component link IDs, along with the
     relative weight of traffic to be assigned to each component link
     ID, are required.

   . For ECMP: The ECMP group and the ECMP nexthops, along with the
     relative weight of traffic to be assigned to each ECMP nexthop,
     are required.

   It is possible to have an ECMP nexthop that itself comprises a LAG.
   In that case, we would have to specify the new weights for both the
   ECMP nexthops within the ECMP group as well as the component links
   within the LAG.

5.5. Export of Flow Information

   Exporting large flow information is required when large flow
   recognition is being done on a router, but the decision to rebalance
   is being made in an external management station. Large flow
   information includes the flow identification and the component link
   ID that the flow is currently assigned to. Other information such
   as flow QoS and bandwidth may be exported too.

   The IPFIX information model [RFC 7012] can be leveraged for large
   flow identification.

5.6. Monitoring Information

5.6.1. Interface (link) utilization

   The incoming bytes (ifInOctets), outgoing bytes (ifOutOctets), and
   interface speed (ifSpeed) can be obtained, for example, from the
   interface table (ifTable) MIB [RFC 1213].

   The link utilization can then be computed as follows:

   Incoming link utilization = (delta_ifInOctets * 8) / (ifSpeed * T)

   Outgoing link utilization = (delta_ifOutOctets * 8) / (ifSpeed * T)

   where T is the interval over which the utilization is being
   measured, delta_ifInOctets is the change in ifInOctets over that
   interval, and delta_ifOutOctets is the change in ifOutOctets over
   that interval.

   For high speed Ethernet links, the etherStatsHighCapacityTable MIB
   [RFC 3273] can be used.

   Similar results may be achieved using the corresponding objects of
   other interface management data models, such as YANG [RFC 7223], if
   those are used instead of MIBs.

   For scalability, it is recommended to use the counter push mechanism
   in [sFlow-v5] for the interface counters. Doing so would help avoid
   counter polling through the MIB interface.

   The outgoing link utilization of the component links within a
   LAG/ECMP group can be used to compute the imbalance (see Section
   5.1) for the LAG/ECMP group.
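   The utilization computation above is mechanical; a sketch follows.
   It assumes two successive readings of the relevant octet counter
   and omits counter-wrap handling for brevity (64-bit counters such
   as ifHCInOctets make wraps rare in practice).

      def link_utilization(octets_t0, octets_t1, if_speed_bps,
                           interval_secs):
          """Utilization from two readings of an interface octet
          counter (e.g., ifInOctets or ifOutOctets), per the formulas
          in Section 5.6.1."""
          delta_octets = octets_t1 - octets_t0
          return (delta_octets * 8) / (if_speed_bps * interval_secs)

      # 7.5e8 octets in 1 second on a 10 Gbps link -> 60% utilization.
      print(link_utilization(0, 7.5e8, 10e9, 1.0))  # -> 0.6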
5.6.2. Other monitoring information

   Additional monitoring information that is useful includes:

   . The number of times rebalancing was done.

   . The time since the last rebalancing event.

   . The number of large flows currently rebalanced by the scheme.

   . A list of the large flows that have been rebalanced, including

     o the rate of each large flow at the time of the last rebalancing
       for that flow,

     o the time that rebalancing was last performed for the given
       large flow, and

     o the interfaces that the large flows were (re)directed to.

   . The settings for the weights of the interfaces within a LAG/ECMP
     group used by the small flows, which depend on hashing.

6. Operational Considerations

6.1. Rebalancing Frequency

   Flows should be rebalanced only when the imbalance in the
   utilization across component links exceeds a certain threshold.
   Frequent rebalancing to achieve precise equitable utilization across
   component links could be counter-productive, as it may result in
   moving flows back and forth between the component links, impacting
   packet ordering and system stability. This applies regardless of
   whether large flows or small flows are redistributed. It should be
   noted that reordering is a concern for TCP flows with even a few
   packets because three out-of-order packets would trigger sufficient
   duplicate ACKs to the sender, resulting in a retransmission
   [RFC 5681].

   The operator would have to experiment with various values of the
   large flow recognition parameters (minimum bandwidth threshold,
   observation interval) and the imbalance threshold across component
   links to tune the solution for their environment.
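   One simple way to enforce both conditions -- an imbalance threshold
   and a minimum spacing between rebalancing events (the rebalancing
   interval of Section 5.1) -- is sketched below. The class and its
   parameter values are illustrative, not prescriptive.

      import time

      class RebalanceGate:
          """Permit a rebalancing event only when the imbalance
          exceeds the threshold AND the configured minimum interval
          has elapsed since the last event, limiting reordering."""
          def __init__(self, imbalance_threshold, min_interval_secs):
              self.threshold = imbalance_threshold
              self.interval = min_interval_secs
              self.last = float("-inf")

          def allow(self, imbalance, now=None):
              now = time.monotonic() if now is None else now
              if (imbalance > self.threshold and
                      now - self.last >= self.interval):
                  self.last = now
                  return True
              return False

      gate = RebalanceGate(imbalance_threshold=0.2, min_interval_secs=30)
      print(gate.allow(0.35, now=0.0))   # True  (first event)
      print(gate.allow(0.40, now=5.0))   # False (too soon to rebalance)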
6.2. Handling Route Changes

   Large flow rebalancing must be aware of any changes to the FIB. In
   cases where the nexthop of a route no longer points to the LAG, or
   to an ECMP group, any PBR entries added as described in Sections
   4.4.1 and 4.4.2 must be withdrawn in order to avoid the creation of
   forwarding loops.

6.3. Forwarding Resources

   Hash-based techniques used for load balancing with LAG/ECMP are
   usually stateless. The mechanisms described in this document
   require additional resources in the forwarding plane of routers for
   creating PBR rules that are capable of overriding the forwarding
   decision from the hash-based approach. These resources may limit
   the number of flows that can be rebalanced and may also impact the
   latency experienced by packets due to the additional lookups that
   are required.

7. IANA Considerations

   This memo includes no request to IANA.

8. Security Considerations

   This document does not directly impact the security of the Internet
   infrastructure or its applications. In fact, it could help mitigate
   a DoS attack pattern that causes a hash imbalance, resulting in
   heavy loading of certain LAG/ECMP component links by large flows.

   An attacker with knowledge of the large flow recognition algorithm
   and any stateless distribution method can generate flows that are
   distributed in a way that overloads a specific path. This could be
   used to cause the creation of PBR rules that exhaust the available
   rule capacity on nodes. If PBR rules are consequently discarded,
   this could result in congestion on the attacker-selected path.
   Alternatively, tracking large numbers of PBR rules could result in
   performance degradation.

9. Contributing Authors

   Sanjay Khanna
   Cisco Systems
   Email: sanjakha@gmail.com

10. Acknowledgements

   The authors would like to thank the following individuals for their
   review and valuable feedback on earlier versions of this document:
   Shane Amante, Fred Baker, Michael Bugenhagen, Zhen Cao, Brian
   Carpenter, Benoit Claise, Michael Fargano, Wes George, Sriganesh
   Kini, Roman Krzanowski, Andrew Malis, Dave McDysan, Pete Moyer,
   Peter Phaal, Dan Romascanu, Curtis Villamizar, Jianrong Wong, George
   Yum, and Weifeng Zhang. As a part of the IETF Last Call process,
   valuable comments were received from Martin Thomson and Carlos
   Pignataro.

11. References

11.1. Normative References

   [802.1AX] IEEE Standards Association, "IEEE Std 802.1AX-2008 IEEE
   Standard for Local and Metropolitan Area Networks - Link
   Aggregation", 2008.

   [RFC 2991] Thaler, D. and C. Hopps, "Multipath Issues in Unicast
   and Multicast," November 2000.
   [RFC 7011] Claise, B., et al., "Specification of the IP Flow
   Information Export (IPFIX) Protocol for the Exchange of IP Traffic
   Flow Information," September 2013.

   [RFC 7012] Claise, B. and B. Trammell, "Information Model for IP
   Flow Information Export (IPFIX)," September 2013.

11.2. Informative References

   [bin-pack] Coffman, Jr., E., M. Garey, and D. Johnson,
   "Approximation Algorithms for Bin-Packing -- An Updated Survey," in
   Algorithm Design for Computer System Design, ed. by Ausiello,
   Lucertini, and Serafini, Springer-Verlag, 1984.

   [CAIDA] "CAIDA Internet Traffic Analysis,"
   http://www.caida.org/home.

   [DevoFlow] Mogul, J., et al., "DevoFlow: Cost-Effective Flow
   Management for High Performance Enterprise Networks," Proceedings
   of ACM SIGCOMM, August 2011.

   [FLOW-ACC] Zseby, T., et al., "Packet Sampling for Flow Accounting:
   Challenges and Limitations," Proceedings of the 9th International
   Conference on Passive and Active Network Measurement, 2008.

   [ID.ietf-rtgwg-cl-requirement] Villamizar, C., et al.,
   "Requirements for MPLS over a Composite Link," September 2013.

   [ITCOM] Jo, J., et al., "Internet Traffic Load Balancing using
   Dynamic Hashing with Flow Volume," SPIE ITCOM, 2002.

   [NDTM] Estan, C. and G. Varghese, "New Directions in Traffic
   Measurement and Accounting," Proceedings of ACM SIGCOMM, August
   2002.

   [NVGRE] Sridharan, M., et al., "NVGRE: Network Virtualization using
   Generic Routing Encapsulation,"
   draft-sridharan-virtualization-nvgre-05, January 2015.

   [RFC 2784] Farinacci, D., et al., "Generic Routing Encapsulation
   (GRE)," March 2000.

   [RFC 6790] Kompella, K., et al., "The Use of Entropy Labels in MPLS
   Forwarding," November 2012.

   [RFC 1213] McCloghrie, K., "Management Information Base for Network
   Management of TCP/IP-based Internets: MIB-II," March 1991.

   [RFC 2992] Hopps, C., "Analysis of an Equal-Cost Multi-Path
   Algorithm," November 2000.

   [RFC 3273] Waldbusser, S., "Remote Network Monitoring Management
   Information Base for High Capacity Networks," July 2002.

   [RFC 3931] Lau, J., Ed., M. Townsley, Ed., and I. Goyret, Ed.,
   "Layer 2 Tunneling Protocol - Version 3," March 2005.

   [RFC 3954] Claise, B., "Cisco Systems NetFlow Services Export
   Version 9," October 2004.

   [RFC 5470] Sadasivan, G., et al., "Architecture for IP Flow
   Information Export," March 2009.

   [RFC 5475] Zseby, T., et al., "Sampling and Filtering Techniques
   for IP Packet Selection," March 2009.

   [RFC 5640] Filsfils, C., P. Mohapatra, and C. Pignataro, "Load
   Balancing for Mesh Softwires," August 2009.

   [RFC 5681] Allman, M., et al., "TCP Congestion Control," September
   2009.

   [RFC 7223] Bjorklund, M., "A YANG Data Model for Interface
   Management," May 2014.

   [SAMP-BASIC] Phaal, P. and S. Panchen, "Packet Sampling Basics,"
   http://www.sflow.org/packetSamplingBasics/.

   [sFlow-v5] Phaal, P. and M. Lavine, "sFlow version 5,"
   http://www.sflow.org/sflow_version_5.txt, July 2004.

   [sFlow-LAG] Phaal, P. and A. Ghanwani, "sFlow LAG Counters
   Structure," http://www.sflow.org/sflow_lag.txt, September 2012.

   [STT] Davie, B., Ed., and J. Gross, "A Stateless Transport
   Tunneling Protocol for Network Virtualization (STT),"
   draft-davie-stt-06, March 2014.
   [RFC 7348] Mahalingam, M., et al., "VXLAN: A Framework for
   Overlaying Virtualized Layer 2 Networks over Layer 3 Networks,"
   August 2014.

   [YONG] Yong, L., "Enhanced ECMP and Large Flow Aware Transport,"
   draft-yong-pwe3-enhance-ecmp-lfat-01, September 2010.

Appendix A. Internet Traffic Analysis and Load Balancing Simulation

   Internet traffic [CAIDA] has been analyzed to obtain flow statistics
   such as the number of packets in a flow and the flow duration. The
   five-tuple in the packet header (IP addresses, TCP/UDP ports, and IP
   protocol) is used for flow identification. The analysis indicates
   that < ~2% of the flows take ~30% of the total traffic volume, while
   the rest of the flows (> ~98%) contribute ~70% [YONG].

   Simulation has shown that, given the Internet traffic pattern, the
   hash-based technique does not evenly distribute the flows over ECMP
   paths. Some paths may be > 90% loaded while others are < 40%
   loaded, and the more ECMP paths there are, the more severe the
   imbalance becomes. This implies that hash-based distribution can
   cause some paths to become congested while other paths are
   underutilized [YONG].

   The simulation also shows substantial improvement from using the
   large-flow-aware hash-based distribution technique described in this
   document. Using the same simulated traffic, the improved
   rebalancing can achieve < 10% load differences among the paths.
   This shows how large-flow-aware hash-based distribution can
   effectively compensate for the uneven load balancing caused by
   hashing and the traffic characteristics [YONG].

Authors' Addresses

   Ram Krishnan
   Brocade Communications
   San Jose, 95134, USA
   Phone: +1-408-406-7890
   Email: ramkri123@gmail.com

   Lucy Yong
   Huawei USA
   5340 Legacy Drive
   Plano, TX 75025, USA
   Phone: +1-469-277-5837
   Email: lucy.yong@huawei.com

   Anoop Ghanwani
   Dell
   San Jose, CA 95134
   Phone: +1-408-571-3228
   Email: anoop@alumni.duke.edu

   Ning So
   Tata Communications
   Plano, TX 75082, USA
   Phone: +1-972-955-0914
   Email: ning.so@tatacommunications.com

   Bhumip Khasnabish
   ZTE Corporation
   New Jersey, 07960, USA
   Phone: +1-781-752-8003
   Email: vumip1@gmail.com