BESS WorkGroup                                          N. Malhotra, Ed.
Internet-Draft                                                A. Sajassi
Intended status: Standards Track                           Cisco Systems
Expires: November 15, 2021                                    J. Rabadan
                                                                   Nokia
                                                                J. Drake
                                                                 Juniper
                                                              A. Lingala
                                                                     ATT
                                                               S. Thoria
                                                           Cisco Systems
                                                            May 14, 2021

         Weighted Multi-Path Procedures for EVPN Multi-Homing
                  draft-ietf-bess-evpn-unequal-lb-14

Abstract

   EVPN enables all-active multi-homing for a CE device connected to
   two or more PEs via a LAG, such that bridged and routed traffic from
   remote PEs to hosts attached to the Ethernet Segment can be equally
   load balanced (using Equal Cost Multi-Path) across the multi-homing
   PEs.  EVPN also enables multi-homing for IP subnets advertised in IP
   Prefix routes, so that routed traffic from remote PEs to those IP
   subnets can be load balanced.  This document defines extensions to
   EVPN procedures to optimally handle unequal access bandwidth
   distribution across a set of multi-homing PEs in order to:

   o  provide greater flexibility, with respect to adding or removing
      individual multi-homed PE-CE links.
   o  handle multi-homed PE-CE link failures that can result in unequal
      PE-CE access bandwidth across a set of multi-homing PEs.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on November 15, 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Requirements Language and Terminology
   2.  Introduction
     2.1.  PE-CE Link Provisioning
     2.2.  PE-CE Link Failures
     2.3.  Design Requirement
   3.  Solution Overview
   4.  EVPN Link Bandwidth Extended Community
     4.1.  Encoding and Usage of EVPN Link Bandwidth Extended Community
     4.2.  Note on BGP Link Bandwidth Extended Community
   5.  Weighted Unicast Traffic Load-balancing to an Ethernet Segment
     5.1.  Egress PE Behavior
     5.2.  Ingress PE Behavior
   6.  Weighted BUM Traffic Load-Sharing across an Ethernet Segment
     6.1.  The BW Capability in the DF Election Extended Community
     6.2.  BW Capability and Default DF Election algorithm
     6.3.  BW Capability and HRW DF Election algorithm (Type 1 and 4)
       6.3.1.  BW Increment
       6.3.2.  HRW Hash Computations with BW Increment
     6.4.  BW Capability and Preference DF Election algorithm
   7.  Cost-Benefit Tradeoff on Link Failures
   8.  Real-time Available Bandwidth
   9.  Weighted Load-balancing to Multi-homed Subnets
   10. Weighted Load-balancing without EVPN aliasing
   11. EVPN-IRB Multi-homing With Non-EVPN routing
   12. Operational Considerations
   13. Security Considerations
   14. IANA Considerations
   15. Acknowledgements
   16. Contributors
   17. References
     17.1.  Normative References
     17.2.  Informative References
   Authors' Addresses

1.  Requirements Language and Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

   "Local PE" in the context of an Ethernet Segment refers to a
   provider edge switch or router that physically hosts the Ethernet
   Segment.

   "Remote PE" in the context of an Ethernet Segment refers to a
   provider edge switch or router in an EVPN overlay whose overlay
   reachability to the Ethernet Segment is via the Local PE.

   o  BW: Bandwidth

   o  LAG: Link Aggregation Group

   o  ES: Ethernet Segment

   o  ESI: Ethernet Segment ID

   o  vES: Virtual Ethernet Segment

   o  EVI: Ethernet Virtual Instance; this is a MAC-VRF.

   o  Path-List: A forwarding object used to load-balance routed or
      bridged traffic across multiple forwarding paths.

   o  Access Bandwidth: Bandwidth of the PE-CE links in an Ethernet
      Segment.

   o  Egress PE: In the context of an Ethernet Segment or a route, the
      PE that advertises a locally attached Ethernet Segment RT-1, or a
      locally attached host or prefix route (RT-2, RT-5).

   o  Ingress PE: In the context of an Ethernet Segment or a route, the
      receiving PE that learns the remote Ethernet Segment RT-1 and/or
      host and prefix routes (RT-2, RT-5) from the Egress PE.

   o  IMET: Inclusive Multicast Route

   o  DF: Designated Forwarder

   o  BDF: Backup Designated Forwarder

   o  DCI: Data Center Interconnect Router

2.  Introduction

   In an EVPN-IRB based network overlay, with a CE multi-homed via EVPN
   all-active multi-homing, bridged and routed traffic from ingress PEs
   can be equally load balanced (ECMPed) across the multi-homing egress
   PEs:

   o  ECMP load-balancing for bridged unicast traffic is enabled via
      the aliasing and mass-withdraw procedures detailed in [RFC7432].

   o  ECMP load-balancing for routed unicast traffic is enabled via
      existing L3 ECMP mechanisms.

   o  Load-sharing of bridged BUM traffic on local ports is enabled via
      the EVPN DF election procedure detailed in [RFC7432].

   All of the above load balancing and DF election procedures
   implicitly assume equal bandwidth distribution between the CE and
   the set of egress PEs.  Essentially, with this assumption of equal
   "access" bandwidth distribution across all egress PEs, ALL remote
   traffic is equally load balanced across the egress PEs.  This
   assumption of equal access bandwidth distribution can be restrictive
   with respect to adding or removing links in a multi-homed LAG
   interface and may also be easily broken by individual link failures.
   A solution to handle unequal access bandwidth distribution across a
   set of egress PEs is proposed in this document.  The primary
   motivation behind this proposal is to enable greater flexibility
   with respect to adding or removing member PE-CE links as needed, and
   to optimally handle PE-CE link failures.
2.1.  PE-CE Link Provisioning

          +------------------------+
          | Underlay Network Fabric|
          +------------------------+

           +-----+        +-----+
           | PE1 |        | PE2 |
           +-----+        +-----+
                \          /
                 \  ES-1  /
                  \      /
                  +\----/+
                  | \  / |
                  +--+--+
                     |
                    CE1

                                 Figure 1

   Consider CE1, which is dual-homed to egress PE1 and egress PE2 via
   EVPN all-active multi-homing, with single member links of equal
   bandwidth to each PE (i.e., equal access bandwidth distribution
   across PE1 and PE2).  If the provider wants to increase link
   bandwidth to CE1, it must add a link to both PE1 and PE2 in order to
   maintain equal access bandwidth distribution and inter-work with
   EVPN ECMP load balancing.  In other words, for a dual-homed CE, the
   total number of CE links must be provisioned in multiples of two (2,
   4, 6, and so on).  For a triple-homed CE, the number of CE links
   must be provisioned in multiples of three (3, 6, 9, and so on).  To
   generalize, for a CE that is multi-homed to "n" PEs, the number of
   PE-CE physical links provisioned must be an integral multiple of
   "n".  This is restrictive in the case of dual-homing and very
   quickly becomes prohibitive in the case of multi-homing.

   Instead, a provider may wish to increase PE-CE bandwidth OR the
   number of links in any link increments.  As an example, for CE1
   dual-homed to egress PE1 and egress PE2 in all-active mode, the
   provider may wish to add a third link to only PE1 to increase the
   total bandwidth for this CE by 50%, rather than being required to
   increase access bandwidth by 100% by adding a link to each of the
   two PEs.  While existing EVPN based all-active load balancing
   procedures do not necessarily preclude such asymmetric access
   bandwidth distribution among the PEs providing redundancy, it may
   result in unexpected traffic loss due to congestion on the access
   interface towards the CE.  This traffic loss is due to the fact that
   PE1 and PE2 will continue to be treated as equal cost paths at
   remote PEs, and as a result may attract approximately equal amounts
   of CE1-destined traffic, even when PE2 has only half the bandwidth
   to CE1 that PE1 has.  This may lead to congestion and traffic loss
   on the PE2-CE1 link.  If the bandwidth distribution to CE1 across
   PE1 and PE2 is 2:1, traffic from remote hosts must also be load
   balanced across PE1 and PE2 in a 2:1 manner.

2.2.  PE-CE Link Failures

   More importantly, the unequal PE-CE bandwidth distribution described
   above may occur during regular operation following a link failure,
   even when PE-CE links were provisioned to provide equal bandwidth
   distribution across the multi-homing PEs.

          +------------------------+
          | Underlay Network Fabric|
          +------------------------+

           +-----+        +-----+
           | PE1 |        | PE2 |
           +-----+        +-----+
              \\            //
               \\   ES-1   //
                \\        /X
                +\\------//+
                |  \\   // |
                +----+-----+
                     |
                    CE1

                                 Figure 2

   Consider a CE1 that is multi-homed to egress PE1 and egress PE2 via
   a LAG with two member links to each PE.  On a PE2-CE1 physical link
   failure, the LAG represented by Ethernet Segment ES-1 on PE2 stays
   up; however, its bandwidth is cut in half.  With existing ECMP
   procedures, both PE1 and PE2 may continue to attract equal amounts
   of traffic from remote PEs, even when PE1 has double the bandwidth
   to CE1.
   If the bandwidth distribution to CE1 across PE1 and PE2 is 2:1,
   traffic from remote hosts must also be load balanced across PE1 and
   PE2 in a 2:1 manner to avoid unexpected congestion and traffic loss
   on the PE2-CE1 links within the LAG.  As an alternative, min-link on
   LAGs is sometimes used to bring down the LAG interface on member
   link failures.  This, however, results in loss of available
   bandwidth in the network and is not ideal.

2.3.  Design Requirement

        +-----------------------+
        |Underlay Network Fabric|
        +-----------------------+

   +-----+  +-----+       +-----+  +-----+
   | PE1 |  | PE2 | ..... | PEx |  | PEn |
   +-----+  +-----+       +-----+  +-----+
       \       \            //      //
        \ L1    \ L2       // Lx   // Ln
         \       \         //      //
       +--\-------\-------//------//-+
       |   \       \ ES-1 //      // |
       +-----------------------------+
                      |
                      CE

                                 Figure 3

   To generalize, if the total link bandwidth to a CE is distributed
   across "n" egress PEs, with Lx being the total bandwidth to PEx
   across all links, traffic from ingress PEs to this CE must be load
   balanced unequally across the egress PE set [PE1, PE2, ....., PEn]
   such that the fraction of total unicast and BUM flows destined for
   the CE that are serviced by egress PEx is:

      Lx / (L1 + L2 + ..... + Ln)

   Figure 3 illustrates a scenario where egress PE1..PEn are attached
   to a multi-homed Ethernet Segment; however, this document
   generalizes this requirement so that the unequal load balancing can
   be applied to PEs attached to a vES or to a multi-homed subnet
   advertised by EVPN IP Prefix routes.

   The solution proposed below includes extensions to EVPN procedures
   to achieve the above.  The following assumptions apply to the
   procedures described in this document:

   o  For procedures related to bridged unicast and BUM traffic, EVPN
      all-active multi-homing is assumed.

   o  Procedures related to bridged unicast and BUM traffic are
      applicable to both aliasing and non-aliasing modes as defined in
      [RFC7432].

3.  Solution Overview

   In order to achieve weighted load balancing to an ES or vES for
   overlay unicast traffic, the Ethernet A-D per ES route (EVPN Route
   Type 1) is leveraged to signal the Ethernet Segment weight to
   ingress PEs.  Using the Ethernet A-D per ES route to signal the
   Ethernet Segment weight provides a mechanism that reacts to changes
   in access bandwidth or in the number of access links in a service
   and host independent manner.  Ingress PEs computing the MAC path-
   lists based on global and aliasing Ethernet A-D routes now have the
   ability to set up weighted load balancing path-lists based on the ES
   access bandwidth or the number of links received from each egress PE
   that the ES is multi-homed to.

   In order to achieve weighted load balancing of overlay BUM traffic,
   the EVPN ES route (Route Type 4) is leveraged to signal the ES
   weight to egress PEs within an ES's redundancy group in order to
   influence per-service DF election.  Egress PEs in an ES redundancy
   group now have the ability to do service carving in proportion to
   each egress PE's relative ES weight.

   Unequal load balancing to multi-homed subnets is achieved by
   signaling the weight along with the IP Prefix routes advertised for
   the subnet.

   Procedures to accomplish this are described in greater detail next.
4.  EVPN Link Bandwidth Extended Community

   A new EVPN Link Bandwidth extended community is defined for the
   solution specified in this document:

   o  This extended community is defined to be of type 0x06 (EVPN).

   o  IANA is requested to assign a sub-type value of 0x10 for the EVPN
      Link Bandwidth extended community, of type 0x06 (EVPN).

   o  The EVPN Link Bandwidth extended community is defined as
      transitive.

4.1.  Encoding and Usage of EVPN Link Bandwidth Extended Community

   The value field of the EVPN Link Bandwidth extended community is
   used to carry the total bandwidth of all of the egress PE's physical
   links in an Ethernet Segment, expressed in Mbps (megabits per
   second) and represented as an unsigned integer.  Note, however, that
   the load balancing algorithm defined in this document uses the ratio
   of link bandwidths.  Hence, the operator may choose a different unit
   or use the community as a generalized weight that may be set to the
   link count, a locally configured weight, or a value computed based
   on something other than link bandwidth.  In such a case, the
   operator MUST ensure consistent usage of the unit across all egress
   PEs in an Ethernet Segment.  This may involve multiple routing
   domains / Autonomous Systems.

   In order to facilitate this, as well as to avoid interop issues
   caused by provisioning errors, one octet in the extended community's
   six-octet 'value' field is used to explicitly signal whether the
   weight encoded in the remaining five octets is link bandwidth
   expressed in Mbps or a generalized weight value.  This results in
   the following encoding for the EVPN Link Bandwidth extended
   community:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |     Type      |   Sub-Type    |  Value-Units  |               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               +
   |                          Value-Weight                         |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                                 Figure 4

   Value-Units is encoded as:

   o  0x00: weight expressed using the default units of Mbps

   o  0x01: generalized weight expressed in something other than Mbps

   Generalized weight units are intentionally left arbitrary to allow
   for flexibility in their usage for different applications, without
   having to define a new encoding for each non-default application.
   Implementations SHOULD support the default units of Mbps, while
   support of the non-default generalized weight is considered
   optional.

   Additionally, the following considerations apply to the handling of
   this extended community at the ingress PE:

   o  An ingress PE MUST check for a consistent 'Value-Units' value
      received in the EVPN Link Bandwidth extended community from each
      egress PE in an Ethernet Segment.  In case of any inconsistency
      in 'Value-Units' across egress PEs in an Ethernet Segment, this
      EVPN Link Bandwidth extended community is to be ignored.

   o  An ingress PE MUST ensure that each route contains only a single
      instance of this extended community sub-type.  In case of more
      than one instance, this EVPN Link Bandwidth extended community is
      to be ignored.
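   As a non-normative illustration of the encoding above, the following
   Python sketch packs and unpacks the 8-octet extended community.  It
   assumes the sub-type value of 0x10 requested from IANA (not yet
   assigned); the function names are illustrative only:

      EVPN_TYPE = 0x06          # transitive EVPN extended community
      LINK_BW_SUBTYPE = 0x10    # sub-type requested from IANA
      UNITS_MBPS = 0x00         # Value-Units: default, weight in Mbps
      UNITS_GENERALIZED = 0x01  # Value-Units: generalized weight

      def encode_link_bw(weight, units=UNITS_MBPS):
          """Pack Type, Sub-Type, Value-Units (1 octet), and
          Value-Weight (5 octets, unsigned) into 8 octets."""
          if not 0 <= weight < 2 ** 40:
              raise ValueError("Value-Weight must fit in 5 octets")
          return bytes([EVPN_TYPE, LINK_BW_SUBTYPE, units]) + \
              weight.to_bytes(5, "big")

      def decode_link_bw(ec):
          """Return (units, weight) from the 8-octet community."""
          if len(ec) != 8 or ec[0] != EVPN_TYPE \
                  or ec[1] != LINK_BW_SUBTYPE:
              raise ValueError("not an EVPN Link Bandwidth community")
          return ec[2], int.from_bytes(ec[3:], "big")

      # A PE with 10 Gbps of ES access bandwidth, in default Mbps units:
      assert decode_link_bw(encode_link_bw(10000)) == (UNITS_MBPS, 10000)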
4.2.  Note on BGP Link Bandwidth Extended Community

   The Link Bandwidth extended community described in [BGP-LINK-BW] for
   layer 3 VPNs was considered for re-use here.  That extended
   community is, however, defined in [BGP-LINK-BW] as optional non-
   transitive.  Since it is not possible to change the deployed
   behavior of the extended community defined in [BGP-LINK-BW], it was
   decided to define a new one.  In inter-AS scenarios, link bandwidth
   needs to be signaled to eBGP neighbors.  When signaled across AS
   boundaries, this extended community can be used to achieve optimal
   load balancing towards egress PEs in a different AS.  This is
   applicable both when the next hop is changed and when it is
   unchanged across AS boundaries.

5.  Weighted Unicast Traffic Load-balancing to an Ethernet Segment

5.1.  Egress PE Behavior

   A PE that is part of an Ethernet Segment's redundancy group SHOULD
   advertise an additional "EVPN Link Bandwidth" extended community
   with the Ethernet A-D per ES route (EVPN Route Type 1) that carries
   the total bandwidth of the PE's physical links in the Ethernet
   Segment, or a generalized weight.  The new EVPN Link Bandwidth
   extended community defined in this document is used for this
   purpose.

   The EVPN Link Bandwidth extended community SHOULD NOT be attached to
   the per-EVI RT-1 or to the EVPN RT-2.

5.2.  Ingress PE Behavior

   An ingress PE MUST ensure that the EVPN Link Bandwidth extended
   community is received from all the egress PEs in an Ethernet Segment
   and check for a consistent 'Value-Units' value received from each
   egress PE in the Ethernet Segment.  In case of a missing EVPN Link
   Bandwidth extended community OR inconsistent 'Value-Units' from any
   of the egress PEs in an Ethernet Segment, this EVPN Link Bandwidth
   extended community is to be ignored by the ingress PE, and the
   ingress PE is to follow regular ECMP forwarding to that Ethernet
   Segment.

   Once consistency of 'Value-Units' is validated, the ingress PE
   SHOULD use the 'Value-Weight' received from each egress PE to
   compute a relative (normalized) weight for each egress PE, per ES,
   and then use this relative weight to compute a weighted path-list to
   be used for load balancing, as opposed to using an ECMP path-list
   for load balancing across the egress PE paths.  The egress PE weight
   and the resulting weighted path-list computation at ingress PEs is a
   local matter.  An example computation algorithm is shown below to
   illustrate the idea:

   if,

      L(x,y) : link bandwidth advertised by egress PE-x for ES-y

      W(x,y) : normalized weight assigned to egress PE-x for ES-y

      H(y)   : Highest Common Factor (HCF) of
               [L(1,y), L(2,y), ....., L(n,y)]

   then, the normalized weight assigned to egress PE-x for ES-y may be
   computed as follows:

      W(x,y) = L(x,y) / H(y)

   For a MAC+IP route (EVPN Route Type 2) received with ES-y, the
   ingress PE may compute the MAC and IP forwarding path-list weighted
   by the above normalized weights.

   As an example, for a CE multi-homed to PE-1, PE-2, and PE-3 via 2,
   1, and 1 GE physical links respectively, as part of a LAG
   represented by ES-10:

      L(1,10) = 2000 Mbps

      L(2,10) = 1000 Mbps

      L(3,10) = 1000 Mbps

      H(10)   = 1000

   The normalized weights assigned to each egress PE for ES-10 are as
   follows:

      W(1,10) = 2000 / 1000 = 2

      W(2,10) = 1000 / 1000 = 1

      W(3,10) = 1000 / 1000 = 1

   For a remote MAC+IP host route received with ES-10, the forwarding
   load balancing path-list may now be computed as [PE-1, PE-1, PE-2,
   PE-3] instead of [PE-1, PE-2, PE-3].  This results in load balancing
   of all traffic destined for ES-10 across the three egress PEs in
   proportion to the ES-10 bandwidth at each egress PE.
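   The normalized-weight computation above is a Highest-Common-Factor
   (greatest common divisor) reduction, with each PE repeated W(x,y)
   times in the resulting path-list.  A non-normative Python sketch of
   the ES-10 example, with illustrative names only (sorting by PE name
   here is just to make the example deterministic):

      from functools import reduce
      from math import gcd

      def weighted_path_list(link_bw):
          """link_bw maps egress PE name -> advertised 'Value-Weight'
          for one ES.  W(x,y) = L(x,y) / H(y), where H(y) is the HCF
          of all weights; each PE appears W(x,y) times in the list."""
          hcf = reduce(gcd, link_bw.values())  # H(y)
          return [pe for pe, bw in sorted(link_bw.items())
                     for _ in range(bw // hcf)]

      # ES-10 example: PE-1 with 2 GE, PE-2 and PE-3 with 1 GE each
      print(weighted_path_list(
          {"PE-1": 2000, "PE-2": 1000, "PE-3": 1000}))
      # -> ['PE-1', 'PE-1', 'PE-2', 'PE-3']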
   Weighted path-list computation must only be done for an ES if the
   EVPN Link Bandwidth extended community is received from all of the
   egress PEs advertising reachability to that ES via the Ethernet A-D
   per ES Route Type 1.  In the unlikely event that the EVPN Link
   Bandwidth extended community is not received from one or more egress
   PEs, the forwarding path-list should be computed using regular ECMP
   semantics.  Note that a default weight cannot be assumed for an
   egress PE that does not advertise its link bandwidth, as the weight
   to be used in the path-list computation is relative.

   If the per-ES RT-1 is not advertised by, or is withdrawn from, any
   of the egress PEs, then as per [RFC7432], that egress PE is removed
   from the forwarding path-list for that [EVI, ES].  Hence, the
   weighted path-list MUST be re-computed.

   In the unlikely scenario that the per-[ES, EVI] RT-1 is not
   advertised by one or more of the egress PEs, then as per [RFC7432],
   those egress PEs are not included in the forwarding path-list for
   that [EVI, ES].  Hence, the weighted path-list for the [EVI, ES]
   MUST be computed based only on the weights received from egress PEs
   that advertised the per-[ES, EVI] RT-1.

6.  Weighted BUM Traffic Load-Sharing across an Ethernet Segment

   Optionally, load sharing of the per-service DF role, weighted by
   each individual egress PE's link-bandwidth share within a multi-
   homed ES, may also be achieved.

   In order to do that, a new DF Election Capability [RFC8584] called
   "BW" (Bandwidth Weighted DF Election) is defined.  BW MAY be used
   along with some DF Election Types, as described in the following
   sections.

6.1.  The BW Capability in the DF Election Extended Community

   [RFC8584] defines a new extended community for PEs within a
   redundancy group to signal and agree on a uniform DF Election Type
   and Capabilities for each ES.  This document requests that IANA
   assign a bit in the DF Election extended community Bitmap:

      Bit 28: BW (Bandwidth Weighted DF Election)

   ES routes advertised with the BW bit set indicate the desire of the
   advertising egress PE to consider the link bandwidth in the DF
   Election algorithm defined by the value in the "DF Type".

   As per [RFC8584], all the egress PEs in the ES MUST advertise the
   same Capabilities and DF Type; otherwise, the PEs will fall back to
   the Default [RFC7432] DF Election procedure.

   The BW Capability MAY be advertised with the following DF Types:

   o  Type 0: Default DF Election algorithm, as in [RFC7432]

   o  Type 1: HRW algorithm, as in [RFC8584]

   o  Type 2: Preference algorithm, as in [EVPN-DF-PREF]

   o  Type 4: HRW per-multicast flow DF Election, as in
      [EVPN-PER-MCAST-FLOW-DF]

   The following sections describe how the DF Election procedures are
   modified for the above DF Types when the BW Capability is used.

6.2.  BW Capability and Default DF Election algorithm

   When all the PEs in the Ethernet Segment (ES) agree to use the BW
   Capability with DF Type 0, the Default DF Election procedure defined
   in [RFC7432] is modified as follows:

   o  Each PE advertises an "EVPN Link Bandwidth" extended community
      along with the ES route to signal the PE-CE link bandwidth (LBW)
      for the ES.

   o  A receiving egress PE MUST use the ES link bandwidth extended
      community received from each egress PE to compute a relative
      weight for each egress PE in the Ethernet Segment.

   o  The DF Election procedure MUST now use this weighted list of
      egress PEs to compute the per-VLAN Designated Forwarder, such
      that the DF role is distributed in proportion to this normalized
      weight.  As a result, a single PE may have multiple ordinals in
      the DF candidate PE list, and 'N' used in the (V mod N) operation
      defined in [RFC7432] is modified to be the total number of
      ordinals instead of the total number of egress PEs in the
      Ethernet Segment.

   Considering the same example as in Section 5.2, the candidate PE
   list for DF election is:

      [PE-1, PE-1, PE-2, PE-3]

   The DF for a given VLAN-a on ES-10 is now computed as (VLAN-a mod
   4).  This results in the DF role being distributed across PE1, PE2,
   and PE3 in proportion to each PE's normalized weight for ES-10.
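   A non-normative sketch of the modified (V mod N) selection, assuming
   the weighted ordinal list is built from the [RFC7432] candidate list
   ordering with each PE repeated once per unit of its normalized
   weight:

      def elect_default_df(vlan, weighted_ordinals):
          """Default DF election with the BW Capability: 'N' is the
          number of ordinals in the weighted candidate list, not the
          number of PEs, so a PE holding more ordinals is the DF for
          proportionally more VLANs."""
          return weighted_ordinals[vlan % len(weighted_ordinals)]

      ordinals = ["PE-1", "PE-1", "PE-2", "PE-3"]  # ES-10, weights 2:1:1
      print(elect_default_df(10, ordinals))        # 10 mod 4 = 2 -> PE-2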
6.3.  BW Capability and HRW DF Election algorithm (Type 1 and 4)

   [RFC8584] introduces the Highest Random Weight (HRW) algorithm (DF
   Type 1) for DF election in order to solve the potential DF election
   skew that depends on the Ethernet tag space distribution.
   [EVPN-PER-MCAST-FLOW-DF] further extends the HRW algorithm for per-
   multicast-flow based hash computations (DF Type 4).  This section
   describes extensions to the HRW algorithm for EVPN DF Election
   specified in [RFC8584] and in [EVPN-PER-MCAST-FLOW-DF] in order to
   achieve a DF election distribution that is weighted by link
   bandwidth.

6.3.1.  BW Increment

   A new variable called the "bandwidth increment" is computed for each
   [PE, ES] advertising the ES link bandwidth extended community as
   follows:

   In the context of an ES,

      L(i)   = link bandwidth advertised by PE(i) for this ES

      L(min) = lowest link bandwidth advertised across all PEs for
               this ES

   The bandwidth increment "b(i)" for a given PE(i) advertising a link
   bandwidth of L(i) is defined as an integer value computed as:

      b(i) = L(i) / L(min)

   As an example, with PE(1) = 10, PE(2) = 10, PE(3) = 20, the
   bandwidth increment for each PE would be computed as:

      b(1) = 1, b(2) = 1, b(3) = 2

   With PE(1) = 10, PE(2) = 10, PE(3) = 10, the bandwidth increment for
   each PE would be computed as:

      b(1) = 1, b(2) = 1, b(3) = 1

   Note that the bandwidth increment must always be an integer,
   including in the unlikely scenario of a PE's link bandwidth not
   being an exact multiple of L(min).  If it computes to a non-integer
   value (including as a result of link failure), it MUST be rounded
   down to an integer.

6.3.2.  HRW Hash Computations with BW Increment

   The HRW algorithm, as described in [RFC8584] and in
   [EVPN-PER-MCAST-FLOW-DF], computes a random hash value for each
   PE(i), where (0 < i <= N), PE(i) is the PE at ordinal i, and
   Address(i) is the IP address of PE(i).

   For 'N' PEs sharing an Ethernet Segment, this results in 'N'
   candidate hash computations.  The PE that has the highest hash value
   is selected as the DF.

   We refer to this hash value as the "affinity" in this document.  The
   hash or affinity computation for each PE(i) is extended to be
   computed once per bandwidth increment associated with PE(i), instead
   of a single affinity computation per PE(i).

   PE(i) with b(i) = j results in j affinity computations:

      affinity(i, x), where 1 <= x <= j

   This essentially results in a number of candidate HRW hash
   computations for each PE that is directly proportional to that PE's
   relative bandwidth within the ES, and hence gives PE(i) a
   probability of being the DF in proportion to its relative bandwidth
   within the ES.

   As an example, consider an ES that is multi-homed to two PEs, PE1
   and PE2, with equal bandwidth distribution across PE1 and PE2.  This
   would result in a total of two candidate hash computations:

      affinity(PE1, 1)

      affinity(PE2, 1)

   Now, consider a scenario with PE1's link bandwidth being 2x that of
   PE2.  This would result in a total of three candidate hash
   computations to be used for DF election:

      affinity(PE1, 1)

      affinity(PE1, 2)

      affinity(PE2, 1)

   which gives PE1 a 2/3 probability of being elected as the DF, in
   proportion to its relative bandwidth in the ES.

   Depending on the chosen HRW hash function, the affinity function
   MUST be extended to include the bandwidth increment in the
   computation.

   For example, the affinity function specified in
   [EVPN-PER-MCAST-FLOW-DF] MAY be extended as follows to incorporate
   the bandwidth increment j:

      affinity(S, G, V, ESI, Address(i,j)) =
         (1103515245.((1103515245.Address(i).j + 12345) XOR
          D(S,G,V,ESI)) + 12345) (mod 2^31)

   The affinity or random function specified in [RFC8584] MAY be
   extended as follows to incorporate the bandwidth increment j:

      affinity(v, Es, Address(i,j)) =
         (1103515245.((1103515245.Address(i).j + 12345) XOR
          D(v,Es)) + 12345) (mod 2^31)
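   A non-normative Python sketch of the extended [RFC8584]-style
   election follows.  It assumes PE addresses and the per-(v, Es)
   digest D(v,Es) are available as integers, interprets the "."
   notation above as multiplication, and assumes an (improbable)
   affinity tie is broken in favor of the numerically lowest PE
   address; the digest value and names are illustrative:

      def affinity(address, j, d):
          """Extended affinity of Section 6.3.2: the bandwidth-
          increment ordinal j is folded into the hash."""
          return (1103515245 * ((1103515245 * address * j + 12345) ^ d)
                  + 12345) % (2 ** 31)

      def elect_hrw_df(d, link_bw):
          """link_bw maps PE address (int) -> advertised bandwidth.
          Each PE gets b(i) = floor(L(i) / L(min)) candidate affinity
          computations; the highest affinity wins."""
          l_min = min(link_bw.values())
          best_addr, best_aff = None, -1
          for addr, bw in link_bw.items():
              for j in range(1, bw // l_min + 1):  # b(i), rounded down
                  a = affinity(addr, j, d)
                  if a > best_aff or (a == best_aff and addr < best_addr):
                      best_aff, best_addr = a, addr
          return best_addr

      # PE1 (10.0.0.1) with 2x the bandwidth of PE2 (10.0.0.2) gets two
      # of the three candidate computations for this digest value:
      print(elect_hrw_df(0x5EED, {0x0A000001: 20, 0x0A000002: 10}))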
6.4.  BW Capability and Preference DF Election algorithm

   This section applies to ESes where all the PEs in the ES agree to
   use the BW Capability with DF Type 2.  The BW Capability modifies
   the Preference DF Election procedure [EVPN-DF-PREF] by adding the
   LBW value as a tie-breaker as follows:

   Section 4.1, bullet (f) in [EVPN-DF-PREF] now considers the LBW
   value:

   f) In case of equal Preference in two or more PEs in the ES, the
      tie-breakers will be the DP bit, the LBW value, and the lowest PE
      IP address, in that order.  For instance:

   o  If vES1 parameters were [Pref=500,DP=0,LBW=1000] in PE1 and
      [Pref=500,DP=1,LBW=2000] in PE2, PE2 would be elected due to the
      DP bit.

   o  If vES1 parameters were [Pref=500,DP=0,LBW=1000] in PE1 and
      [Pref=500,DP=0,LBW=2000] in PE2, PE2 would be elected due to the
      higher LBW, even if PE1's IP address is lower.

   o  The LBW exchanged value has no impact on the Non-Revertive option
      described in [EVPN-DF-PREF].
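   The amended tie-breaking order reduces to a single comparison key:
   highest Preference, then the DP bit, then the LBW value, then the
   lowest IP address.  A non-normative sketch using a hypothetical
   per-PE (pref, dp, lbw, ip) tuple layout:

      def elect_preference_df(candidates):
          """candidates are hypothetical (pref, dp, lbw, ip) tuples;
          the winner has the highest Preference, then DP bit, then
          LBW, then the lowest IP address (Section 6.4, bullet f)."""
          return min(candidates,
                     key=lambda c: (-c[0], -c[1], -c[2], c[3]))

      pe1 = (500, 0, 1000, 0x0A000001)  # [Pref=500,DP=0,LBW=1000]
      pe2 = (500, 0, 2000, 0x0A000002)  # [Pref=500,DP=0,LBW=2000]
      assert elect_preference_df([pe1, pe2]) == pe2  # PE2 wins on LBW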
7.  Cost-Benefit Tradeoff on Link Failures

   While incorporating link bandwidth into the DF election process
   provides optimal BUM traffic distribution across the ES links, it
   also implies that DF elections are re-adjusted on link failures or
   bandwidth changes.  If the operator does not wish to have this level
   of churn in their DF election, then they should not advertise the BW
   capability.  Not advertising the BW capability may result in less
   than optimal BUM traffic distribution, while still retaining the
   ability for an ingress PE to do weighted ECMP for its unicast
   traffic to a set of egress PEs.

8.  Real-time Available Bandwidth

   PE-CE link bandwidth availability may sometimes vary in real time,
   disproportionately across the PE-CE links within a multi-homed ES,
   due to various factors such as flow-based hashing combined with fat
   flows and unbalanced hashing.  Reacting to real-time available
   bandwidth is at this time outside the scope of this document.

9.  Weighted Load-balancing to Multi-homed Subnets

   The EVPN Link Bandwidth extended community may also be used to
   achieve unequal load balancing of prefix-routed traffic by including
   this extended community in the EVPN Route Type 5.  When included in
   an EVPN RT-5, its value is to be interpreted as the egress PE's
   relative weight for the prefix carried in that RT-5.  The ingress PE
   will then compute the forwarding path-list for the prefix route
   using the weighted paths received from each egress PE.

10.  Weighted Load-balancing without EVPN aliasing

   [RFC7432] defines the per-[ES, EVI] RT-1 based EVPN aliasing
   procedure as an optional procedure.  In the unlikely scenario where
   an EVPN implementation does not support the EVPN aliasing
   procedures, the MAC forwarding path-list at the ingress PE is
   computed based on the per-ES RT-1 and RT-2 routes received from the
   egress PEs, instead of the per-ES RT-1 and per-[ES, EVI] RT-1
   routes.  In such a case, only the weights received via the per-ES
   RT-1 from the egress PEs included in the MAC path-list are to be
   considered for the weighted path-list computation.

11.  EVPN-IRB Multi-homing With Non-EVPN routing

   EVPN-LAG based multi-homing on an IRB gateway may also be deployed
   together with non-EVPN routing, such as global routing or an L3VPN
   routing control plane.  The key property that differentiates this
   set of use cases from the EVPN-IRB use cases discussed earlier is
   that the EVPN control plane is used only to enable LAG-interface-
   based multi-homing and NOT as an overlay VPN control plane.
   Applicability of the weighted ECMP procedures proposed in this
   document to this set of use cases is an area of further
   consideration beyond the scope of this document.

12.  Operational Considerations

   None.

13.  Security Considerations

   This document raises no new security issues for EVPN.

14.  IANA Considerations

   [RFC8584] defines a new extended community for egress PEs within a
   redundancy group to signal and agree on a uniform DF Election Type
   and Capabilities for each ES.  This document requests that IANA
   assign a bit in the DF Election extended community Bitmap:

      Bit 28: BW (Bandwidth Weighted DF Election)

   A new EVPN Link Bandwidth extended community is defined to signal
   the local ES link bandwidth to ingress PEs.  This extended community
   is defined to be of type 0x06 (EVPN).  IANA is requested to assign a
   sub-type value of 0x10 for the EVPN Link Bandwidth extended
   community, of type 0x06 (EVPN).  The EVPN Link Bandwidth extended
   community is defined as transitive.

   IANA is requested to set up a registry called "Value-Units" for the
   1-octet field in the EVPN Link Bandwidth extended community.  New
   registrations will be made through the "RFC Required" procedure
   defined in [RFC8126].  The initial values in that registry are:

      Value    Name                        Reference
      -----    -----------------------     -------------
      0        Weight in units of Mbps     This document
      1        Generalized Weight          This document
      2-255    Unassigned
15.  Acknowledgements

   The authors would like to thank Satya Mohanty for his valuable
   review of, and inputs to, the HRW and weighted HRW algorithm
   refinements proposed in this document.  The authors would also like
   to thank Bruno Decraene and Sergey Fomin for their valuable review
   and comments.

16.  Contributors

   Satya Ranjan Mohanty
   Cisco Systems
   US

   Email: satyamoh@cisco.com

17.  References

17.1.  Normative References

   [EVPN-DF-PREF]
              Rabadan, J., Sathappan, S., Przygienda, T., Lin, W.,
              Drake, J., Sajassi, A., and S. Mohanty, "Preference-based
              EVPN DF Election", draft-ietf-bess-evpn-pref-df-06 (work
              in progress), June 2020.

   [EVPN-PER-MCAST-FLOW-DF]
              Sajassi, A., Mishra, M., Thoria, S., Rabadan, J., and J.
              Drake, "Per multicast flow Designated Forwarder Election
              for EVPN", draft-ietf-bess-evpn-per-mcast-flow-df-
              election-04 (work in progress), August 2020.

   [EVPN-VIRTUAL-ES]
              Sajassi, A., Brissette, P., Schell, R., Drake, J., and J.
              Rabadan, "EVPN Virtual Ethernet Segment", draft-ietf-
              bess-evpn-virtual-eth-segment-06 (work in progress),
              March 2020.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC7432]  Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A.,
              Uttaro, J., Drake, J., and W. Henderickx, "BGP MPLS-Based
              Ethernet VPN", RFC 7432, DOI 10.17487/RFC7432, February
              2015, <https://www.rfc-editor.org/info/rfc7432>.

   [RFC7814]  Xu, X., Jacquenet, C., Raszuk, R., Boyes, T., and B. Fee,
              "Virtual Subnet: A BGP/MPLS IP VPN-Based Subnet Extension
              Solution", RFC 7814, DOI 10.17487/RFC7814, March 2016,
              <https://www.rfc-editor.org/info/rfc7814>.

   [RFC8126]  Cotton, M., Leiba, B., and T. Narten, "Guidelines for
              Writing an IANA Considerations Section in RFCs", BCP 26,
              RFC 8126, DOI 10.17487/RFC8126, June 2017,
              <https://www.rfc-editor.org/info/rfc8126>.

   [RFC8584]  Rabadan, J., Ed., Mohanty, S., Sajassi, A., Drake, J.,
              Nagaraj, K., and S. Sathappan, "Framework for Ethernet
              VPN Designated Forwarder Election Extensibility",
              RFC 8584, DOI 10.17487/RFC8584, April 2019,
              <https://www.rfc-editor.org/info/rfc8584>.

17.2.  Informative References

   [BGP-LINK-BW]
              Mohapatra, P. and R. Fernando, "BGP Link Bandwidth
              Extended Community", draft-ietf-idr-link-bandwidth-07
              (work in progress), March 2019.

Authors' Addresses

   Neeraj Malhotra (editor)
   Cisco Systems
   170 W. Tasman Drive
   San Jose, CA 95134
   USA

   Email: nmalhotr@cisco.com

   Ali Sajassi
   Cisco Systems
   170 W. Tasman Drive
   San Jose, CA 95134
   USA

   Email: sajassi@cisco.com

   Jorge Rabadan
   Nokia
   777 E. Middlefield Road
   Mountain View, CA 94043
   USA

   Email: jorge.rabadan@nokia.com

   John Drake
   Juniper

   Email: jdrake@juniper.net

   Avinash Lingala
   ATT
   200 S. Laurel Avenue
   Middletown, NJ 07748
   USA

   Email: ar977m@att.com

   Samir Thoria
   Cisco Systems
   170 W. Tasman Drive
   San Jose, CA 95134
   USA

   Email: sthoria@cisco.com