BESS Working Group                                            A. Sajassi
Internet Draft                                                 S. Thoria
Category: Standards Track                                  N. Fazlollahi
                                                                   Cisco
                                                                A. Gupta
                                                            Avi Networks

Expires: January 2, 2018                                    July 2, 2017


     Seamless Multicast Interoperability between EVPN and MVPN PEs
          draft-sajassi-bess-evpn-mvpn-seamless-interop-00.txt

Abstract

The Ethernet Virtual Private Network (EVPN) solution is becoming
pervasive for Network Virtualization Overlay (NVO) services in data
center (DC) networks and as the next-generation VPN service in
service provider (SP) networks.

As service providers transform the networks in their central offices
(COs) toward next-generation data centers with Software Defined
Networking (SDN) based fabrics and Network Function Virtualization
(NFV), they want to be able to maintain their offered services,
including the multicast VPN (MVPN) service, between their existing
network and their new service-provider data center (SPDC) network
seamlessly, without the use of gateway devices. They want such
seamless interoperability between their new SPDCs and their existing
networks in order to a) reduce cost, b) achieve optimum forwarding,
and c) reduce provisioning. This document describes a unified
solution based on RFC 6513 for seamless interoperability of multicast
VPN between EVPN and MVPN PEs. Furthermore, it describes how the
proposed solution can be used as a routed multicast solution in data
centers with PEs that run EVPN integrated routing and bridging
(EVPN-IRB).

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as
Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

Copyright and License Notice

Copyright (c) 2017 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

Table of Contents

1. Introduction
2. Requirements Language
3. Terminology
4. Requirements
   4.1. Optimum Forwarding
   4.2. Optimum Replication
   4.3. All-Active and Single-Active Multi-Homing
   4.4. Inter-AS Tree Stitching
   4.5. EVPN Service Interfaces
   4.6. Distributed Anycast Gateway
   4.7. Selective & Aggregate Selective Tunnels
   4.8. Tenants' (S,G) or (*,G) states
5. Solution
   5.1. Operational Model for Homogenous EVPN IRB NVEs
      5.1.1 Control Plane Operation
      5.1.2 Data Plane Operation
         5.1.2.1 Sender and Receiver in same MAC-VRF
         5.1.2.2 Sender and Receiver in different MAC-VRFs
   5.2. Operational Model for Heterogeneous EVPN IRB PEs
   5.3. All-Active Multi-Homing
      5.3.1. Source and receivers in same ES but on different subnets
      5.3.2. Source and some receivers in same ES and on same subnet
   5.4. Mobility for Tenant's sources and receivers
   5.5. Single-Active Multi-Homing
6. DCs with only EVPN NVEs
   6.1 Setup of overlay multicast delivery
   6.3 Data plane considerations
7. Handling of different encapsulations
   7.1 MPLS Encapsulation
   7.2 VxLAN Encapsulation
   7.3 Other Encapsulation
8. DCI with MPLS in WAN and VxLAN in DCs
   8.1 Control plane inter-connect
   8.2 Data plane inter-connect
   8.3 Multi-homing among DCI gateways
9. Inter-AS Operation
10. Use Cases
   10.1 DCs with only IGMP/MLD hosts w/o tenant router
   10.2 DCs with a mix of IGMP/MLD hosts & multicast routers
        running PIM-SSM
   10.3 DCs with a mix of IGMP/MLD hosts & multicast routers
        running PIM-ASM
   10.4 DCs with a mix of IGMP/MLD hosts & multicast routers
        running PIM-Bidir
11. IANA Considerations
12. Security Considerations
13. Acknowledgements
14. References
   14.1. Normative References
   14.2. Informative References
15. Authors' Addresses

1. Introduction

The Ethernet Virtual Private Network (EVPN) solution is becoming
pervasive for Network Virtualization Overlay (NVO) services in data
center (DC) networks and as the next-generation VPN service in
service provider (SP) networks.
As service providers transform the networks in their COs toward
next-generation data centers with Software Defined Networking (SDN)
based fabrics and Network Function Virtualization (NFV), they want to
be able to maintain their offered services, including the multicast
VPN (MVPN) service, between their existing network and their new SPDC
network seamlessly, without the use of gateway devices. There are
several reasons for having such seamless interoperability between
their new DCs and their existing networks:

- Lower Cost: gateway devices need very high scalability to handle
  the VPN services for their DCs and, as such, need to handle a large
  number of VPN instances (in the tens or hundreds of thousands) and
  a very large number of routes (e.g., in the millions). For the same
  speeds and feeds, these high-scale gateway boxes are much more
  expensive than the TOR devices, which support a much smaller number
  of routes and VPN instances.

- Optimum Forwarding: in a given CO, both EVPN PEs and MVPN PEs can
  be connected to the same network (e.g., the same IGP domain). In
  such scenarios, the service providers want optimum forwarding among
  these PE devices without the use of gateway devices, because if
  gateway devices are used, the multicast traffic between EVPN and
  MVPN PEs can no longer be forwarded optimally and, in some cases,
  may even get tromboned. Furthermore, when an SPDC network spans
  multiple LATAs (multiple geographic areas) and gateways are used
  between EVPN and MVPN PEs, then with respect to multicast traffic,
  only one gateway can be the designated forwarder (DF) between the
  EVPN and MVPN PEs. Such a scenario not only results in non-optimal
  forwarding but can also result in tromboning of multicast traffic
  between the two LATAs when both the source and destination PEs are
  in the same LATA and the DF gateway is elected in a different LATA.

- Less Provisioning: if gateways are used, then the operator needs to
  configure per-tenant information on them. In other words, for each
  tenant that is configured, one (or maybe two) additional touch
  points are needed.

This document describes a unified solution based on [RFC6513] and
[RFC6514] for seamless interoperability of multicast VPN between EVPN
and MVPN PEs. Furthermore, it describes how the proposed solution can
be used as a routed multicast solution for EVPN-only applications in
data centers (e.g., routed multicast VPN only among EVPN PEs).

2. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" are to
be interpreted as described in [RFC2119] only when they appear in all
upper case. They may also appear in lower or mixed case as English
words, without any normative meaning.
3. Terminology

ARP:   Address Resolution Protocol
BEB:   Backbone Edge Bridge
B-MAC: Backbone MAC Address
CE:    Customer Edge
C-MAC: Customer/Client MAC Address
ES:    Ethernet Segment
ESI:   Ethernet Segment Identifier
IRB:   Integrated Routing and Bridging
LSP:   Label Switched Path
MP2MP: Multipoint to Multipoint
MP2P:  Multipoint to Point
ND:    Neighbor Discovery
NA:    Neighbor Advertisement
P2MP:  Point to Multipoint
P2P:   Point to Point
PE:    Provider Edge
EVPN:  Ethernet VPN
EVI:   EVPN Instance
RT:    Route Target

Single-Active Redundancy Mode: When only a single PE, among a group
of PEs attached to an Ethernet Segment, is allowed to forward traffic
to/from that Ethernet Segment, the Ethernet Segment is defined to be
operating in Single-Active redundancy mode.

All-Active Redundancy Mode: When all PEs attached to an Ethernet
Segment are allowed to forward traffic to/from that Ethernet Segment,
the Ethernet Segment is defined to be operating in All-Active
redundancy mode.

4. Requirements

This section describes the requirements specific to providing
seamless multicast VPN service between MVPN and EVPN capable
networks.

4.1. Optimum Forwarding

The solution SHALL support optimum multicast forwarding between EVPN
and MVPN PEs within a network. The network can be confined to a CO or
it can span multiple LATAs. The solution SHALL support optimum
multicast forwarding with both ingress replication tunnels and P2MP
tunnels.

4.2. Optimum Replication

For EVPN PEs with IRB capability, the solution SHALL use only a
single multicast tunnel among EVPN and MVPN PEs for IP multicast
traffic. Multicast tunnels can be either ingress replication tunnels
or P2MP tunnels. The solution MUST support optimum replication for
both intra-subnet and inter-subnet IP multicast traffic:

- Non-IP traffic SHALL be forwarded per the EVPN baseline [RFC7432]
  or [OVERLAY].

- If a multicast VPN spans both intra- and inter-subnet traffic, then
  for ingress replication, regardless of whether the traffic is
  intra- or inter-subnet, only a single copy of the multicast traffic
  SHALL be sent from the source PE to the destination PE.

- If a multicast VPN spans both intra- and inter-subnet traffic, then
  for P2MP tunnels, regardless of whether the traffic is intra- or
  inter-subnet, only a single copy of the multicast data SHALL be
  transmitted by the source PE. The source PE can be either an EVPN
  or an MVPN PE, and the receiving PEs can be a mix of EVPN and MVPN
  PEs - i.e., a multicast VPN can be spread across both EVPN and MVPN
  PEs.

4.3. All-Active and Single-Active Multi-Homing

The solution MUST support multi-homing of source devices and
receivers that sit in the same subnet (e.g., VLAN) and are
multi-homed to EVPN PEs. The solution SHALL allow for both
Single-Active and All-Active multi-homing. The solution MUST prevent
loops during steady and transient states, just like the EVPN baseline
solution [RFC7432] and [OVERLAY], for all multi-homing types.

4.4. Inter-AS Tree Stitching

The solution SHALL support multicast tree stitching when the tree
spans multiple Autonomous Systems.
4.5. EVPN Service Interfaces

The solution MUST support all EVPN service interfaces listed in
section 6 of [RFC7432]:

- VLAN-based service interface
- VLAN-bundle service interface
- VLAN-aware bundle service interface

4.6. Distributed Anycast Gateway

The solution SHALL support distributed anycast gateways for tenant
workloads on NVE devices operating in EVPN-IRB mode.

4.7. Selective & Aggregate Selective Tunnels

The solution SHALL support selective and aggregate selective
P-tunnels as well as inclusive and aggregate inclusive P-tunnels.
When selective tunnels are used, multicast traffic SHOULD only be
forwarded to the remote PEs that have receivers - i.e., if there are
no receivers at a remote PE, the multicast traffic SHOULD NOT be
forwarded to that PE, and if there are no receivers on any remote
PEs, the multicast traffic SHOULD NOT be forwarded to the core.

4.8. Tenants' (S,G) or (*,G) states

The solution SHOULD store (C-S,C-G) and (C-*,C-G) states only on PE
devices that have interest in such states, hence reducing memory and
processing requirements - i.e., only on PE devices that have sources
and/or receivers interested in such multicast groups.

5. Solution

[EVPN-IRB] describes the operation of EVPN PEs in IRB mode for
unicast traffic. The same EVPN PE model, where an IP-VRF is attached
to one or more MAC-VRFs via virtual IRB interfaces, is also
applicable here. However, there are some noticeable differences
between IRB-mode operation for unicast traffic described in
[EVPN-IRB] and for multicast traffic described here. For unicast
traffic, intra-subnet traffic is bridged within the MAC-VRF
associated with that subnet (i.e., a lookup based on the MAC-DA is
performed), whereas inter-subnet traffic is routed in the
corresponding IP-VRF (i.e., a lookup based on the IP-DA is
performed). A given tenant can have one or more IP-VRFs; however,
without loss of generality, this document assumes one IP-VRF per
tenant. For multicast traffic, intra-subnet traffic is bridged for
non-IP traffic and Layer-2 switched for IP traffic. The
differentiation between bridging and L2 switching for multicast
traffic is that the former uses a MAC-DA lookup to forward the
traffic, whereas the latter uses an IP-DA lookup to forward the
multicast traffic, with the forwarding states built using IGMP/MLD
snooping. Inter-subnet multicast traffic is always routed in the
corresponding IP-VRF.

This section describes a multicast VPN solution based on [RFC6513]
and [RFC6514] for EVPN PEs operating in IRB mode that want to perform
seamless interoperability with their counterpart MVPN PEs.
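As a non-normative illustration, the per-packet decision implied by
the bridging/L2-switching/routing differentiation above can be
sketched in Python as follows. The names and helper methods are
purely illustrative and do not imply any implementation requirement:

   # Non-normative sketch of the multicast forwarding decision on an
   # EVPN-IRB PE, per the differentiation described above.
   LINK_LOCAL_V4_PREFIX = "224.0.0."     # link-local groups stay L2

   def classify_multicast(frame, ingress_bd):
       """Return the forwarding action for a received multicast frame."""
       if not frame.is_ip:
           return "BRIDGE"               # non-IP: MAC-DA lookup (BUM)
       if frame.ip_da.startswith(LINK_LOCAL_V4_PREFIX):
           return "BRIDGE"               # link-local: EVPN BUM procedures
       # IP multicast: intra-subnet receivers (learned via IGMP/MLD
       # snooping) are L2-switched on an IP-DA lookup, without a TTL
       # decrement; everything else is routed in the tenant IP-VRF.
       if ingress_bd.has_snooped_receivers(frame.ip_sa, frame.ip_da):
           return "L2_SWITCH_AND_ROUTE"  # serve subnet, then route rest
       return "ROUTE"                    # inter-subnet only: IP-VRF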
5.1. Operational Model for Homogenous EVPN IRB NVEs

In this section, we consider the scenario where all EVPN PEs have IRB
capability and operate in IRB mode for both unicast and multicast
traffic (i.e., all EVPN PEs are homogenous in terms of their
capabilities and operational modes). In this scenario, the EVPN PEs
terminate IGMP/MLD messages from tenant host devices, or PIM messages
from tenant routers, on their IRB interfaces, thus avoiding sending
these messages over the MPLS/IP core. A tenant virtual/physical
router (e.g., CE) attached to an EVPN PE becomes a multicast routing
adjacency of that PE, and the multicast routing protocol on the PE-CE
link is presumed to be PIM-SM with both the ASM and the SSM service
models per [RFC6513]. Furthermore, the PE uses the MVPN BGP protocol
and procedures per [RFC6513] and [RFC6514]. With respect to the
tenant PIM protocol, PIM-SM with the Any Source Multicast (ASM)
service model, PIM-SM with the Source Specific Multicast (SSM)
service model, and PIM Bidirectional (BIDIR) mode are all supported
per [RFC6513]. Support of PIM-DM (Dense Mode) is excluded in this
document per [RFC6513].

The EVPN PEs use MVPN BGP routes [RFC6514] to convey tenant (S,G) or
(*,G) states to other MVPN or EVPN PEs and to set up overlay trees
(inclusive or selective) for a given MVPN. The leaves and roots of
these overlay trees are Provider Multicast Service Interfaces
(PMSIs), which can be Inclusive PMSIs (I-PMSIs) or Selective PMSIs
(S-PMSIs) per [RFC6513]. A given PMSI is associated with a single
IP-VRF of an EVPN PE and/or an MVPN PE for that MVPN - e.g., an MVPN
PMSI is never associated with a MAC-VRF of an EVPN PE. Overlay trees
are instantiated by underlay provider tunnels (P-tunnels) - e.g.,
P2MP, MP2MP, or unicast tunnels per [RFC6513]. When there is a
many-to-one mapping of PMSIs to a P-tunnel (e.g., mapping many
S-PMSIs or many I-PMSIs to a single P-tunnel), the tunnel is referred
to as an aggregate tunnel.

Figure-1 below depicts a scenario where a tenant's MVPN spans both
EVPN and MVPN PEs, and all EVPN PEs have IRB capability. An EVPN PE
(with IRB capability) can be modeled as an MVPN PE where the virtual
IRB interface of the EVPN PE (the virtual interface between a MAC-VRF
and the IP-VRF) is considered an attachment circuit (AC) of the MVPN
PE. In other words, an EVPN PE can be modeled as a PE that consists
of an MVPN PE whose ACs are replaced with IRB interfaces connecting
each IP-VRF of the MVPN PE to a set of MAC-VRFs. Just as an
attachment circuit of an MVPN PE serves as a routed multicast
interface for the IP-VRF associated with an MVPN instance, an IRB
interface serves as a routed multicast interface for the IP-VRF
associated with the MVPN instance. Since EVPN PEs run the MVPN
protocols (e.g., [RFC6513] and [RFC6514]), for all practical purposes
they look just like MVPN PEs to other PE devices. Such modeling of
EVPN PEs transforms the multicast VPN operation of EVPN PEs to that
of [RFC6513] and thus simplifies the interoperability between EVPN
and MVPN PEs to running a single unified solution based on [RFC6513].
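This modeling can be pictured with a small, non-normative Python
sketch; the class and instance names are purely illustrative:

   # Non-normative sketch of the PE model described above: an
   # MVPN-style IP-VRF whose "attachment circuits" are the virtual IRB
   # interfaces connecting it to the tenant's MAC-VRFs.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class MacVrf:
       name: str                                     # one per subnet/BD
       acs: List[str] = field(default_factory=list)  # L2 ACs

   @dataclass
   class IpVrf:                                      # one per tenant
       tenant: str
       irb_interfaces: List[MacVrf] = field(default_factory=list)

   # To other PEs running [RFC6513]/[RFC6514] procedures, this IP-VRF
   # is indistinguishable from the IP-VRF of a native MVPN PE; each IRB
   # interface plays the role an attachment circuit plays in MVPN.
   pe1 = IpVrf("tenant-1",
               [MacVrf("MAC-VRF1", ["Src1", "Rcvr1"]),
                MacVrf("MAC-VRF2", ["Rcvr2"])])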
                EVPN PE1
            +------------+
  Src1  +---|(MAC-VRF1)  |                   MVPN PE1
  Rcvr1 +---|     \      |   +---------+    +--------+
            |    (IP-VRF)|---|         |----|(IP-VRF)|--- Rcvr5
            |     /      |   |         |    +--------+
  Rcvr2 +---|(MAC-VRF2)  |   |         |
            +------------+   |         |
                             |  MPLS/  |
                EVPN PE2     |   IP    |
            +------------+   |         |
  Rcvr3 +---|(MAC-VRF1)  |   |         |     MVPN PE2
            |     \      |   |         |    +--------+
            |    (IP-VRF)|---|         |----|(IP-VRF)|--- Rcvr6
            |     /      |   +---------+    +--------+
  Rcvr4 +---|(MAC-VRF3)  |
            +------------+

            Figure-1: Homogenous EVPN NVEs

Although modeling an EVPN PE as an MVPN PE conceptually simplifies
the operation to that of a solution based on [RFC6513], the following
operational aspects of EVPN are impacted and need to be factored into
the solution:

1) All-Active multi-homing of IP multicast sources and receivers
2) Mobility for tenant's sources and receivers
3) Unicast route advertisements for IP multicast sources
4) Non-IP multicast traffic handling

The first item, All-Active multi-homing of IP multicast sources and
receivers, is described in section 5.3. The second item is described
in section 5.4. The third and fourth items are described next.

When an IP multicast source is attached to an EVPN PE, the unicast
route for that IP multicast source needs to be advertised. This
unicast route is advertised with the VRF Route Import extended
community, which in turn is used as the Route Target for the (S,G)
join messages sent toward the source PE by the remote MVPN PEs. The
EVPN PE advertises this unicast route using an EVPN route type 5, an
IP-VPN unicast route, or both, along with the VRF Route Import
extended community. When unicast routes are advertised by MVPN PEs,
they are advertised as IP-VPN unicast routes along with the VRF Route
Import extended community per [RFC6514].

Link-local multicast traffic (e.g., addressed to 224.0.0.x in the
case of IPv4), traffic of IP protocols such as OSPF, and non-IP
multicast/broadcast traffic are sent per the EVPN [RFC7432] BUM
procedures and do not get routed via the IP-VRF. Such BUM traffic is
therefore limited to a given EVI/VLAN (e.g., a given subnet), whereas
IP multicast traffic is locally switched for local interfaces
attached to the same subnet and is routed for local interfaces
attached to a different subnet or for forwarding traffic to other
EVPN PEs (refer to section 5.1.2 for the data plane operation).

5.1.1 Control Plane Operation

Just like an MVPN PE, an EVPN PE runs a separate tenant multicast
routing instance (VPN-specific) per MVPN instance, and the following
tenant multicast routing instances are supported:

- PIM Sparse Mode (PIM-SM) with the ASM service model
- PIM Sparse Mode with the SSM service model
- PIM Bidirectional Mode (BIDIR-PIM), which uses bidirectional
  tenant trees to support the ASM service model

A given tenant's PIM join messages, (C-*,C-G) or (C-S,C-G), are
processed by the corresponding tenant multicast routing protocol and
are advertised over the MPLS/IP network using a Shared Tree Join
route (route type 6) or a Source Tree Join route (route type 7),
respectively, of the MCAST-VPN NLRI per [RFC6514].
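As a non-normative illustration of this mapping, the following Python
sketch packs a Source Tree Join (route type 7) MCAST-VPN NLRI using
the field layout of [RFC6514]; the RD, AS number, and addresses are
hypothetical example values:

   # Non-normative sketch: packing a Source Tree Join (MCAST-VPN route
   # type 7) NLRI per [RFC6514] for a tenant (C-S,C-G) join received
   # over an IRB interface.  Values below are hypothetical examples.
   import socket
   import struct

   def pack_source_tree_join(rd: bytes, source_as: int,
                             c_source: str, c_group: str) -> bytes:
       src = socket.inet_aton(c_source)      # 4 octets for IPv4
       grp = socket.inet_aton(c_group)
       body = (rd                            # 8-octet Route Distinguisher
               + struct.pack("!I", source_as)
               + struct.pack("!B", len(src) * 8) + src   # length in bits
               + struct.pack("!B", len(grp) * 8) + grp)
       # MCAST-VPN NLRI: route type (7), length, route-type-specific body
       return struct.pack("!BB", 7, len(body)) + body

   rd = struct.pack("!HHI", 0, 64512, 100)   # type-0 RD 64512:100
   nlri = pack_source_tree_join(rd, 64512, "10.1.1.10", "232.1.1.1")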
The following NLRIs from [RFC6514] SHOULD be used for forming
underlay/core tunnels inside a data center:

- The Intra-AS I-PMSI A-D route is used to form the default tunnel
  (also called the inclusive tunnel) for a tenant VRF. The tunnel
  attributes are indicated using the PMSI Tunnel attribute carried
  with this route.

- The S-PMSI A-D route is used to form customer-flow-specific
  underlay tunnels. This enables selective delivery of data to the
  PEs with active receivers and optimizes fabric bandwidth
  utilization. The tunnel attributes are indicated using the PMSI
  Tunnel attribute carried with this route.

- The Source Active A-D route is used by a source-connected PE to
  announce an active multicast source. This enables PEs with active
  receivers for the flow to join the tunnels and switch to the
  Shortest-Path Tree.

Each EVPN PE supporting a specific MVPN discovers the set of other
PEs in its AS that are attached to sites of that MVPN using the
Intra-AS I-PMSI A-D route (route type 1) per [RFC6514]. It can also
discover the set of other ASes that have PEs attached to sites of
that MVPN using the Inter-AS I-PMSI A-D route (route type 2) per
[RFC6514]. After the discovery of the PEs that are attached to sites
of the MVPN, an inclusive overlay tree (I-PMSI) can be set up for
carrying tenant multicast flows for that MVPN; however, this is not a
requirement per [RFC6514], and it is possible to adopt a policy in
which all tenant flows are carried on S-PMSIs.

An EVPN PE also sets up a multipoint-to-multipoint (MP2MP) tree per
EVI using the Inclusive Multicast Ethernet Tag route (route type 3)
of the EVPN NLRI per [RFC7432]. This MP2MP tree can be instantiated
using unicast tunnels or P2MP tunnels. In [RFC7432], this tree is
used for transmission of all BUM traffic, including IP multicast
traffic. However, for multicast traffic handling in EVPN-IRB PEs,
this tree is used for all broadcast, unknown-unicast, and non-IP
multicast traffic - i.e., it is used for all BUM traffic except IP
multicast user traffic. Therefore, an EVPN-IRB PE sends a customer IP
multicast flow only on the single tunnel instantiated for the MVPN
I-PMSI or S-PMSI. In other words, IP multicast traffic sent over the
MPLS/IP network is sent not off of the MAC-VRF but rather off of the
IP-VRF.

If a tenant host device is multi-homed to two or more EVPN PEs using
All-Active multi-homing, then IGMP join and leave messages are
synchronized between these EVPN PEs using the EVPN IGMP Join Synch
route (route type 7) and the EVPN IGMP Leave Synch route (route type
8). There is no need to use the EVPN Selective Multicast Ethernet Tag
route (SMET route), because the IGMP messages are terminated by the
EVPN-IRB PE and tenant (S,G) or (*,G) joins are conveyed via MVPN
Source/Shared Tree Join routes.

5.1.2 Data Plane Operation

When an EVPN-IRB PE receives an IGMP/MLD join message over one of its
Attachment Circuits (ACs), it adds that AC to its Layer-2 (L2) OIF
list. This L2 OIF list is associated with the MAC-VRF corresponding
to the subnet of the tenant device that sent the IGMP/MLD join.
Tenant (S,G) or (*,G) forwarding entries are therefore
created/updated for the corresponding MAC-VRF based on these source
and group IP addresses. Furthermore, the IGMP/MLD join message is
propagated over the corresponding IRB interface and is processed by
the tenant multicast routing instance, which creates the
corresponding tenant (S,G) or (*,G) Layer-3 (L3) forwarding entries
and adds the IRB interface to the L3 OIF list. An IRB interface is
removed as an L3 OIF when all L2 tenant (S,G) or (*,G) forwarding
state is removed for the MAC-VRF associated with that IRB.
Furthermore, tenant (S,G) or (*,G) L3 forwarding state is removed
when all of its L3 OIFs are removed - i.e., when all the IRB
interfaces associated with that tenant (S,G) or (*,G) are removed.
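The L2/L3 OIF-list maintenance just described can be summarized in
the following non-normative Python sketch; class and method names are
illustrative only:

   # Non-normative sketch of the L2/L3 OIF-list maintenance described
   # above.  An AC join populates the MAC-VRF's L2 OIF list; the join
   # is then propagated over the IRB, so the IRB appears as an L3 OIF.
   from collections import defaultdict

   class EvpnIrbPe:
       def __init__(self):
           self.l2_oif = defaultdict(set)   # (mac_vrf, s, g) -> ACs
           self.l3_oif = defaultdict(set)   # (s, g)          -> IRBs

       def igmp_join(self, mac_vrf, ac, s, g):
           self.l2_oif[(mac_vrf, s, g)].add(ac)
           self.l3_oif[(s, g)].add("irb-" + mac_vrf)  # over the IRB

       def igmp_leave(self, mac_vrf, ac, s, g):
           self.l2_oif[(mac_vrf, s, g)].discard(ac)
           if not self.l2_oif[(mac_vrf, s, g)]:
               # no L2 state left in this MAC-VRF: remove its IRB
               self.l3_oif[(s, g)].discard("irb-" + mac_vrf)
               if not self.l3_oif[(s, g)]:
                   del self.l3_oif[(s, g)]  # all L3 OIFs gone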
When an EVPN-IRB PE receives IP multicast traffic and it has attached
receivers on the source subnet, it L2-switches such intra-subnet
traffic. It also sends the multicast traffic over the corresponding
IRB interface; the traffic then gets routed (with the TTL
decremented) over the IRB interfaces that are included in the L3 OIF
list for that multicast flow. When the multicast traffic is received
on an IRB interface by the MAC-VRF corresponding to that interface,
it gets L2-switched and sent over the ACs that belong to the L2 OIF
list. Furthermore, the multicast traffic gets sent over the I-PMSI or
S-PMSI associated with that multicast flow to the other PE devices
participating in that MVPN.

5.1.2.1 Sender and Receiver in same MAC-VRF

Rcvr1 in Figure-1 is connected to PE1 in MAC-VRF1 (the same MAC-VRF
as Src1) and sends an IGMP join for (C-S,C-G); IGMP snooping records
this state in a local bridging entry. A routing entry is formed as
well, which points to MAC-VRF1 as the RPF interface for Src1. We
assume that Src1 is known via ARP or similar procedures. Rcvr1 gets a
locally bridged copy of the multicast traffic from Src1. Rcvr3 is
also connected in MAC-VRF1, but to PE2, and hence sends an IGMP join
that is recorded at PE2. PE2 also forms a routing entry, with the RPF
assumed to be the tenant tunnel "Tenant1" formed beforehand using
MVPN procedures. This also causes the multicast control plane to
initiate a BGP MCAST-VPN route type 7, which includes the VRI for PE1
and hence is accepted by PE1. PE1 includes the Tenant1 tunnel as an
Outgoing Interface (OIF) in the routing entry. Now, since it has
knowledge of remote receivers via the MVPN control plane, PE1
encapsulates the original multicast traffic in the Tenant1 tunnel
towards the core. On PE2, since C-S falls in the MAC-VRF1 subnet, the
MAC-VRF1 outgoing interface is treated as ingress MAC-VRF bridging;
hence no rewrite is performed on the received customer data packet
while forwarding towards Rcvr3.

5.1.2.2 Sender and Receiver in different MAC-VRFs

Rcvr2 in Figure-1 is connected to PE1 in MAC-VRF2, and hence PE1
records its membership in MAC-VRF2. Since MAC-VRF2 is enabled with
IRB, it gets added as another OIF to the routing entry formed for
(C-S,C-G). Rcvr4 is likewise in a different MAC-VRF than the
multicast source Src1 and hence needs inter-subnet forwarding. PE2
forms a local bridging entry in MAC-VRF3 due to the IGMP join
received from Rcvr4 and adds MAC-VRF3 as another OIF to its existing
routing entry. There is no change in control plane state, since PE2
has already sent the MVPN route and no further signaling is required.
Also, since Src1 is not part of the receiver's subnet, the receiver's
MAC-VRF is treated as a routing OIF, and hence the MAC header gets
modified as per the normal procedures for routing. It is to be noted
that a PE that does not have MAC-VRF1 configured locally can still
receive the multicast data traffic over the Tenant1 tunnel formed by
the MVPN procedures.

5.2. Operational Model for Heterogeneous EVPN IRB PEs
5.3. All-Active Multi-Homing

The EVPN solution [RFC7432] uses the ESI MPLS label for split-horizon
filtering of Broadcast/Unknown-unicast/Multicast (BUM) traffic from
an All-Active multi-homed Ethernet Segment, to ensure that BUM
traffic doesn't get looped back to the Ethernet Segment it came from.
In MVPN, there is no concept of an ESI label or split-horizon
filtering, because there is no support for All-Active multi-homing;
however, EVPN NVEs rely on this function to prevent loops on an
access Ethernet Segment. Figure-2 depicts a source sitting behind an
All-Active dual-homed Ethernet Segment. The following scenarios need
special consideration:

                 EVPN PE1
              +------------+
    Rcvr1 +---|(MAC-VRF1)  |                  MVPN PE1
              |     \      |   +---------+   +--------+
              |    (IP-VRF)|---|         |---|(IP-VRF)|--- Rcvr4
              |     /      |   |         |   +--------+
          +---|(MAC-VRF2)  |   |         |
    Src1  |   +------------+   |         |
    (ES1) |                    |  MPLS/  |
    Rcvr6 |      EVPN PE2      |   IP    |
    (*,G) |   +------------+   |         |
          +---|(MAC-VRF2)  |   |         |    MVPN PE2
              |     \      |   |         |   +--------+
              |    (IP-VRF)|---|         |---|(IP-VRF)|--- Rcvr5
              |     /      |   +---------+   +--------+
    Rcvr2 +---|(MAC-VRF3)  |
              +------------+

              Figure-2: Multi-homing

5.3.1. Source and receivers in same ES but on different subnets

If the tenant multicast source sits on a different subnet than its
receivers, then the EVPN DF election procedure for the multi-homed ES
is sufficient, and there is no need to do any split-horizon filtering
for that Ethernet Segment: with IGMP/MLD snooping enabled on the
VLANs of the multi-homed ES, only the VLANs for which IGMP/MLD joins
have been received are placed in the OIF list for that (S,G) or (*,G)
on that ES. Therefore, the multicast traffic is not looped back onto
the source subnet (because there is no receiver on that subnet), and
for the other subnets onto which the multicast traffic is looped
back, the DF election ensures that only a single copy of the
multicast traffic is sent on each subnet.

5.3.2. Source and some receivers in same ES and on same subnet

If the tenant multicast source sits on the same subnet and the same
ES as some of its receivers, and those receivers have interest in
(*,G), then besides the DF election mechanism, split-horizon
filtering is needed to ensure that the multicast traffic originated
from that Ethernet Segment is not looped back to it. The existing
split-horizon filtering specified in [RFC7432] cannot be used,
because the received VPN label identifies the multicast IP-VRF and
not the MAC-VRF. The egress PE therefore doesn't know for which
EVI/BD it needs to perform split-horizon filtering and for which
EVI/BDs belonging to the same ES it need not perform split-horizon
filtering. This issue is resolved by extending the local-bias
solution per [OVERLAY] to MPLS tunnels. There are two cases to
consider here: a) ingress replication tunnels are used for the
multicast traffic, and b) P2MP tunnels are used for the multicast
traffic.

If ingress replication tunnels are used, then instead of advertising
an ESI label, each PE in the multi-homing group advertises to each
other PE in the group a downstream-assigned label identifying itself,
so that when a PE receives a packet with this label, it knows which
PE originated the packet. Once the egress PE can identify the
originating PE for a packet, it can execute the local-bias procedure
per [OVERLAY] for each of its EVI/BDs corresponding to that IP-VRF.
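The following non-normative Python sketch illustrates this
tenant-wide local-bias check; the function and field names are
illustrative, and the originating PE is assumed to have been
identified from the downstream-assigned label (or, in the P2MP case
described next, from the tunnel label):

   # Non-normative sketch of the tenant-wide local-bias check: having
   # identified the originating PE, the egress PE skips every local AC
   # whose Ethernet Segment is shared with that PE, across all BDs of
   # the tenant.
   def eligible_acs(tenant_bds, originating_pe, shared_es):
       """shared_es: set of (pe, esi) pairs learned from ES routes."""
       out = []
       for bd in tenant_bds:              # all BDs of the tenant IP-VRF
           for ac in bd.l2_oifs:
               if (originating_pe, ac.esi) in shared_es:
                   continue               # local bias: the PE in the
                                          # same ES delivered it locally
               out.append(ac)
       return out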
If P2MP tunnels are used (e.g., mLDP, RSVP-TE, or BIER), the tunnel
label identifies the tunnel and thus the originating PE. Since the
originating PE can be identified, the local-bias procedure per
[OVERLAY] is applied to prevent multicast data from being sent on the
Ethernet Segments in common with the originating PE. The difference
between the local-bias procedure here and the one described in
[OVERLAY] is that the multicast traffic in [OVERLAY] is only intended
for one subnet (and thus one BD), whereas the multicast traffic in
Figure-2 can span multiple subnets (and thus multiple BDs).
Therefore, the local-bias procedure of [OVERLAY] is expanded to
perform local bias across all the BDs of the tenant. In other words,
the same local-bias procedure is applied to all BDs of that tenant on
both the originating EVPN NVE and all other EVPN NVEs that share an
Ethernet Segment with the originating EVPN NVE.

5.4. Mobility for Tenant's sources and receivers

5.5. Single-Active Multi-Homing

6. DCs with only EVPN NVEs

As mentioned earlier, the proposed solution can be used as a routed
multicast solution for EVPN-only applications in data centers (e.g.,
routed multicast VPN only among EVPN PEs). It should be noted that
the scope of intra-subnet forwarding for the solution described in
this document is limited to a single EVPN-IRB PE. In other words, IP
multicast traffic that needs to be forwarded from one PE to another
is always routed (L3 forwarded), regardless of whether the traffic is
intra-subnet or inter-subnet. As a result, the TTL value for
intra-subnet traffic that spans two or more PEs gets decremented.
Based on past experience with MVPN over the last dozen years for the
supported IP multicast applications, Layer-3 forwarding of
intra-subnet multicast traffic should be fine. However, if there are
applications that require intra-subnet multicast traffic to be L2
forwarded (i.e., without decrementing the TTL value), then
[EVPN-IRB-MCAST] proposes a solution to accommodate such
applications.

6.1 Setup of overlay multicast delivery

It must be emphasized that this solution poses no restriction on the
setup of the tenant BDs and that neither the source PE nor the
receiver PEs need to know/learn about the BD configuration on other
PEs in the MVPN. The Reverse Path Forwarding (RPF) interface is
selected per the tenant multicast source and the IP-VRF, in
compliance with the procedures in [RFC6514], using the incoming IP
Prefix route (route type 5) of the EVPN NLRI.

The VRF Route Import (VRI) extended community that is carried with
the IP-VPN routes in [RFC6514] MUST be carried via the EVPN unicast
routes instead. The construction and processing of the VRI are
consistent with [RFC6514]. The VRI MUST uniquely identify the PE that
is advertising a multicast source and the IP-VRF in which the source
resides.

The VRI is constructed as follows:

- The 4-octet Global Administrator field MUST be set to an IP
  address of the PE. This address SHOULD be common for all the
  IP-VRFs on the PE (e.g., this address may be the PE's loopback
  address).

- The 2-octet Local Administrator field associated with a given
  IP-VRF contains a number that uniquely identifies that IP-VRF
  within the PE that contains it.
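A non-normative Python sketch of this construction follows. The
type/sub-type values (0x01/0x0b) are the IPv4-address-specific VRF
Route Import code points from [RFC6514]; the address and identifier
below are hypothetical examples:

   # Non-normative sketch: building the VRF Route Import extended
   # community as constructed above (type 0x01, sub-type 0x0b per
   # [RFC6514]); Global Administrator = a PE address, Local
   # Administrator = the IP-VRF identifier within that PE.
   import socket
   import struct

   def vrf_route_import(pe_loopback: str, ip_vrf_id: int) -> bytes:
       return (struct.pack("!BB", 0x01, 0x0b)   # type / sub-type
               + socket.inet_aton(pe_loopback)  # 4-octet Global Admin
               + struct.pack("!H", ip_vrf_id))  # 2-octet Local Admin

   vri = vrf_route_import("192.0.2.1", 10)      # example values
   assert len(vri) == 8                         # extended communities
                                                # are 8 octets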
Every PE that detects a local receiver via a local IGMP join or a
local PIM join for a specific source (overlay SSM mode) MUST
terminate the IGMP/PIM signaling at the IP-VRF and generate a
(C-S,C-G) join via the BGP MCAST-VPN route type 7 per [RFC6514], if
and only if the RPF for the source points to the fabric. If the RPF
points to a local multicast source on the same MAC-VRF or a different
MAC-VRF on that PE, the MCAST-VPN route MUST NOT be advertised, and
data traffic will be locally routed/bridged to the receiver as
detailed in section 6.2.

The VRI received with the EVPN route type 5 NLRI from the source PE
will be appended as an export route-target extended community. More
details about the handling of various types of local receivers are in
section 10. The PE that has advertised the unicast route with the VRI
will import the incoming MCAST-VPN NLRI into the IP-VRF with the same
import route-target extended community, and other PEs SHOULD ignore
it. Following this procedure, the source PE learns about the
existence of at least one remote receiver in the tenant overlay and
programs its data plane accordingly, so that a single copy of the
multicast data is forwarded into the core using the tenant VRF
tunnel.

If the multicast source is unknown (overlay ASM mode), the MCAST-VPN
route type 6 (C-*,C-G) join SHOULD be targeted towards the designated
overlay Rendezvous Point (RP) by appending the received RP VRI as an
export route-target extended community. Every PE that detects a local
source registers with its RP PE. That is how the RP learns about the
tenant source(s) and group(s) within the MVPN. Once the overlay RP PE
receives either the first remote (C-RP,C-G) join or a local IGMP join
or a local PIM join, it triggers an MCAST-VPN route type 7 (C-S,C-G)
towards the actual source PE for which it has received a PIM register
message, in full compliance with regular PIM procedures. This causes
the source PE to advertise the MCAST-VPN Source Active A-D route
(MCAST-VPN route type 5) towards all PEs. The Source Active A-D route
is used to announce the active multicast source to all PEs in the
overlay, so that they can potentially switch from the RP Shared Tree
to the Shortest-Path Tree. The above procedure is optional per
[RFC6514]; alternatively, the operator MAY enable an auto-discovery
mode in which the temporary RP Shared Tree is not involved. In this
mode, the source PE MUST advertise the MCAST-VPN Source Active A-D
route (type 5) as soon as it detects data traffic from the local
tenant multicast source. Hence, the PEs at different sites of the
same MVPN will directly join the Shortest-Path Tree once they receive
the MCAST-VPN Source Active A-D route.

6.3 Data plane considerations

Data-center fabrics are implemented using a variety of core
technologies, the predominant ones being IP/VXLAN ingress
replication, IP/VXLAN PIM, and MPLS LSM. IP and MPLS have been the
predominant choices for the MVPN core as well; hence, all existing
procedures for forming tunnels with these technologies are applicable
in EVPN as well. Also, as described in an earlier section, since each
PE acts as the PIM DR in its locally connected Bridge Domains,
post-routed traffic MUST NOT be forwarded out of IRB interfaces
towards the core.
7. Handling of different encapsulations

Just as in [RFC6514], the A-D routes are used to form the overlay
multicast tunnels and to signal the tunnel type using the P-Multicast
Service Interface (PMSI) Tunnel attribute.

7.1 MPLS Encapsulation

[RFC6514] assumes an MPLS/IP core, and there is no modification to
its signaling procedures and encodings for PMSI tunnel formation.
Also, there is no need for a gateway to interoperate with non-EVPN
PEs supporting [RFC6514]-based MVPN over IP/MPLS.

7.2 VxLAN Encapsulation

In order to signal VXLAN, the corresponding BGP Encapsulation
extended community [TUNNEL-ENCAP] SHOULD be appended to the A-D
routes. The MPLS Label field in the PMSI Tunnel attribute MUST carry
the Virtual Network Identifier (VNI) associated with the customer
MVPN. The supported PMSI tunnel types with VXLAN encapsulation are:
PIM-SSM Tree, PIM-SM Tree, BIDIR-PIM Tree, and Ingress Replication
[RFC6514]. Further details are in [OVERLAY].

In this case, a gateway is needed for interoperation between the
EVPN-IRB PEs and non-EVPN MVPN PEs. The gateway should re-originate
the control plane signaling with the relevant tunnel encapsulation on
either side. In the data plane, the gateway terminates the tunnels
formed on either side and performs the relevant
stitching/re-encapsulation on data packets.

7.3 Other Encapsulation

In order to signal a different tunneling encapsulation, such as
NVGRE, VXLAN-GPE, or MPLSoGRE, the corresponding BGP Encapsulation
extended community [TUNNEL-ENCAP] SHOULD be appended to the A-D
routes. If the Tunnel Type field in the Encapsulation extended
community is set to a type that requires a Virtual Network Identifier
(VNI), e.g., VXLAN-GPE or NVGRE [TUNNEL-ENCAP], then the MPLS Label
field in the PMSI Tunnel attribute MUST carry the VNI associated with
the customer MVPN. As in the VXLAN case, a gateway is needed for
interoperation between the EVPN-IRB PEs and non-EVPN MVPN PEs.
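As a non-normative illustration of the encodings discussed in this
section, the following Python sketch builds a PMSI Tunnel attribute
value per [RFC6514] with a VNI in the Label field. It assumes the
24-bit VNI occupies the entire 3-octet MPLS Label field, and it uses
tunnel type 6 (Ingress Replication) from [RFC6514]; the VNI and
endpoint address are hypothetical examples:

   # Non-normative sketch: PMSI Tunnel attribute value per [RFC6514]
   # (Flags, Tunnel Type, MPLS Label, Tunnel Identifier), with the
   # 3-octet Label field assumed to carry the 24-bit VNI.  For tunnel
   # type 6 (Ingress Replication), the Tunnel Identifier is the
   # originating PE's address.
   import socket
   import struct

   def pmsi_tunnel_value(vni: int, tunnel_endpoint: str) -> bytes:
       flags, tunnel_type = 0x00, 6                   # no leaf info, IR
       label = struct.pack("!I", vni & 0xFFFFFF)[1:]  # 3-octet field
       return (struct.pack("!BB", flags, tunnel_type)
               + label
               + socket.inet_aton(tunnel_endpoint))   # tunnel id

   attr = pmsi_tunnel_value(10010, "192.0.2.1")       # example values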
8. DCI with MPLS in WAN and VxLAN in DCs

This section describes the interoperation between an MVPN MPLS WAN
and MVPN-EVPN in a data center that runs VxLAN. Since the tunnel
encapsulations in these networks are different, there must be at
least one gateway in between. Usually, two or more are required for
redundancy and load-balancing purposes. Some aspects of the
multi-homing between VxLAN DC networks and the MPLS WAN are in common
with [INTERCON-EVPN]. Herein, only the differences are described.

8.1 Control plane inter-connect

The gateway(s) MUST be set up with the inclusive set of all the
IP-VRFs that span the two domains. On each gateway, there will be at
least two BGP sessions: one towards the DC side and the other towards
the WAN side. Usually, for redundancy purposes, more sessions are set
up on each side. Unicast route propagation follows the exact same
procedures as in [INTERCON-EVPN]. Hence, a multicast host located in
either domain is advertised with the gateway IP address as the next
hop to the other domain. As a result, PEs view the hosts in the other
domain as directly attached to the gateway, and all inter-domain
multicast signaling is directed towards the gateway(s).

MVPN routes of types 1-7 received from either side of the gateway(s)
MUST NOT be reflected back to the same side, but are processed
locally and re-advertised (if needed) to the other side:

- Intra-AS I-PMSI A-D route: these are distributed within each domain
  to form the overlay tunnels, which terminate at the gateway(s).
  They are not passed to the other side of the gateway(s).

- C-Multicast route: joins are imported into the corresponding IP-VRF
  on each gateway and advertised as a new route to the other side
  with the following modifications (the rest of the NLRI fields and
  path attributes remain untouched):

  * The Route Distinguisher is set to that of the IP-VRF.
  * The Route Target is set to the export route-target list of the
    IP-VRF.
  * The PMSI Tunnel attribute and BGP Encapsulation extended
    community are modified according to section 7.
  * The next hop is set to the IP address that represents the
    gateway in each domain.

- Source Active A-D route: same as joins.

- S-PMSI A-D route: these are passed to the other side to form
  selective PMSI tunnels per (C-S,C-G) from the gateway to the PEs in
  the other domain, provided that domain contains receivers for the
  given (C-S,C-G). Modifications similar to those made to joins are
  made to the newly originated S-PMSI.

In addition, the Originating Router's IP Address is set to the GW's
IP address. Multicast signaling from/to hosts on local ACs on the
gateway(s) is generated and propagated in both domains (if needed)
per the procedures in section 6 of this document and in [RFC6514],
with no change. It must be noted that for a locally attached source,
the gateway will program in its forwarding plane an OIF for every
domain from which it receives a remote join, and a different
encapsulation will be used on the data packets for each domain.

Another point to note is that if there are multiple gateways in an
ESI that peer with each other, each one will receive two sets of the
local MCAST-VPN routes from the other gateway: 1) the WAN set and 2)
the DC set. Following the same procedure as in [INTERCON-EVPN], the
WAN set SHALL be given higher priority.

8.2 Data plane inter-connect

Traffic forwarding procedures on the gateways are the same as those
described for PEs in sections 5 and 6, except that, unlike a
non-border-leaf PE, the gateway will not only route or bridge the
incoming traffic from one side to its local receivers but will also
send it to the remote receivers in the other domain after
decapsulating it and appending the right encapsulation. The OIF and
IIF are programmed in the FIB based on the joins received from either
side and the RPF calculation towards the source or RP. The
decapsulation and encapsulation actions are programmed based on the
I-PMSI or S-PMSI A-D routes received from either side.

If there is more than one gateway between two domains, the
multi-homing procedures described in the following section must be
applied so that incoming traffic from one side is not looped back via
the other gateway.

The multicast traffic from local hosts on each gateway flows to the
other gateway with the preferred encapsulation (the WAN encapsulation
is preferred, as described in the previous section).

8.3 Multi-homing among DCI gateways

Just as in [INTERCON-EVPN], every set of multi-homed gateways between
the WAN and a given DC is assigned a unique ESI.

9. Inter-AS Operation
10. Use Cases

10.1 DCs with only IGMP/MLD hosts w/o tenant router

In an EVPN network consisting of only IGMP/MLD hosts, PEs will
receive IGMP (*,G) or (S,G) joins from their locally attached hosts
and will originate MVPN C-Multicast route type 6 or type 7 NLRIs,
respectively. As described in [RFC6514], these NLRIs are directed
towards the RP-PE for type 6 or towards the source PE for type 7. In
the case of a (*,G) join, a Shared-Path Tree is built in the core
from the RP-PE towards all receiver PEs. Once a source starts to send
multicast data to the specified multicast group, the PE directly
connected to the source performs PIM registration with the RP. Since
there are existing receivers for the group, the RP originates a PIM
(S,G) join towards the source, which is converted to an MVPN type 7
NLRI by the RP-PE. Note that, since there are no other routers, the
RP-PE is the PE configured as RP using static configuration or by
using BSR or Auto-RP procedures; the detailed workings of those
protocols are beyond the scope of this document. Upon receiving the
type 7 NLRI, the source PE includes the MVPN tunnel in its Outgoing
Interface list. Furthermore, the source PE follows the procedures in
[RFC6514] to originate an MVPN Source Active A-D route (route type 5)
to avoid duplicate traffic and to allow all receiver PEs to shift
from the Shared Tree to the Shortest-Path Tree rooted at the source
PE. Section 13 of [RFC6514] describes this.

However, a network operator can choose to have only Shortest-Path
Trees built in the MVPN core, as described in [RFC6513]. To achieve
this, every PE can act as the RP for its locally connected hosts and
thus avoid sending any Shared Tree Joins (MVPN type 6) into the core.
In this scenario, no PIM registration is needed, since every PE is a
first-hop router as well as the acting RP. Once a source starts to
send multicast data, the PE directly connected to it originates a
Source Active A-D route (route type 5) to all other PEs in the
network. Upon receiving a Source Active A-D route, a PE must cache it
in its local database and also look for any matching interest for
(*,G), where G is the multicast group described in the received
Source Active A-D route. If it finds any such matching entry, it must
originate a C-Multicast route (route type 7) in order to start
receiving traffic from the source PE. This procedure must be repeated
on reception of any further Source Active A-D routes.
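The Source Active caching and matching logic of this SPT-only mode
can be summarized in the following non-normative Python sketch; class
and callback names are illustrative only:

   # Non-normative sketch of the SPT-only mode described above: each
   # PE caches Source Active A-D routes and matches them against local
   # (*,G) interest in order to originate Source Tree Joins (type 7).
   class SptOnlyPe:
       def __init__(self):
           self.sa_cache = set()        # cached (c_s, c_g) from SA A-D
           self.star_g_interest = set() # groups with local (*,G) joins

       def on_source_active(self, c_s, c_g, originate_type7_join):
           self.sa_cache.add((c_s, c_g))
           if c_g in self.star_g_interest:
               originate_type7_join(c_s, c_g)  # pull from source PE

       def on_local_star_g_join(self, c_g, originate_type7_join):
           self.star_g_interest.add(c_g)
           for (s, g) in sorted(self.sa_cache):
               if g == c_g:
                   originate_type7_join(s, g)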
10.2 DCs with a mix of IGMP/MLD hosts & multicast routers running
     PIM-SSM

This scenario has multicast routers that can send PIM SSM (S,G)
joins. Upon receiving such a join, if the source described in the
join is learnt to be behind an MVPN peer PE, the local PE originates
a C-Multicast join (route type 7) towards the source PE. It is
expected that the PIM SSM group ranges are kept separate from the ASM
range for which IGMP hosts can send (*,G) joins; hence, both ASM and
SSM groups operate without any overlap. No RP is needed for SSM-range
groups, and the Shortest-Path Tree rooted at the source is built once
receiver interest is known.

10.3 DCs with a mix of IGMP/MLD hosts & multicast routers running
     PIM-ASM

This scenario includes the reception of PIM (*,G) joins on a PE's
local AC. These joins are handled similarly to the IGMP (*,G) joins
explained in the sections above. Another interesting case that can
arise here is when one of the tenant routers acts as the RP for some
of the ASM groups. In such a scenario, an Upstream Multicast Hop
(UMH) will be elected by the other PEs in order to send C-Multicast
routes (route type 6). All procedures described in [RFC6513] with
respect to UMH should be used to avoid traffic duplication due to
incoherent selection of the RP-PE by different receiver PEs.

10.4 DCs with a mix of IGMP/MLD hosts & multicast routers running
     PIM-Bidir

Creating bidirectional (*,G) trees is useful when a customer wants
the least amount of control state in the network. On the downside,
all receivers for a particular multicast group receive traffic from
all sources sending to that group. For the purposes of this document,
all procedures described in [RFC6513] and [RFC6514] apply when
PIM-Bidir is used.

11. IANA Considerations

This document requires no additional IANA considerations beyond what
is already described in [RFC7432].

12. Security Considerations

All the security considerations in [RFC7432] apply directly to this
document, because this document leverages the [RFC7432] control plane
and its associated procedures.

13. Acknowledgements

The authors would like to thank Samir Thoria, Ashutosh Gupta,
Niloofar Fazlollahi, and Aamod Vyavaharkar for their discussions and
contributions.

14. References

14.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC7024] Jeng, H., Uttaro, J., Jalil, L., Decraene, B., Rekhter,
          Y., and R. Aggarwal, "Virtual Hub-and-Spoke in BGP/MPLS
          VPNs", RFC 7024, October 2013.

[RFC7432] Sajassi, A., et al., "BGP MPLS-Based Ethernet VPN",
          RFC 7432, February 2015.

[RFC6513] Rosen, E., et al., "Multicast in MPLS/BGP IP VPNs",
          RFC 6513, February 2012.

[RFC6514] Aggarwal, R., et al., "BGP Encodings and Procedures for
          Multicast in MPLS/BGP IP VPNs", RFC 6514, February 2012.

14.2. Informative References

[RFC7080] Sajassi, A., et al., "Virtual Private LAN Service (VPLS)
          Interoperability with Provider Backbone Bridges",
          RFC 7080, December 2013.

[RFC7209] Sajassi, A., et al., "Requirements for Ethernet VPN
          (EVPN)", RFC 7209, May 2014.

[RFC4389] Thaler, D., et al., "Neighbor Discovery Proxies (ND
          Proxy)", RFC 4389, April 2006.

[RFC4761] Kompella, K., et al., "Virtual Private LAN Service (VPLS)
          Using BGP for Auto-Discovery and Signaling", RFC 4761,
          January 2007.

[OVERLAY] Sajassi, A., et al., "A Network Virtualization Overlay
          Solution using EVPN", draft-ietf-bess-evpn-overlay-01,
          work in progress, February 2015.

[EVPN-IRB] Sajassi, A., et al., "Integrated Routing and Bridging in
          EVPN", draft-ietf-bess-evpn-inter-subnet-forwarding, work
          in progress.

[EVPN-IRB-MCAST] Lin, W., et al., "EVPN Optimized Inter-Subnet
          Multicast (OISM) Forwarding", draft-lin-bess-evpn-irb-mcast,
          work in progress.

[INTERCON-EVPN] Rabadan, J., et al., "Interconnect Solution for EVPN
          Overlay Networks", draft-ietf-bess-dci-evpn-overlay-04,
          work in progress, September 2016.

[TUNNEL-ENCAP] Rosen, E., et al., "The BGP Tunnel Encapsulation
          Attribute", draft-ietf-idr-tunnel-encaps-06, work in
          progress, June 2017.

15. Authors' Addresses

Ali Sajassi
Cisco
170 West Tasman Drive
San Jose, CA 95134, US
Email: sajassi@cisco.com

Samir Thoria
Cisco
170 West Tasman Drive
San Jose, CA 95134, US
Email: sthoria@cisco.com

Niloofar Fazlollahi
Cisco
170 West Tasman Drive
San Jose, CA 95134, US
Email: nifazlol@cisco.com

Ashutosh Gupta
Avi Networks
Email: ashutosh@avinetworks.com