NVO3 working group                                         A. Ghanwani
Internet Draft                                                    Dell
Intended status: Informational                               L. Dunbar
Expires: November 8, 2017                                   M. McBride
                                                                Huawei
                                                             V. Bannai
                                                                Google
                                                           R. Krishnan
                                                                  Dell

                                                          May 12, 2017

      A Framework for Multicast in Network Virtualization Overlays
                  draft-ietf-nvo3-mcast-framework-08

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. This document may not be modified, and derivative works of it may not be created, except to publish it as an RFC and to translate it into languages other than English.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

This Internet-Draft will expire on November 8, 2017.

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Abstract

This document discusses a framework for supporting multicast traffic in a network that uses Network Virtualization Overlays (NVO3). Both infrastructure multicast and application-specific multicast are discussed. It describes the various mechanisms that can be used for delivering such traffic as well as the data plane and control plane considerations for each of the mechanisms.

Table of Contents

1. Introduction
   1.1. Infrastructure multicast
   1.2. Application-specific multicast
   1.3. Terminology clarification
2. Acronyms
3. Multicast mechanisms in networks that use NVO3
   3.1. No multicast support
   3.2. Replication at the source NVE
   3.3. Replication at a multicast service node
   3.4. IP multicast in the underlay
   3.5. Other schemes
4. Simultaneous use of more than one mechanism
5. Other issues
   5.1. Multicast-agnostic NVEs
   5.2. Multicast membership management for DC with VMs
6. Summary
7. Security Considerations
8. IANA Considerations
9. References
   9.1. Normative References
   9.2. Informative References
10. Acknowledgments

1. Introduction

Network virtualization using Overlays over Layer 3 (NVO3) is a technology that is used to address issues that arise in building large, multitenant data centers that make extensive use of server virtualization [RFC7364].

This document provides a framework for supporting multicast traffic in a network that uses Network Virtualization using Overlays over Layer 3 (NVO3). Both infrastructure multicast and application-specific multicast are considered. It describes the various mechanisms and considerations that can be used for delivering such traffic in networks that use NVO3.

The reader is assumed to be familiar with the terminology as defined in the NVO3 Framework document [RFC7365] and the NVO3 Architecture document [RFC8014].
1.1. Infrastructure multicast

Infrastructure multicast is a capability needed by networking services such as Address Resolution Protocol (ARP), Neighbor Discovery (ND), Dynamic Host Configuration Protocol (DHCP), and multicast Domain Name Server (mDNS). Sections 5 and 6 of [RFC3819] provide a detailed description of some of these infrastructure multicast mechanisms. It is possible to provide solutions for these that do not involve multicast in the underlay network. In the case of ARP/ND, a Network Virtualization Authority (NVA) can be used for distributing the mappings of IP address to MAC address to all Network Virtualization Edges (NVEs). The NVEs can then trap ARP Request/ND Neighbor Solicitation messages from the TSs that are attached to them and respond to them, thereby eliminating the need for broadcast/multicast of such messages. In the case of DHCP, the NVE can be configured to forward these messages using a helper function.

Of course, it is possible to support all of these infrastructure multicast protocols natively if the underlay provides multicast transport. However, even in the presence of multicast transport, it may be beneficial to use the optimizations mentioned above to reduce the amount of such traffic in the network.

1.2. Application-specific multicast

Application-specific multicast traffic is originated and consumed by user applications. Application-specific multicast, which can be either Source-Specific Multicast (SSM) or Any-Source Multicast (ASM) [RFC3569], has the following characteristics:

1. Receiver hosts are expected to subscribe to multicast content using protocols such as IGMP [RFC3376] (IPv4) or MLD (IPv6). Multicast sources and listeners participate in these protocols using addresses that are in the Tenant System address domain.

2. The list of multicast listeners for each multicast group is not known in advance. Therefore, it may not be possible for an NVA to get the list of participants for each multicast group ahead of time.

1.3. Terminology clarification

In this document, the terms host, tenant system (TS), and virtual machine (VM) are used interchangeably to represent an end station that originates or consumes data packets.

2. Acronyms

ASM: Any-Source Multicast

IGMP: Internet Group Management Protocol

LISP: Locator/ID Separation Protocol

MSN: Multicast Service Node

RLOC: Routing Locator

NVA: Network Virtualization Authority

NVE: Network Virtualization Edge

NVGRE: Network Virtualization using GRE

PIM: Protocol-Independent Multicast

SSM: Source-Specific Multicast

TS: Tenant System

VM: Virtual Machine

VN: Virtual Network

VTEP: VXLAN Tunnel End Point

VXLAN: Virtual eXtensible LAN

3. Multicast mechanisms in networks that use NVO3

In NVO3 environments, traffic between NVEs is transported using an encapsulation such as Virtual eXtensible Local Area Network (VXLAN) [RFC7348] [VXLAN-GPE], Network Virtualization Using Generic Routing Encapsulation (NVGRE) [RFC7637], Geneve [Geneve], Generic UDP Encapsulation (GUE) [GUE], etc.

What makes NVO3 different from other networks is that some NVEs, especially NVEs implemented on servers, might not support PIM or other native multicast mechanisms. They might simply encapsulate the data packets from VMs with an outer unicast header. Therefore, it is important for networks using NVO3 to have mechanisms to support multicast as a network capability for NVEs, to map multicast traffic from VMs (users/applications) to an equivalent multicast capability inside the NVE, or to determine the outer destination address if the NVE does not support native multicast (e.g., PIM) or IGMP.
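The choice just described can be illustrated with a small sketch. The following Python fragment is purely illustrative and not part of the framework; the names VnMulticastState and outer_destinations, as well as the example addresses, are hypothetical. It only shows the two ways an ingress NVE may derive the outer destination(s) for a tenant multicast frame: an underlay multicast group (Section 3.4) or a unicast replication list supplied by the NVA (Section 3.2).

   # Illustrative sketch only; the type and function names and the
   # addresses are hypothetical, not defined by NVO3.
   from dataclasses import dataclass, field
   from typing import List, Optional

   @dataclass
   class VnMulticastState:
       underlay_group: Optional[str] = None   # underlay IP multicast group, if provisioned
       replication_list: List[str] = field(default_factory=list)  # unicast NVE addresses from the NVA

   def outer_destinations(state: VnMulticastState) -> List[str]:
       """Outer IP destination(s) used to encapsulate one inner multicast/broadcast frame."""
       if state.underlay_group:
           # Underlay IP multicast (Section 3.4): one copy, multicast outer DA.
           return [state.underlay_group]
       # Otherwise, head-end replication (Section 3.2): one unicast copy per egress NVE.
       return list(state.replication_list)

   # A VN with no underlay multicast support and three egress NVEs:
   print(outer_destinations(VnMulticastState(
       replication_list=["192.0.2.11", "192.0.2.12", "192.0.2.13"])))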
Besides the need to support ARP and ND, there are several applications that require the support of multicast and/or broadcast in data centers [DC-MC]. With NVO3, there are many possible ways that multicast may be handled in such networks. We discuss some of the attributes of the following four methods:

1. No multicast support.

2. Replication at the source NVE.

3. Replication at a multicast service node.

4. IP multicast in the underlay.

These methods are briefly mentioned in the NVO3 Framework [RFC7365] and NVO3 Architecture [RFC8014] documents. This document provides more details about the basic mechanisms underlying each of these methods and discusses the issues and tradeoffs of each.

We note that other methods are also possible, such as [EDGE-REP], but we focus on the above four because they are the most common.

3.1. No multicast support

In this scenario, there is no support whatsoever for multicast traffic when using the overlay. This method can only work if the following conditions are met:

1. All of the application traffic in the network is unicast traffic, and the only multicast/broadcast traffic is from ARP/ND protocols.

2. An NVA is used by the NVEs to determine the mapping of a given Tenant System's (TS's) MAC/IP address to its NVE. In other words, there is no data plane learning. Address resolution requests via ARP/ND that are issued by the TSs must be resolved by the NVE that they are attached to.

With this approach, it is not possible to support application-specific multicast. However, certain multicast/broadcast applications such as DHCP can be supported by use of a helper function in the NVE.

The main drawback of this approach, even for unicast traffic, is that it is not possible to initiate communication with a TS for which a mapping to an NVE does not already exist in the NVA. This is a problem in the case where the NVE is implemented in a physical switch and the TS is a physical end station that has not registered with the NVA.

3.2. Replication at the source NVE

With this method, the overlay attempts to provide a multicast service without requiring any specific support from the underlay, other than that of a unicast service. A multicast or broadcast transmission is achieved by replicating the packet at the source NVE and making one copy for each destination NVE that the multicast packet must be sent to.

For this mechanism to work, the source NVE must know, a priori, the IP addresses of all destination NVEs that need to receive the packet. For the purpose of ARP/ND, this would involve knowing the IP addresses of all the NVEs that have TSs in the virtual network (VN) of the TS that generated the request. For the support of application-specific multicast traffic, a method similar to the receiver-site registration for a particular multicast group described in [LISP-Signal-Free] can be used. The registrations from different receiver sites can be merged at the NVA, which can construct a multicast replication list inclusive of all NVEs to which receivers for a particular multicast group are attached. The replication list for each specific multicast group is maintained by the NVA. Note that using LISP signal-free multicast does not necessarily mean the head end (i.e., the NVE) must do the replication: if the mapping database (i.e., the NVA) indicates that packets are encapsulated to multicast RLOCs, then no replication happens at the NVE.
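To make the preceding description concrete, the sketch below shows how per-group registrations reported by egress NVEs might be merged at the NVA into the replication list handed to ingress NVEs. It is a minimal illustration under assumed names (the Nva class and its methods are hypothetical, not defined by any NVO3 specification), and the addresses are documentation examples.

   # Hypothetical sketch of an NVA merging receiver-site registrations
   # (in the style of [LISP-Signal-Free]) into per-group replication lists.
   from collections import defaultdict
   from typing import List

   class Nva:
       def __init__(self):
           # (vn_id, tenant multicast group) -> set of egress NVE IP addresses
           self._members = defaultdict(set)

       def register(self, vn_id: int, group: str, nve_ip: str) -> None:
           """Record that an NVE has at least one attached TS joined to 'group' in 'vn_id'."""
           self._members[(vn_id, group)].add(nve_ip)

       def withdraw(self, vn_id: int, group: str, nve_ip: str) -> None:
           """Remove the NVE when its last interested TS leaves the group."""
           self._members[(vn_id, group)].discard(nve_ip)

       def replication_list(self, vn_id: int, group: str) -> List[str]:
           """Replication list handed to ingress NVEs for this (VN, group)."""
           return sorted(self._members[(vn_id, group)])

   nva = Nva()
   nva.register(100, "233.252.0.1", "192.0.2.11")
   nva.register(100, "233.252.0.1", "192.0.2.12")
   print(nva.replication_list(100, "233.252.0.1"))   # ['192.0.2.11', '192.0.2.12']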
The receiver-site registration is achieved by egress NVEs performing IGMP/MLD snooping to maintain state for which attached TSs have subscribed to a given IP multicast group. When the members of a multicast group are outside the NVO3 domain, it is necessary for NVO3 gateways to keep track of the remote members of each multicast group. The NVEs and NVO3 gateways then communicate the multicast groups that are of interest to the NVA. If the membership is not communicated to the NVA, and if it is necessary to prevent hosts attached to an NVE that have not subscribed to a multicast group from receiving the multicast traffic, the NVE would need to maintain multicast group membership information.

In the absence of IGMP/MLD snooping, the traffic would be delivered to all TSs that are part of the VN.

In multihoming environments, i.e., those in which a TS is attached to more than one NVE, the NVA would be expected to provide information to all of the NVEs under its control about all of the NVEs to which such a TS is attached. The ingress NVE can then choose any one of those egress NVEs for the data frames destined towards the TS.

This method requires the sending of multiple copies of the same packet to all NVEs that participate in the VN. If, for example, a tenant subnet is spread across 50 NVEs, the packet would have to be replicated 50 times at the source NVE. This also creates an issue with the forwarding performance of the NVE.

Note that this method is similar to what was used in Virtual Private LAN Service (VPLS) [RFC4762] prior to support of Multiprotocol Label Switching (MPLS) multicast [RFC7117]. While there are some similarities between MPLS Virtual Private Network (VPN) and NVO3, there are some key differences:

- The Customer Edge (CE) to Provider Edge (PE) attachment in VPNs is somewhat static, whereas in a DC that allows VMs to migrate anywhere, the TS attachment to an NVE is much more dynamic.

- The number of PEs to which a single VPN customer is attached in an MPLS VPN environment is normally far less than the number of NVEs to which a VN's VMs are attached in a DC.

When a VPN customer has multiple multicast groups, "Multicast VPN" [RFC6513] combines all of those multicast groups within each VPN client into a single multicast group in the MPLS (or VPN) core. The result is that messages from any of the multicast groups belonging to one VPN customer will reach all the PE nodes of that client. In other words, any message belonging to any multicast group under customer X will reach all PEs of customer X. When customer X is attached to only a handful of PEs, the use of this approach does not result in excessive wastage of bandwidth in the provider's network.
In a DC environment, a typical server/hypervisor-based virtual switch may only support on the order of tens of VMs (as of this writing). A subnet with N VMs may be, in the worst case, spread across N vSwitches. Using the "MPLS VPN multicast" approach in such a scenario would require the creation of a multicast group in the core for this VN to reach all N NVEs. If only a small percentage of this client's VMs participate in application-specific multicast, a great number of NVEs will receive multicast traffic that is not forwarded to any of their attached VMs, resulting in considerable wastage of bandwidth.

Therefore, the Multicast VPN solution may not scale in a DC environment with dynamic attachment of virtual networks to NVEs and a greater number of NVEs for each virtual network.

3.3. Replication at a multicast service node

With this method, all multicast packets would be sent using a unicast tunnel encapsulation from the ingress NVE to a Multicast Service Node (MSN). The MSN, in turn, would create multiple copies of the packet and would deliver a copy, using a unicast tunnel encapsulation, to each of the NVEs that are part of the multicast group for which the packet is intended.

This mechanism is similar to that used by the Asynchronous Transfer Mode (ATM) Forum's LAN Emulation (LANE) specification [LANE]. The MSN is similar to the Rendezvous Point (RP) in PIM-SM, but different in that the user data traffic is carried by the NVO3 tunnels.

The following are the possible ways for the MSN to get the membership information for each multicast group (an illustrative sketch follows the list):

- The MSN can obtain this membership information from the IGMP/MLD report messages sent by TSs in response to IGMP/MLD query messages from the MSN. The IGMP/MLD query messages are sent from the MSN to the NVEs, which then forward the query messages to the TSs attached to them. An IGMP/MLD query message sent out by the MSN to an NVE is encapsulated with the MSN address in the outer source address field and the address of the NVE in the outer destination address field. The encapsulated IGMP/MLD query message also has the VNID of the virtual network (VN) to which the TSs belong in the outer header and a multicast address in the inner destination address field. Upon receiving the encapsulated IGMP/MLD query message, the NVE establishes a mapping "MSN address" <-> "multicast address", decapsulates the received encapsulated IGMP/MLD message, and multicasts the decapsulated query message to the TSs that belong to the VN under the NVE. An IGMP/MLD report message sent by a TS includes the multicast address and the address of the TS. With the proper "MSN address" <-> "multicast address" mapping, the NVEs can then encapsulate all multicast data frames destined to the "multicast address" with the address of the MSN in the outer destination address field.

- The MSN can obtain the membership information from the NVEs that have the capability to establish multicast groups by snooping native IGMP/MLD messages (note that the communication must be specific to the multicast addresses), or by having the NVA obtain the information from the NVEs and in turn have the MSN communicate with the NVA. This approach requires an additional protocol between the MSN and the NVEs.
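The sketch below combines the two steps described in this section: membership learning at the MSN and unicast re-encapsulation of one copy per interested egress NVE. It is a minimal illustration under assumed names (Msn, learn_report, replicate); it is not a protocol definition, and it assumes the ingress NVE serves its own local receivers directly so that the MSN does not reflect a copy back to it.

   # Hypothetical sketch of replication at a multicast service node (MSN).
   from collections import defaultdict
   from typing import List, Tuple

   class Msn:
       def __init__(self):
           # (vn_id, tenant group) -> set of egress NVE IPs with interested receivers
           self.members = defaultdict(set)

       def learn_report(self, vn_id: int, group: str, nve_ip: str) -> None:
           """Record membership learned from an IGMP/MLD report relayed by an NVE."""
           self.members[(vn_id, group)].add(nve_ip)

       def replicate(self, vn_id: int, group: str, ingress_nve: str,
                     payload: bytes) -> List[Tuple[str, bytes]]:
           """One (outer unicast destination, packet) pair per egress NVE; the
           ingress NVE is skipped on the assumption that it serves its own
           local receivers directly."""
           return [(nve, payload)
                   for nve in sorted(self.members[(vn_id, group)])
                   if nve != ingress_nve]

   msn = Msn()
   msn.learn_report(100, "233.252.0.1", "192.0.2.21")
   msn.learn_report(100, "233.252.0.1", "192.0.2.22")
   copies = msn.replicate(100, "233.252.0.1", ingress_nve="192.0.2.11", payload=b"...")
   print([dst for dst, _ in copies])   # ['192.0.2.21', '192.0.2.22']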
Unlike the method described in Section 3.2, there is no performance impact at the ingress NVE, nor are there any issues with multiple copies of the same packet from the source NVE to the MSN. However, there remain issues with multiple copies of the same packet on links that are common to the paths from the MSN to each of the egress NVEs. Additional issues that are introduced with this method include the availability of the MSN, methods to scale the services offered by the MSN, and the sub-optimality of the delivery paths.

Finally, the IP address of the source NVE must be preserved in packet copies created at the MSN if data plane learning is in use. This could create problems if IP source address reverse path forwarding (RPF) checks are in use.

3.4. IP multicast in the underlay

In this method, the underlay supports IP multicast, and the ingress NVE encapsulates the packet with the appropriate IP multicast address in the tunnel encapsulation header for delivery to the desired set of NVEs. The protocol in the underlay could be any variant of Protocol Independent Multicast (PIM), or a protocol-dependent multicast such as [ISIS-Multicast].

If an NVE connects to its attached TSs via a Layer 2 network, there are multiple ways for NVEs to support the application-specific multicast:

- The NVE supports only the basic IGMP/MLD snooping function, leaving the TSs' routers to handle the application-specific multicast. This scheme does not utilize the underlay IP multicast protocols.

- The NVE acts as a pseudo multicast router for the directly attached VMs and supports the proper mapping of IGMP/MLD messages to the messages needed by the underlay IP multicast protocols.

With this method, there are none of the issues with the method described in Section 3.2.

With PIM Sparse Mode (PIM-SM), the number of flows required would be (n*g), where n is the number of source NVEs that source packets for the group and g is the number of groups. Bidirectional PIM (BIDIR-PIM) would offer better scalability, with the number of flows required being g. Unfortunately, many vendors still do not fully support BIDIR-PIM or have limitations on its implementation. [RFC6831] has a good description of using SSM as an alternative to BIDIR-PIM, provided that the VTEP/NVE devices have a way to learn of each other's IP addresses so that they can join all of the SSM Shortest Path Trees (SPTs) to create/maintain an underlay SSM IP multicast tunnel solution.

In the absence of any additional mechanism (e.g., using an NVA for address resolution), for optimal delivery there would have to be a separate group for each tenant, plus a separate group for each multicast address (used for multicast applications) within a tenant.

An additional consideration is that only the lower 23 bits of an IPv4 multicast address (or the lower 32 bits of an IPv6 multicast address) are mapped to the outer MAC address, so if there is equipment that prunes multicasts at Layer 2, there will be some aliasing; a short worked example is given below. Finally, a mechanism to efficiently provision such addresses for each group would be required.

There are additional optimizations that are possible, but they come with their own restrictions. For example, a set of tenants may be restricted to some subset of NVEs, and they could all share the same outer IP multicast group address. This, however, introduces a problem of sub-optimal delivery (even if a particular tenant within the group of tenants does not have a presence on one of the NVEs that another tenant does, the former's multicast packets would still be delivered to that NVE). It also introduces an additional network management burden to optimize which tenants should be part of the same tenant group (based on the NVEs they share), which somewhat dilutes the value proposition of NVO3, namely to completely decouple the overlay and physical network design, allowing complete freedom of placement of VMs anywhere within the data center.
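The worked example referred to above demonstrates the Layer 2 aliasing: the standard IPv4 multicast-to-MAC mapping copies only the lower 23 bits of the group address into the 01:00:5e MAC prefix, so distinct outer groups can collide at Layer 2. The helper name and the example group addresses below are chosen purely for illustration.

   # Standard IPv4 multicast-to-MAC mapping: 01:00:5e + lower 23 bits.
   import ipaddress

   def ipv4_mcast_to_mac(group: str) -> str:
       low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF   # keep the lower 23 bits
       mac = (0x01005E << 24) | low23                         # 01:00:5e:xx:xx:xx
       return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

   print(ipv4_mcast_to_mac("233.252.0.1"))   # 01:00:5e:7c:00:01
   print(ipv4_mcast_to_mac("239.124.0.1"))   # 01:00:5e:7c:00:01 -- aliases with the group above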
Multicast schemes such as Bit Index Explicit Replication (BIER) [BIER-ARCH] may be able to provide optimizations by allowing the underlay network to provide optimum multicast delivery without requiring routers in the core of the network to maintain per-multicast-group state.

3.5. Other schemes

There are still other mechanisms that may be used that attempt to combine some of the advantages of the above methods by offering multiple replication points, each with a limited degree of replication [EDGE-REP]. Such schemes offer a trade-off between the amount of replication at an intermediate node (router) versus performing all of the replication at the source NVE or all of the replication at a multicast service node.

4. Simultaneous use of more than one mechanism

While the mechanisms discussed in the previous section have been discussed individually, it is possible for implementations to rely on more than one of them. For example, the method of Section 3.1 could be used for minimizing ARP/ND, while at the same time multicast applications may be supported by one, or a combination of, the other methods. For small multicast groups, the methods of source NVE replication or the use of a multicast service node may be attractive, while for larger multicast groups, the use of multicast in the underlay may be preferable.

5. Other issues

5.1. Multicast-agnostic NVEs

Some hypervisor-based NVEs do not process or recognize IGMP/MLD frames; i.e., those NVEs simply encapsulate the IGMP/MLD messages in the same way as they do for regular data frames.

By default, a TS's router periodically sends IGMP/MLD query messages to all the hosts in the subnet to trigger the hosts that are interested in the multicast stream to send back IGMP/MLD reports. In order for the MSN to get the updated multicast group information, the MSN can also send an IGMP/MLD query message comprising a client-specific multicast address, encapsulated in an overlay header, to all of the NVEs to which the TSs in the VN are attached.

However, the MSN may not always be aware of the client-specific multicast addresses. In order to perform multicast filtering, the MSN has to snoop the IGMP/MLD messages between the TSs and their corresponding routers to maintain the multicast membership. In order for the MSN to snoop the IGMP/MLD messages between the TSs and their router, the NVA needs to configure the NVEs to send copies of the IGMP/MLD messages to the MSN in addition to the default behavior of sending them to the TSs' routers; e.g., the NVA has to inform the NVEs to encapsulate data frames with a destination address of 224.0.0.2 (destination address of IGMP report) to both the TSs' router and the MSN.
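A minimal sketch of the NVE behavior that the NVA configures here is shown below. The configuration object and function names are hypothetical, and the sketch assumes the multicast-agnostic NVE simply emits two unicast-encapsulated copies of each IGMP/MLD frame, one toward the TSs' router and one toward the MSN, without parsing the frame itself.

   # Hypothetical sketch of a multicast-agnostic NVE duplicating IGMP/MLD
   # messages toward the TSs' router and the MSN, per NVA configuration.
   from dataclasses import dataclass
   from typing import List, Tuple

   @dataclass
   class NveConfig:
       router_dest: str    # outer destination toward the TSs' router (default behavior)
       msn_address: str    # MSN that must also receive copies of IGMP/MLD messages

   def forward_igmp_mld(cfg: NveConfig, inner_frame: bytes) -> List[Tuple[str, bytes]]:
       """Copies emitted for one IGMP/MLD frame received from a local TS."""
       return [(cfg.router_dest, inner_frame), (cfg.msn_address, inner_frame)]

   cfg = NveConfig(router_dest="192.0.2.31", msn_address="192.0.2.99")
   copies = forward_igmp_mld(cfg, b"IGMP/MLD report ...")
   print([dst for dst, _ in copies])   # ['192.0.2.31', '192.0.2.99']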
This process is similar to the source replication described in Section 3.2, except that the NVEs only replicate the messages to the TSs' router and the MSN.

5.2. Multicast membership management for DC with VMs

For data centers with virtualized servers, VMs can be added, deleted, or moved very easily. When VMs are added, deleted, or moved, the NVEs to which the VMs are attached change.

When a VM is deleted from an NVE or a new VM is added to an NVE, the VM management system should notify the MSN to send the IGMP/MLD query messages to the relevant NVEs (as described in Section 3.3), so that the multicast membership can be updated promptly. Otherwise, if the VMs' attachment to NVEs changes within the duration of the configured default time interval that the TSs' routers use for IGMP/MLD queries, multicast data may not reach the VM(s) that moved.

6. Summary

This document has identified various mechanisms for supporting application-specific multicast in networks that use NVO3. It highlights the basics of each mechanism and some of the issues with them. As solutions are developed, the protocols would need to consider the use of these mechanisms, and co-existence may be a consideration. It also highlights some of the requirements for supporting multicast applications in an NVO3 network.

7. Security Considerations

This draft does not introduce any new security considerations beyond what is described in the NVO3 Architecture [RFC8014].

8. IANA Considerations

This document requires no IANA actions. RFC Editor: Please remove this section before publication.

9. References

9.1. Normative References

[RFC3376] Cain, B., et al., "Internet Group Management Protocol, Version 3", October 2002.

[RFC6513] Rosen, E., et al., "Multicast in MPLS/BGP IP VPNs", February 2012.

[RFC7364] Narten, T., et al., "Problem Statement: Overlays for Network Virtualization", October 2014.

[RFC7365] Lasserre, M., et al., "Framework for Data Center (DC) Network Virtualization", October 2014.

[RFC8014] Black, D., et al., "An Architecture for Data-Center Network Virtualization over Layer 3 (NVO3)", RFC 8014, December 2016.

9.2. Informative References

[RFC3569] Bhattacharyya, S., Ed., "An Overview of Source-Specific Multicast (SSM)", July 2003.

[RFC3819] Karn, P., Ed., et al., "Advice for Internet Subnetwork Designers", July 2004.

[RFC4762] Lasserre, M. and Kompella, V. (Eds.), "Virtual Private LAN Service (VPLS) Using Label Distribution Protocol (LDP) Signaling", January 2007.

[RFC6831] Farinacci, D., et al., "The Locator/ID Separation Protocol (LISP) for Multicast Environments", January 2013.

[RFC7117] Aggarwal, R., et al., "Multicast in Virtual Private LAN Service (VPLS)", February 2014.

[RFC7348] Mahalingam, M., et al., "Virtual eXtensible Local Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks", August 2014.

[RFC7637] Garg, P. and Wang, Y. (Eds.), "NVGRE: Network Virtualization Using Generic Routing Encapsulation", September 2015.

[BIER-ARCH] Wijnands, IJ., Ed., et al., "Multicast Using Bit Index Explicit Replication", work in progress, January 2016.

[DC-MC] McBride, M. and Liu, H., "Multicast in the Data Center Overview", work in progress, July 2012.

[EDGE-REP] Marques, P., et al., "Edge Multicast Replication for BGP IP VPNs", work in progress, June 2012.

[Geneve] Gross, J. and Ganga, I. (Eds.), "Geneve: Generic Network Virtualization Encapsulation", work in progress, January 2016.
[GUE] Herbert, T., et al., "Generic UDP Encapsulation", work in progress, December 2015.

[ISIS-Multicast] Yong, L., et al., "ISIS Protocol Extension for Building Distribution Trees", work in progress, October 2014.

[LANE] "LAN Emulation over ATM", The ATM Forum, af-lane-0021.000, January 1995.

[LISP-Signal-Free] Moreno, V. and Farinacci, D., "Signal-Free LISP Multicast", work in progress, April 2016.

[VXLAN-GPE] Kreeger, L. and Elzur, U. (Eds.), "Generic Protocol Extension for VXLAN", work in progress, April 2016.

10. Acknowledgments

Many thanks are due to Dino Farinacci, Erik Nordmark, Lucy Yong, Nicolas Bouliane, Saumya Dikshit, Joe Touch, Olufemi Komolafe, and Matthew Bocci for their valuable comments and suggestions.

This document was prepared using 2-Word-v2.0.template.dot.

Authors' Addresses

Anoop Ghanwani
Dell
Email: anoop@alumni.duke.edu

Linda Dunbar
Huawei Technologies
5340 Legacy Drive, Suite 1750
Plano, TX 75024, USA
Phone: (469) 277 5840
Email: ldunbar@huawei.com

Mike McBride
Huawei Technologies
Email: mmcbride7@gmail.com

Vinay Bannai
Google
Email: vbannai@gmail.com

Ram Krishnan
Dell
Email: ramkri123@gmail.com