NVO3 working group                                            A. Ghanwani
Internet Draft                                                        Dell
Intended status: Informational                                   L. Dunbar
Expires: April 26, 2018                                         M. McBride
                                                                    Huawei
                                                                 V. Bannai
                                                                    Google
                                                               R. Krishnan
                                                                      Dell

                                                          October 23, 2017

        A Framework for Multicast in Network Virtualization Overlays
                     draft-ietf-nvo3-mcast-framework-11

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. This document may not be modified, and derivative works of it may not be created, except to publish it as an RFC and to translate it into languages other than English.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

This Internet-Draft will expire on April 26, 2018.

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Abstract

This document provides a framework for supporting multicast traffic in a network that uses Network Virtualization Overlays (NVO3). Both infrastructure multicast and application-specific multicast are discussed. It describes the various mechanisms that can be used for delivering such traffic as well as the data plane and control plane considerations for each of the mechanisms.

Table of Contents

   1. Introduction...................................................3
      1.1. Infrastructure multicast..................................3
      1.2. Application-specific multicast............................4
      1.3. Terminology clarification.................................4
   2. Acronyms.......................................................4
   3. Multicast mechanisms in networks that use NVO3.................5
      3.1. No multicast support......................................6
      3.2. Replication at the source NVE.............................7
      3.3. Replication at a multicast service node...................9
      3.4. IP multicast in the underlay.............................10
      3.5. Other schemes............................................12
   4. Simultaneous use of more than one mechanism...................12
   5. Other issues..................................................13
      5.1. Multicast-agnostic NVEs..................................13
      5.2. Multicast membership management for DC with VMs..........13
   6. Summary.......................................................14
   7. Security Considerations.......................................14
   8. IANA Considerations...........................................14
   9. References....................................................14
      9.1. Normative References.....................................14
      9.2. Informative References...................................15
   10. Acknowledgments..............................................16

1. Introduction

Network virtualization using Overlays over Layer 3 (NVO3) [RFC7365] is a technology that is used to address issues that arise in building large, multitenant data centers that make extensive use of server virtualization [RFC7364].

This document provides a framework for supporting multicast traffic in a network that uses Network Virtualization using Overlays over Layer 3 (NVO3). Both infrastructure multicast and application-specific multicast are considered. It describes the various mechanisms and considerations that can be used for delivering such traffic in networks that use NVO3.

The reader is assumed to be familiar with the terminology as defined in the NVO3 Framework document [RFC7365] and NVO3 Architecture document [RFC8014].

1.1. Infrastructure multicast

Infrastructure multicast is a capability needed by networking services such as Address Resolution Protocol (ARP), Neighbor Discovery (ND), Dynamic Host Configuration Protocol (DHCP), multicast Domain Name System (mDNS), etc. Sections 5 and 6 of [RFC3819] provide a detailed description of some of this infrastructure multicast. It is possible to provide solutions for these that do not involve multicast in the underlay network.
In the case of ARP/ND, a network virtualization authority (NVA) can be used for distributing the mappings of IP address to MAC address to all network virtualization edges (NVEs). The NVEs can then trap ARP Request/ND Neighbor Solicitation messages from the Tenant Systems (TSs) that are attached to them and respond to them, thereby eliminating the need for broadcast/multicast of such messages. In the case of DHCP, the NVE can be configured to forward these messages using a helper function.

Of course, it is possible to support all of these infrastructure multicast protocols natively if the underlay provides multicast transport. However, even in the presence of multicast transport, it may be beneficial to use the optimizations mentioned above to reduce the amount of such traffic in the network.

1.2. Application-specific multicast

Application-specific multicast traffic is originated and consumed by user applications. Application-specific multicast, which can be either Source-Specific Multicast (SSM) or Any-Source Multicast (ASM) [RFC3569], has the following characteristics:

1. Receiver hosts are expected to subscribe to multicast content using protocols such as IGMP [RFC3376] (IPv4) or MLD [RFC2710] (IPv6). Multicast sources and listeners participate in these protocols using addresses that are in the Tenant System address domain.

2. The list of multicast listeners for each multicast group is not known in advance. Therefore, it may not be possible for an NVA to get the list of participants for each multicast group ahead of time.

1.3. Terminology clarification

In this document, the terms host, tenant system (TS), and virtual machine (VM) are used interchangeably to represent an end station that originates or consumes data packets.

2. Acronyms

ASM: Any-Source Multicast
IGMP: Internet Group Management Protocol
LISP: Locator/ID Separation Protocol
MSN: Multicast Service Node
RLOC: Routing Locator
NVA: Network Virtualization Authority
NVE: Network Virtualization Edge
NVGRE: Network Virtualization using GRE
PIM: Protocol-Independent Multicast
SSM: Source-Specific Multicast
TS: Tenant System
VM: Virtual Machine
VN: Virtual Network
VTEP: VXLAN Tunnel End Point
VXLAN: Virtual eXtensible LAN

3. Multicast mechanisms in networks that use NVO3

In NVO3 environments, traffic between NVEs is transported using an encapsulation such as Virtual eXtensible Local Area Network (VXLAN) [RFC7348, VXLAN-GPE], Network Virtualization Using Generic Routing Encapsulation (NVGRE) [RFC7637], Geneve [Geneve], Generic UDP Encapsulation (GUE) [GUE], etc.

What makes NVO3 different from any other network is that some NVEs, especially NVEs implemented on servers, might not support PIM or other native multicast mechanisms. They might just encapsulate the data packets from VMs with an outer unicast header. Therefore, it is important for networks using NVO3 to have mechanisms to support multicast as a network capability for NVEs, to map multicast traffic from VMs (users/applications) to an equivalent multicast capability inside the NVE, or to figure out the outer destination address if the NVE does not support native multicast (e.g., PIM) or IGMP.
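As a purely illustrative, non-normative sketch of the mapping described above, the following Python fragment shows the decision an ingress NVE might make for a tenant multicast frame: encapsulate once to an underlay IP multicast group if one has been assigned (Section 3.4), and otherwise replicate the frame over a unicast replication list (Section 3.2). All table layouts, helper functions, and addresses here are hypothetical and are not defined by this framework.

   # Illustrative sketch only: ingress NVE handling of a tenant
   # multicast frame.  Names and data structures are hypothetical.

   def send_underlay_multicast(outer_group, vn_id, frame):
       # Stand-in for encapsulating with an outer IP multicast address
       # (Section 3.4) and transmitting a single copy into the underlay.
       print("encap to underlay group", outer_group, "for VNID", vn_id)

   def send_unicast(egress_nve, vn_id, frame):
       # Stand-in for encapsulating with a unicast outer header and
       # sending one copy to a single egress NVE (Section 3.2).
       print("encap unicast copy to NVE", egress_nve, "for VNID", vn_id)

   # State assumed to be provided by the NVA (or learned locally):
   underlay_group = {}    # (vn_id, inner_group) -> outer multicast address
   replication_list = {}  # (vn_id, inner_group) -> [egress NVE addresses]

   def forward_tenant_multicast(vn_id, inner_group, frame):
       outer_group = underlay_group.get((vn_id, inner_group))
       if outer_group is not None:
           send_underlay_multicast(outer_group, vn_id, frame)
           return
       for egress_nve in replication_list.get((vn_id, inner_group), []):
           send_unicast(egress_nve, vn_id, frame)

   # Example with hypothetical values: the NVA assigned an underlay group
   # for tenant group 233.252.0.10 in VN 100.
   underlay_group[(100, "233.252.0.10")] = "239.1.1.1"
   forward_tenant_multicast(100, "233.252.0.10", frame=b"...")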
Besides the need to support ARP and ND, there are several applications that require the support of multicast and/or broadcast in data centers [DC-MC]. With NVO3, there are many possible ways that multicast may be handled in such networks. We discuss some of the attributes of the following four methods:

1. No multicast support.

2. Replication at the source NVE.

3. Replication at a multicast service node.

4. IP multicast in the underlay.

These methods are briefly mentioned in the NVO3 Framework [RFC7365] and NVO3 Architecture [RFC8014] documents. This document provides more details about the basic mechanisms underlying each of these methods and discusses the issues and trade-offs of each.

We note that other methods are also possible, such as [EDGE-REP], but we focus on the above four because they are the most common.

It is worth noting that, when selecting a multicast replication strategy, it is useful to consider the interaction with any multicast congestion control that applications may be using to obtain the desired system dynamics. In addition, multicast traffic follows the same rules for Explicit Congestion Notification (ECN) as non-multicast traffic, in conformance with [RFC6040].

3.1. No multicast support

In this scenario, there is no support whatsoever for multicast traffic when using the overlay. This method can only work if the following conditions are met:

1. All of the application traffic in the network is unicast traffic, and the only multicast/broadcast traffic is from ARP/ND protocols.

2. An NVA is used by the NVEs to determine the mapping of a given Tenant System's (TS's) MAC/IP address to its NVE. In other words, there is no data plane learning. Address resolution requests via ARP/ND that are issued by the TSs must be resolved by the NVE that they are attached to.

With this approach, it is not possible to support application-specific multicast. However, certain multicast/broadcast applications such as DHCP can be supported by use of a helper function in the NVE.

The main drawback of this approach, even for unicast traffic, is that it is not possible to initiate communication with a TS for which a mapping to an NVE does not already exist in the NVA. This is a problem in the case where the NVE is implemented in a physical switch and the TS is a physical end station that has not registered with the NVA.

3.2. Replication at the source NVE

With this method, the overlay attempts to provide a multicast service without requiring any specific support from the underlay, other than that of a unicast service. A multicast or broadcast transmission is achieved by replicating the packet at the source NVE and making one copy for each destination NVE to which the multicast packet must be sent.

For this mechanism to work, the source NVE must know, a priori, the IP addresses of all destination NVEs that need to receive the packet. For the purpose of ARP/ND, this would involve knowing the IP addresses of all the NVEs that have TSs in the virtual network (VN) of the TS that generated the request.
For the support of application-specific multicast traffic, a method similar to that of receiver-sites registration for a particular multicast group, described in [LISP-Signal-Free], can be used. The registrations from different receiver sites can be merged at the NVA, which can construct a multicast replication list inclusive of all NVEs to which receivers for a particular multicast group are attached. The replication list for each specific multicast group is maintained by the NVA. Note: using [LISP-Signal-Free] does not necessarily mean that the head-end (i.e., the NVE) must do the replication. If the mapping database (i.e., the NVA) indicates that packets are encapsulated to multicast RLOCs, then no replication happens at the NVE.

The receiver-sites registration is achieved by egress NVEs performing IGMP/MLD snooping to maintain state for which attached TSs have subscribed to a given IP multicast group. When the members of a multicast group are outside the NVO3 domain, it is necessary for NVO3 gateways to keep track of the remote members of each multicast group. The NVEs and NVO3 gateways then communicate the multicast groups that are of interest to the NVA. If the membership is not communicated to the NVA, and if it is necessary to prevent hosts attached to an NVE that have not subscribed to a multicast group from receiving the multicast traffic, the NVE would need to maintain multicast group membership information.

In the absence of IGMP/MLD snooping, the traffic would be delivered to all TSs that are part of the VN.

In multi-homing environments, i.e., in those where a TS is attached to more than one NVE, the NVA would be expected to provide information to all of the NVEs under its control about all of the NVEs to which such a TS is attached. The ingress NVE can choose any one of the egress NVEs for the data frames destined towards the TS.

This method requires sending multiple copies of the same packet to all NVEs that participate in the VN. If, for example, a tenant subnet is spread across 50 NVEs, the packet would have to be replicated 50 times at the source NVE. Obviously, this approach creates more traffic in the network, which can cause congestion when the network load is high. It also creates an issue with the forwarding performance of the NVE.

Note that this method is similar to what was used in Virtual Private LAN Service (VPLS) [RFC4762] prior to the support of Multi-Protocol Label Switching (MPLS) multicast [RFC7117]. While there are some similarities between MPLS Virtual Private Network (VPN) and NVO3, there are some key differences:

- The Customer Edge (CE) to Provider Edge (PE) attachment in VPNs is somewhat static, whereas in a DC that allows VMs to migrate anywhere, the TS attachment to the NVE is much more dynamic.

- The number of PEs to which a single VPN customer is attached in an MPLS VPN environment is normally far less than the number of NVEs to which a VN's VMs are attached in a DC.

When a VPN customer has multiple multicast groups, "Multicast VPN" [RFC6513] combines all those multicast groups within each VPN client into one single multicast group in the MPLS (or VPN) core. The result is that messages from any of the multicast groups belonging to one VPN customer will reach all the PE nodes of the client.
In other words, any messages belonging to any multicast groups under customer X will reach all PEs of customer X. When customer X is attached to only a handful of PEs, the use of this approach does not result in excessive wastage of bandwidth in the provider's network.

In a DC environment, a typical server/hypervisor-based virtual switch may support only tens of VMs (as of this writing). A subnet with N VMs may be, in the worst case, spread across N vSwitches. Using the "MPLS VPN multicast" approach in such a scenario would require the creation of a multicast group in the core for this VN to reach all N NVEs. If only a small percentage of this client's VMs participate in application-specific multicast, a great number of NVEs will receive multicast traffic that is not forwarded to any of their attached VMs, resulting in considerable wastage of bandwidth.

Therefore, the Multicast VPN solution may not scale in a DC environment with dynamic attachment of virtual networks to NVEs and a greater number of NVEs for each virtual network.

3.3. Replication at a multicast service node

With this method, all multicast packets would be sent using a unicast tunnel encapsulation from the ingress NVE to a multicast service node (MSN). The MSN, in turn, would create multiple copies of the packet and would deliver a copy, using a unicast tunnel encapsulation, to each of the NVEs that are part of the multicast group for which the packet is intended.

This mechanism is similar to that used by the Asynchronous Transfer Mode (ATM) Forum's LAN Emulation (LANE) specification [LANE]. The MSN is similar to the Rendezvous Point (RP) in PIM-SM, but differs in that the user data traffic is carried by the NVO3 tunnels.

The following are the possible ways for the MSN to get the membership information for each multicast group:

- The MSN can obtain this membership information from the IGMP/MLD report messages sent by TSs in response to IGMP/MLD query messages from the MSN. The IGMP/MLD query messages are sent from the MSN to the NVEs, which then forward the query messages to the TSs attached to them. An IGMP/MLD query message sent by the MSN to an NVE is encapsulated with the MSN address in the outer source address field and the address of the NVE in the outer destination address field. The encapsulated IGMP/MLD query message also carries, in the outer header, the VNID of the virtual network (VN) to which the TSs belong, and it carries a multicast address in the inner destination address field. Upon receiving the encapsulated IGMP/MLD query message, the NVE establishes a mapping "MSN address" <-> "multicast address", decapsulates the received encapsulated IGMP/MLD message, and multicasts the decapsulated query message to the TSs that belong to the VN under the NVE. An IGMP/MLD report message sent by a TS includes the multicast address and the address of the TS. With the proper "MSN address" <-> "multicast address" mapping, the NVEs can encapsulate all multicast data frames destined to the "multicast address" with the address of the MSN in the outer destination address field.

- The MSN can obtain the membership information from the NVEs that have the capability to establish multicast groups by snooping native IGMP/MLD messages (note:
the communication must be specific to the multicast addresses), or by having the NVA obtain the information from the NVEs and, in turn, having the MSN communicate with the NVA. This approach requires an additional protocol between the MSN and the NVEs.

Unlike the method described in Section 3.2, there is no performance impact at the ingress NVE, nor are there any issues with multiple copies of the same packet from the source NVE to the multicast service node. However, there remain issues with multiple copies of the same packet on links that are common to the paths from the MSN to each of the egress NVEs. Additional issues that are introduced with this method include the availability of the MSN, methods to scale the services offered by the MSN, and the sub-optimality of the delivery paths.

Finally, the IP address of the source NVE must be preserved in packet copies created at the multicast service node if data plane learning is in use. This could create problems if IP source address reverse path forwarding (RPF) checks are in use.

3.4. IP multicast in the underlay

In this method, the underlay supports IP multicast, and the ingress NVE encapsulates the packet with the appropriate IP multicast address in the tunnel encapsulation header for delivery to the desired set of NVEs. The protocol in the underlay could be any variant of Protocol Independent Multicast (PIM), or protocol-dependent multicast such as [ISIS-Multicast].

If an NVE connects to its attached TSs via a Layer 2 network, there are multiple ways for NVEs to support application-specific multicast:

- The NVE supports only the basic IGMP/MLD snooping function, letting the TSs' routers handle application-specific multicast. This scheme does not utilize the underlay IP multicast protocols.

- The NVE can act as a pseudo multicast router for the directly attached VMs and support proper mapping of IGMP/MLD messages to the messages needed by the underlay IP multicast protocols.

With this method, there are none of the issues with the method described in Section 3.2.

With PIM Sparse Mode (PIM-SM), the number of flows required would be (n*g), where n is the number of source NVEs that source packets for the group and g is the number of groups. Bidirectional PIM (BIDIR-PIM) would offer better scalability, with the number of flows required being g. Unfortunately, many vendors still do not fully support BIDIR-PIM or have limitations on its implementation. [RFC6831] has a good description of using SSM as an alternative to BIDIR-PIM, provided that the VTEP/NVE devices have a way to learn each other's IP addresses so that they can join all of the SSM Shortest Path Trees (SPTs) to create/maintain an underlay SSM IP multicast tunnel solution.

In the absence of any additional mechanism (e.g., using an NVA for address resolution), for optimal delivery there would have to be a separate group for each tenant, plus a separate group for each multicast address (used for multicast applications) within a tenant.

Additional considerations are that only the lower 23 bits of the IP address (regardless of whether IPv4 or IPv6 is in use) are mapped to the outer MAC address, and if there is equipment that prunes multicasts at Layer 2, there will be some aliasing.
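To make the aliasing concern concrete, the short Python sketch below computes the derived Layer 2 MAC address for an IPv4 multicast group address using the standard mapping (the 01:00:5e prefix plus the low-order 23 bits of the group address). It is illustrative only and is not part of this framework; the specific example addresses are arbitrary.

   # Illustrative only: the standard IPv4 multicast-address-to-MAC
   # mapping (01:00:5e + low-order 23 bits of the group address).
   # Because only 23 bits are carried, 32 group addresses share one
   # derived MAC address, which is the source of the aliasing noted
   # above when Layer 2 equipment prunes on the MAC address.
   import ipaddress

   def ipv4_group_to_mac(group):
       addr = int(ipaddress.IPv4Address(group))
       low23 = addr & 0x7FFFFF
       return "01:00:5e:%02x:%02x:%02x" % (
           (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

   # Example of aliasing: two distinct groups, one derived MAC address.
   print(ipv4_group_to_mac("233.252.0.1"))   # 01:00:5e:7c:00:01
   print(ipv4_group_to_mac("233.124.0.1"))   # 01:00:5e:7c:00:01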
Finally, a mechanism to efficiently provision such addresses for each group would be required.

There are additional optimizations that are possible, but they come with their own restrictions. For example, a set of tenants may be restricted to some subset of NVEs, and they could all share the same outer IP multicast group address. This, however, introduces a problem of sub-optimal delivery: even if a particular tenant within the group of tenants does not have a presence on one of the NVEs that another tenant does, the multicast packets would still be delivered to that NVE. It also introduces an additional network management burden to optimize which tenants should be part of the same tenant group (based on the NVEs they share), which somewhat dilutes the value proposition of NVO3, namely to completely decouple the overlay and physical network design, allowing complete freedom of placement of VMs anywhere within the data center.

Multicast schemes such as Bit Indexed Explicit Replication (BIER) [BIER-ARCH] may be able to provide optimizations by allowing the underlay network to provide optimum multicast delivery without requiring routers in the core of the network to maintain per-multicast-group state.

3.5. Other schemes

There are still other mechanisms that may be used that attempt to combine some of the advantages of the above methods by offering multiple replication points, each with a limited degree of replication [EDGE-REP]. Such schemes offer a trade-off between the amount of replication at an intermediate node (e.g., a router) versus performing all of the replication at the source NVE or all of the replication at a multicast service node.

4. Simultaneous use of more than one mechanism

While the mechanisms discussed in the previous section have been described individually, it is possible for implementations to rely on more than one of them. For example, the method of Section 3.1 could be used for minimizing ARP/ND, while at the same time, multicast applications may be supported by one, or a combination of, the other methods. For small multicast groups, the methods of source NVE replication or the use of a multicast service node may be attractive, while for larger multicast groups, the use of multicast in the underlay may be preferable.

5. Other issues

5.1. Multicast-agnostic NVEs

Some hypervisor-based NVEs do not process or recognize IGMP/MLD frames; i.e., those NVEs simply encapsulate the IGMP/MLD messages in the same way as they do for regular data frames.

By default, a TS's router periodically sends IGMP/MLD query messages to all the hosts in the subnet to trigger the hosts that are interested in the multicast stream to send back IGMP/MLD reports. In order for the MSN to get the updated multicast group information, the MSN can also send IGMP/MLD query messages, each comprising a client-specific multicast address and encapsulated in an overlay header, to all the NVEs to which the TSs in the VN are attached.

However, the MSN may not always be aware of the client-specific multicast addresses. In order to perform multicast filtering, the MSN has to snoop the IGMP/MLD messages between the TSs and their corresponding routers to maintain the multicast membership.
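As a purely illustrative sketch (not defined by this framework), the following Python fragment shows the kind of per-(VN, group) membership state an MSN might build from snooped IGMP/MLD reports and how it might use that state when replicating data frames, as described in Section 3.3. All names, data structures, and addresses are hypothetical.

   # Illustrative only: a hypothetical MSN membership table built from
   # snooped IGMP/MLD reports and used to replicate data frames to the
   # interested egress NVEs.  Simplified: a real MSN would track per-TS
   # state behind each NVE before pruning an entry on a leave.
   from collections import defaultdict

   # (vn_id, group) -> set of egress NVE addresses with interested TSs
   membership = defaultdict(set)

   def on_snooped_report(vn_id, group, reporting_nve):
       # A TS behind reporting_nve joined (vn_id, group).
       membership[(vn_id, group)].add(reporting_nve)

   def on_snooped_leave(vn_id, group, reporting_nve):
       # Simplification: drop the NVE on any leave.
       membership[(vn_id, group)].discard(reporting_nve)

   def replicate_at_msn(vn_id, group, frame, ingress_nve):
       for egress_nve in membership[(vn_id, group)]:
           if egress_nve != ingress_nve:
               # Stand-in for unicast tunnel encapsulation to egress_nve.
               print("copy of", group, "in VN", vn_id, "->", egress_nve)

   # Example with hypothetical NVE addresses from the documentation range:
   on_snooped_report(100, "233.252.0.10", "192.0.2.11")
   on_snooped_report(100, "233.252.0.10", "192.0.2.12")
   replicate_at_msn(100, "233.252.0.10", b"...", ingress_nve="192.0.2.11")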
In order for the MSN to snoop the IGMP/MLD messages between the TSs and their routers, the NVA needs to configure the NVEs to send copies of the IGMP/MLD messages to the MSN in addition to the default behavior of sending them to the TSs' routers; e.g., the NVA has to inform the NVEs to encapsulate data frames with a destination address (DA) of 224.0.0.2 (the all-routers multicast address) to both the TSs' routers and the MSN.

This process is similar to the "source replication" described in Section 3.2, except that the NVEs only replicate the messages to the TSs' routers and the MSN.

5.2. Multicast membership management for DC with VMs

For data centers with virtualized servers, VMs can be added, deleted, or moved very easily. When VMs are added, deleted, or moved, the NVEs to which the VMs are attached are changed.

When a VM is deleted from an NVE or a new VM is added to an NVE, the VM management system should notify the MSN to send IGMP/MLD query messages to the relevant NVEs (as described in Section 3.3), so that the multicast membership can be updated promptly.

Otherwise, if the attachment of VMs to NVEs changes within the duration of the configured default time interval that the TSs' routers use for IGMP/MLD queries, multicast data may not reach the VM(s) that moved.

6. Summary

This document has identified various mechanisms for supporting application-specific multicast in networks that use NVO3. It highlights the basics of each mechanism and some of the issues with them. As solutions are developed, the protocols would need to consider the use of these mechanisms, and co-existence may be a consideration. It also highlights some of the requirements for supporting multicast applications in an NVO3 network.

7. Security Considerations

This document does not introduce any new security considerations beyond those described in the NVO3 Architecture [RFC8014].

8. IANA Considerations

This document requires no IANA actions. RFC Editor: Please remove this section before publication.

9. References

9.1. Normative References

[RFC3376] Cain, B., et al., "Internet Group Management Protocol, Version 3", RFC 3376, October 2002.

[RFC6513] Rosen, E., et al., "Multicast in MPLS/BGP IP VPNs", RFC 6513, February 2012.

[RFC7364] Narten, T., et al., "Problem Statement: Overlays for Network Virtualization", RFC 7364, October 2014.

[RFC7365] Lasserre, M., et al., "Framework for Data Center (DC) Network Virtualization", RFC 7365, October 2014.

[RFC8014] Narten, T., et al., "An Architecture for Data-Center Network Virtualization over Layer 3 (NVO3)", RFC 8014, December 2016.

9.2. Informative References

[RFC2710] Deering, S., et al., "Multicast Listener Discovery (MLD) for IPv6", RFC 2710, October 1999.

[RFC3569] Bhattacharyya, S., Ed., "An Overview of Source-Specific Multicast (SSM)", RFC 3569, July 2003.

[RFC3819] Karn, P., Ed., et al., "Advice for Internet Subnetwork Designers", RFC 3819, July 2004.

[RFC4762] Lasserre, M. and V. Kompella, Eds., "Virtual Private LAN Service (VPLS) Using Label Distribution Protocol (LDP) Signaling", RFC 4762, January 2007.

[RFC6040] Briscoe, B., "Tunnelling of Explicit Congestion Notification", RFC 6040, November 2010.

[RFC6831] Farinacci, D., et al., "The Locator/ID Separation Protocol (LISP) for Multicast Environments", RFC 6831, January 2013.

[RFC7117] Aggarwal, R., et al., "Multicast in Virtual Private LAN Service (VPLS)", RFC 7117, February 2014.

[RFC7348] Mahalingam, M., et al.,
"Virtual eXtensible Local Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks", RFC 7348, August 2014.

[RFC7637] Garg, P. and Y. Wang, Eds., "NVGRE: Network Virtualization Using Generic Routing Encapsulation", RFC 7637, September 2015.

[BIER-ARCH] Wijnands, IJ., Ed., et al., "Multicast Using Bit Index Explicit Replication", Work in Progress, January 2016.

[DC-MC] McBride, M. and H. Liu, "Multicast in the Data Center Overview", Work in Progress, July 2012.

[EDGE-REP] Marques, P., et al., "Edge Multicast Replication for BGP IP VPNs", Work in Progress, June 2012.

[Geneve] Gross, J. and I. Ganga, Eds., "Geneve: Generic Network Virtualization Encapsulation", Work in Progress, January 2016.

[GUE] Herbert, T., et al., "Generic UDP Encapsulation", Work in Progress, December 2015.

[ISIS-Multicast] Yong, L., et al., "ISIS Protocol Extension for Building Distribution Trees", Work in Progress, October 2014.

[LANE] "LAN Emulation over ATM", The ATM Forum, af-lane-0021.000, January 1995.

[LISP-Signal-Free] Moreno, V. and D. Farinacci, "Signal-Free LISP Multicast", Work in Progress, April 2016.

[VXLAN-GPE] Kreeger, L. and U. Elzur, Eds., "Generic Protocol Extension for VXLAN", Work in Progress, April 2016.

10. Acknowledgments

Many thanks are due to Dino Farinacci, Erik Nordmark, Lucy Yong, Nicolas Bouliane, Saumya Dikshit, Joe Touch, Olufemi Komolafe, and Matthew Bocci for their valuable comments and suggestions.

This document was prepared using 2-Word-v2.0.template.dot.

Authors' Addresses

Anoop Ghanwani
Dell
Email: anoop@alumni.duke.edu

Linda Dunbar
Huawei Technologies
5340 Legacy Drive, Suite 1750
Plano, TX 75024, USA
Phone: (469) 277 5840
Email: ldunbar@huawei.com

Mike McBride
Huawei Technologies
Email: mmcbride7@gmail.com

Vinay Bannai
Google
Email: vbannai@gmail.com

Ram Krishnan
Dell
Email: ramkri123@gmail.com