NVO3 working group                                          A. Ghanwani
Internet Draft                                                     Dell
Intended status: Informational                                L. Dunbar
Expires: November 8, 2016                                    M. McBride
                                                                 Huawei
                                                              V. Bannai
                                                                 Google
                                                            R. Krishnan
                                                                   Dell

                                                            May 9, 2016

     A Framework for Multicast in Network Virtualization Overlays
                  draft-ietf-nvo3-mcast-framework-05

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.  This document may not be modified,
   and derivative works of it may not be created, except to publish it
   as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on November 8, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Abstract

   This document discusses a framework for supporting multicast traffic
   in a network that uses Network Virtualization Overlays (NVO3).  Both
   infrastructure multicast and application-specific multicast are
   discussed.  It describes the various mechanisms that can be used for
   delivering such traffic as well as the data plane and control plane
   considerations for each of the mechanisms.

Table of Contents

   1. Introduction
      1.1. Infrastructure multicast
      1.2. Application-specific multicast
      1.3. Terminology clarification
   2. Acronyms
   3. Multicast mechanisms in networks that use NVO3
      3.1. No multicast support
      3.2. Replication at the source NVE
      3.3. Replication at a multicast service node
      3.4. IP multicast in the underlay
      3.5. Other schemes
   4. Simultaneous use of more than one mechanism
   5. Other issues
      5.1. Multicast-agnostic NVEs
      5.2. Multicast membership management for DC with VMs
   6. Summary
   7. Security Considerations
   8. IANA Considerations
   9. References
      9.1. Normative References
      9.2. Informative References
   10. Acknowledgments

1. Introduction

   Network virtualization using Overlays over Layer 3 (NVO3) is a
   technology that is used to address issues that arise in building
   large, multitenant data centers that make extensive use of server
   virtualization [RFC7364].

   This document provides a framework for supporting multicast traffic
   in a network that uses Network Virtualization using Overlays over
   Layer 3 (NVO3).  Both infrastructure multicast and application-
   specific multicast are considered.  It describes the various
   mechanisms and considerations that can be used for delivering such
   traffic in networks that use NVO3.

   The reader is assumed to be familiar with the terminology as defined
   in the NVO3 Framework document [RFC7365] and the NVO3 Architecture
   document [NVO3-ARCH].

1.1. Infrastructure multicast

   Infrastructure multicast includes protocols such as Address
   Resolution Protocol (ARP), Neighbor Discovery (ND), Dynamic Host
   Configuration Protocol (DHCP), multicast Domain Name System (mDNS),
   etc.  It is possible to provide solutions for these that do not
   involve multicast in the underlay network.  In the case of ARP/ND, a
   network virtualization authority (NVA) can be used for distributing
   the mappings of IP address to MAC address to all network
   virtualization edges (NVEs).  The NVEs can then trap ARP Request/ND
   Neighbor Solicitation messages from the TSs that are attached to
   them and respond to them, thereby eliminating the need for
   broadcast/multicast of such messages.  In the case of DHCP, the NVE
   can be configured to forward these messages using a helper function.
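
   The following non-normative sketch (Python) illustrates the ARP/ND
   optimization described above: an NVE answering ARP Requests locally
   from a table distributed by the NVA instead of flooding them.  The
   table contents and function names are assumptions made purely for
   illustration and are not defined by any NVO3 specification.

      # Illustrative only: an NVE answering ARP Requests from attached
      # TSs using a mapping table distributed by the NVA, instead of
      # relying on broadcast in the underlay.  Entries are hypothetical.
      nva_arp_table = {
          "10.1.1.10": "52:54:00:aa:bb:01",
          "10.1.1.20": "52:54:00:aa:bb:02",
      }

      def handle_arp_request(target_ip):
          """Return the MAC address to use in a locally generated ARP
          Reply, or None if the NVA has not distributed a mapping."""
          return nva_arp_table.get(target_ip)

      if handle_arp_request("10.1.1.10"):
          print("NVE answers the ARP Request locally; nothing is flooded")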
   Of course it is possible to support all of these infrastructure
   multicast protocols natively if the underlay provides multicast
   transport.  However, even in the presence of multicast transport, it
   may be beneficial to use the optimizations mentioned above to reduce
   the amount of such traffic in the network.

1.2. Application-specific multicast

   Application-specific multicast traffic, which may be either Source-
   Specific Multicast (SSM) or Any-Source Multicast (ASM) [RFC3569],
   has the following characteristics:

   1. Receiver hosts are expected to subscribe to multicast content
      using protocols such as IGMP [RFC3376] (IPv4) or MLD (IPv6).
      Multicast sources and listeners participate in these protocols
      using addresses that are in the Tenant System address domain.

   2. The list of multicast listeners for each multicast group is not
      known in advance.  Therefore, it may not be possible for an NVA
      to get the list of participants for each multicast group ahead
      of time.

1.3. Terminology clarification

   In this document, the terms host, tenant system (TS) and virtual
   machine (VM) are used interchangeably to represent an end station
   that originates or consumes data packets.

2. Acronyms

   ASM:    Any-Source Multicast

   LISP:   Locator/ID Separation Protocol

   MSN:    Multicast Service Node

   NVA:    Network Virtualization Authority

   NVE:    Network Virtualization Edge

   NVGRE:  Network Virtualization using GRE

   SSM:    Source-Specific Multicast

   TS:     Tenant System

   VM:     Virtual Machine

   VN:     Virtual Network

   VXLAN:  Virtual eXtensible LAN

3. Multicast mechanisms in networks that use NVO3

   In NVO3 environments, traffic between NVEs is transported using an
   encapsulation such as Virtual eXtensible Local Area Network (VXLAN)
   [RFC7348, VXLAN-GPE], Network Virtualization Using Generic Routing
   Encapsulation (NVGRE) [RFC7637], Geneve [Geneve], Generic UDP
   Encapsulation (GUE) [GUE], etc.

   Besides the need to support ARP and ND, there are several
   applications that require the support of multicast and/or broadcast
   in data centers [DC-MC].  With NVO3, there are many possible ways
   that multicast may be handled in such networks.  We discuss some of
   the attributes of the following four methods:

   1. No multicast support.

   2. Replication at the source NVE.

   3. Replication at a multicast service node.

   4. IP multicast in the underlay.

   These methods are briefly mentioned in the NVO3 Framework [RFC7365]
   and NVO3 Architecture [NVO3-ARCH] documents.  This document provides
   more details about the basic mechanisms underlying each of these
   methods and discusses the issues and tradeoffs of each.

   We note that other methods are also possible, such as [EDGE-REP],
   but we focus on the above four because they are the most common.

3.1. No multicast support

   In this scenario, there is no support whatsoever for multicast
   traffic when using the overlay.  This method can only work if the
   following conditions are met:

   1. All of the application traffic in the network is unicast traffic,
      and the only multicast/broadcast traffic is from ARP/ND
      protocols.

   2. An NVA is used by the NVEs to determine the mapping of a given
      Tenant System's (TS's) MAC/IP address to its NVE.  In other
      words, there is no data plane learning.  Address resolution
      requests via ARP/ND that are issued by the TSs must be resolved
      by the NVE that they are attached to.

   With this approach, it is not possible to support application-
   specific multicast.  However, certain multicast/broadcast
   applications such as DHCP can be supported by use of a helper
   function in the NVE.

   The main drawback of this approach, even for unicast traffic, is
   that it is not possible to initiate communication with a TS for
   which a mapping to an NVE does not already exist with the NVA.  This
   is a problem in the case where the NVE is implemented in a physical
   switch and the TS is a physical end station that has not registered
   with the NVA.
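
   As an illustration of the DHCP helper function mentioned earlier in
   this section, the following non-normative Python sketch shows an NVE
   relaying a broadcast DHCP message from an attached TS as unicast to
   a configured DHCP server.  The server address and function name are
   assumptions for the example only.

      import socket

      # Hypothetical configuration: the DHCP server that the NVE's
      # helper function relays to (192.0.2.53 is a documentation
      # address used here purely as an example).
      DHCP_SERVER = ("192.0.2.53", 67)

      def relay_dhcp(dhcp_message):
          """Forward a DHCP message received as a broadcast from an
          attached TS to the configured DHCP server as unicast, so no
          broadcast is needed in the underlay.  A real relay agent
          would also set giaddr and/or insert option 82; that detail
          is omitted here."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          try:
              sock.sendto(dhcp_message, DHCP_SERVER)
          finally:
              sock.close()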
3.2. Replication at the source NVE

   With this method, the overlay attempts to provide a multicast
   service without requiring any specific support from the underlay,
   other than that of a unicast service.  A multicast or broadcast
   transmission is achieved by replicating the packet at the source
   NVE and making one copy for each destination NVE that the multicast
   packet must be sent to.

   For this mechanism to work, the source NVE must know, a priori, the
   IP addresses of all destination NVEs that need to receive the
   packet.  For the purpose of ARP/ND, this would involve knowing the
   IP addresses of all the NVEs that have TSs in the virtual network
   (VN) of the TS that generated the request.  For the support of
   application-specific multicast traffic, a method similar to that of
   receiver-site registration for a particular multicast group,
   described in [LISP-Signal-Free], can be used.  The registrations
   from different receiver sites can be merged at the NVA, which can
   construct a multicast replication list inclusive of all NVEs to
   which receivers for a particular multicast group are attached.  The
   replication list for each specific multicast group is maintained by
   the NVA.

   The receiver-site registration is achieved by egress NVEs performing
   IGMP/MLD snooping to maintain state for which attached TSs have
   subscribed to a given IP multicast group.  When the members of a
   multicast group are outside the NVO3 domain, it is necessary for
   NVO3 gateways to keep track of the remote members of each multicast
   group.  The NVEs and NVO3 gateways then communicate the multicast
   groups that are of interest to the NVA.  If the membership is not
   communicated to the NVA, and if it is necessary to prevent hosts
   attached to an NVE that have not subscribed to a multicast group
   from receiving the multicast traffic, the NVE would need to maintain
   multicast group membership information.

   In the absence of IGMP/MLD snooping, the traffic would be delivered
   to all TSs that are part of the VN.

   In multi-homing environments, i.e., in those where a TS is attached
   to more than one NVE, the NVA would be expected to provide
   information to all of the NVEs under its control about all of the
   NVEs to which such a TS is attached.  The ingress NVE can choose any
   one of these egress NVEs for the data frames destined towards the
   TS.

   This method requires the sending of multiple copies of the same
   packet, one to each NVE that participates in the VN.  If, for
   example, a tenant subnet is spread across 50 NVEs, the packet would
   have to be replicated 50 times at the source NVE.  This also creates
   an issue with the forwarding performance of the NVE.
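
   A minimal, non-normative sketch of this ingress replication follows.
   The replication list is assumed to have been computed by the NVA as
   described above; the encapsulation function is a stand-in for
   whatever tunnel encapsulation (VXLAN, NVGRE, Geneve, etc.) is in
   use, and all names and addresses are illustrative assumptions.

      # Replication list maintained by the NVA for one multicast group:
      # group address -> IP addresses of the egress NVEs with receivers.
      replication_list = {
          "233.252.0.1": ["192.0.2.11", "192.0.2.12", "192.0.2.13"],
      }

      def encapsulate(packet, egress_nve):
          """Stand-in for the unicast tunnel encapsulation toward
          egress_nve."""
          return b"outer-header(" + egress_nve.encode() + b")" + packet

      def replicate_at_source_nve(packet, group):
          """Make one unicast-encapsulated copy per egress NVE on the
          replication list for this group (Section 3.2)."""
          return [encapsulate(packet, nve)
                  for nve in replication_list.get(group, [])]

      copies = replicate_at_source_nve(b"tenant-mcast-frame", "233.252.0.1")
      print(len(copies), "copies created at the ingress NVE")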
   Note that this method is similar to what was used in Virtual Private
   LAN Service (VPLS) [RFC4762] prior to support of Multi-Protocol
   Label Switching (MPLS) multicast [RFC7117].  While there are some
   similarities between MPLS Virtual Private Network (VPN) and NVO3,
   there are some key differences:

   - The Customer Edge (CE) to Provider Edge (PE) attachment in VPNs is
     somewhat static, whereas in a DC that allows VMs to migrate
     anywhere, the TS attachment to an NVE is much more dynamic.

   - The number of PEs to which a single VPN customer is attached in an
     MPLS VPN environment is normally far less than the number of NVEs
     to which a VN's VMs are attached in a DC.

   When a VPN customer has multiple multicast groups, "Multicast VPN"
   [RFC6513] combines all of those multicast groups within each VPN
   client into one single multicast group in the MPLS (or VPN) core.
   The result is that messages from any of the multicast groups
   belonging to one VPN customer will reach all of the PE nodes of the
   client.  In other words, any messages belonging to any multicast
   group under customer X will reach all PEs of customer X.  When
   customer X is attached to only a handful of PEs, the use of this
   approach does not result in excessive wastage of bandwidth in the
   provider's network.

   In a DC environment, a typical server/hypervisor-based virtual
   switch may only support tens of VMs (as of this writing).  A subnet
   with N VMs may, in the worst case, be spread across N vSwitches.
   Using the "MPLS VPN multicast" approach in such a scenario would
   require the creation of a multicast group in the core for this VN to
   reach all N NVEs.  If only a small percentage of this client's VMs
   participate in application-specific multicast, a large number of
   NVEs will receive multicast traffic that is not forwarded to any of
   their attached VMs, resulting in considerable wastage of bandwidth.

   Therefore, the Multicast VPN solution may not scale in a DC
   environment with dynamic attachment of virtual networks to NVEs and
   a greater number of NVEs for each virtual network.

3.3. Replication at a multicast service node

   With this method, all multicast packets would be sent using a
   unicast tunnel encapsulation from the ingress NVE to a multicast
   service node (MSN).  The MSN, in turn, would create multiple copies
   of the packet and would deliver a copy, using a unicast tunnel
   encapsulation, to each of the NVEs that are part of the multicast
   group for which the packet is intended.

   This mechanism is similar to that used by the Asynchronous Transfer
   Mode (ATM) Forum's LAN Emulation (LANE) specification [LANE].

   The following are the possible ways for the MSN to get the
   membership information for each multicast group (a non-normative
   sketch combining these elements follows this list):

   - The MSN can obtain this information by snooping the IGMP/MLD
     messages from the TSs and/or by sending query messages to the TSs.
     In order for the MSN to snoop the IGMP/MLD messages between the
     TSs and their corresponding routers, the NVEs to which the TSs are
     attached have to encapsulate these messages with a special outer
     header, e.g., with the outer destination address being the
     multicast service node.  See Section 5.1 for details.

   - The MSN can obtain the membership information from the NVEs that
     snoop the IGMP/MLD messages.  This can be done by having the MSN
     communicate with the NVEs, or by having the NVA obtain the
     information from the NVEs and in turn having the MSN communicate
     with the NVA.
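
   The sketch below (Python, non-normative) combines the elements
   above: a membership table at the MSN, populated by either of the
   methods just listed, and per-group replication of unicast-
   encapsulated copies toward the member NVEs.  All names, addresses,
   and the outer-header format are illustrative assumptions.

      # State at the MSN: multicast group -> set of egress NVEs that
      # have receivers for that group.
      msn_membership = {
          "233.252.0.1": {"192.0.2.11", "192.0.2.12"},
      }

      def msn_replicate(packet, group, ingress_nve):
          """Replicate a packet received over a unicast tunnel from the
          ingress NVE to every NVE with receivers for the group
          (Section 3.3).  The outer header shown is a stand-in for the
          unicast tunnel encapsulation."""
          copies = []
          for egress_nve in msn_membership.get(group, ()):
              outer = "outer(src=%s,dst=%s)" % (ingress_nve, egress_nve)
              copies.append(outer.encode() + packet)
          return copies

      n = len(msn_replicate(b"frame", "233.252.0.1", "192.0.2.10"))
      print(n, "copies sent by the MSN")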
   Unlike the method described in Section 3.2, there is no performance
   impact at the ingress NVE, nor are there any issues with multiple
   copies of the same packet from the source NVE to the multicast
   service node.  However, there remain issues with multiple copies of
   the same packet on links that are common to the paths from the MSN
   to each of the egress NVEs.  Additional issues that are introduced
   with this method include the availability of the MSN, methods to
   scale the services offered by the MSN, and the sub-optimality of the
   delivery paths.

   Finally, the IP address of the source NVE must be preserved in
   packet copies created at the multicast service node if data plane
   learning is in use.  This could create problems if IP source address
   reverse path forwarding (RPF) checks are in use.

3.4. IP multicast in the underlay

   In this method, the underlay supports IP multicast and the ingress
   NVE encapsulates the packet with the appropriate IP multicast
   address in the tunnel encapsulation header for delivery to the
   desired set of NVEs.  The protocol in the underlay could be any
   variant of Protocol Independent Multicast (PIM), or a protocol-
   dependent multicast such as [ISIS-Multicast].

   If an NVE connects to its attached TSs via a Layer 2 network, there
   are multiple ways for NVEs to support application-specific
   multicast:

   - The NVE only supports the basic IGMP/MLD snooping function,
     leaving the TSs' routers to handle the application-specific
     multicast.  This scheme does not utilize the underlay IP multicast
     protocols.

   - The NVE can act as a pseudo multicast router for the directly
     attached VMs and support the proper mapping of IGMP/MLD messages
     to the messages needed by the underlay IP multicast protocols.

   With this method, there are none of the issues associated with the
   method described in Section 3.2.

   With PIM Sparse Mode (PIM-SM), the number of flows required would be
   (n*g), where n is the number of source NVEs that source packets for
   the group, and g is the number of groups.  Bidirectional PIM (BIDIR-
   PIM) would offer better scalability, with the number of flows
   required being g.  For example, with n = 100 source NVEs and g = 10
   groups, PIM-SM would require 1,000 flows in the underlay, whereas
   BIDIR-PIM would require only 10.

   In the absence of any additional mechanism (e.g., using an NVA for
   address resolution), for optimal delivery there would have to be a
   separate group for each tenant, plus a separate group for each
   multicast address (used for multicast applications) within a tenant.

   Additional considerations are that only the lower 23 bits of an IPv4
   multicast address (or the lower 32 bits of an IPv6 multicast
   address) are mapped to the outer MAC address, and if there is
   equipment that prunes multicasts at Layer 2, there will be some
   aliasing.  Finally, a mechanism to efficiently provision such
   addresses for each group would be required.
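
   As a concrete, non-normative illustration of this aliasing, the
   Python sketch below shows the standard IPv4 multicast-to-MAC
   mapping (prefix 01:00:5e plus the lower 23 bits of the group
   address) and two group addresses that collide in the outer MAC
   address.  The specific group addresses are examples only.

      def ipv4_mcast_to_mac(group):
          """Map an IPv4 multicast group address to its Layer 2 MAC
          address.  Only the lower 23 bits of the group address are
          carried in the MAC, so 32 group addresses share each MAC."""
          o = [int(x) for x in group.split(".")]
          return "01:00:5e:%02x:%02x:%02x" % (o[1] & 0x7F, o[2], o[3])

      # Aliasing example: these two groups map to the same MAC address,
      # so Layer 2 pruning cannot tell them apart.
      assert ipv4_mcast_to_mac("233.252.0.1") == ipv4_mcast_to_mac("232.124.0.1")
      print(ipv4_mcast_to_mac("233.252.0.1"))   # -> 01:00:5e:7c:00:01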
   There are additional optimizations that are possible, but they come
   with their own restrictions.  For example, a set of tenants may be
   restricted to some subset of NVEs, and they could all share the same
   outer IP multicast group address.  This, however, introduces a
   problem of sub-optimal delivery: even if a particular tenant within
   the group of tenants does not have a presence on one of the NVEs
   that another tenant does, the former's multicast packets would still
   be delivered to that NVE.  It also introduces an additional network
   management burden to optimize which tenants should be part of the
   same tenant group (based on the NVEs they share), which somewhat
   dilutes the value proposition of NVO3, namely to completely decouple
   the overlay from the physical network design and thereby allow
   complete freedom of placement of VMs anywhere within the data
   center.

   Multicast schemes such as Bit Indexed Explicit Replication (BIER)
   [BIER-ARCH] may be able to provide optimizations by allowing the
   underlay network to provide optimum multicast delivery without
   requiring routers in the core of the network to maintain per-
   multicast-group state.

3.5. Other schemes

   There are still other mechanisms that may be used that attempt to
   combine some of the advantages of the above methods by offering
   multiple replication points, each with a limited degree of
   replication [EDGE-REP].  Such schemes offer a trade-off between the
   amount of replication at an intermediate node (router) versus
   performing all of the replication at the source NVE or all of the
   replication at a multicast service node.

4. Simultaneous use of more than one mechanism

   While the mechanisms discussed in the previous section have been
   discussed individually, it is possible for implementations to rely
   on more than one of these.  For example, the method of Section 3.1
   could be used for minimizing ARP/ND, while at the same time,
   multicast applications may be supported by one, or a combination of,
   the other methods.  For small multicast groups, the methods of
   source NVE replication or the use of a multicast service node may be
   attractive, while for larger multicast groups, the use of multicast
   in the underlay may be preferable.

5. Other issues

5.1. Multicast-agnostic NVEs

   Some hypervisor-based NVEs do not process or recognize IGMP/MLD
   frames; i.e., those NVEs simply encapsulate the IGMP/MLD messages in
   the same way as they do for regular data frames.

   By default, the TSs' router periodically sends IGMP/MLD query
   messages to all the hosts in the subnet to trigger the hosts that
   are interested in a multicast stream to send back IGMP/MLD reports.
   In order to get updated multicast group information, the MSN can
   also send IGMP/MLD query messages carrying a client-specific
   multicast address, encapsulated in an overlay header, to all of the
   NVEs to which the TSs in the VN are attached.

   However, the MSN may not always be aware of the client-specific
   multicast addresses.  In order to perform multicast filtering, the
   MSN has to snoop the IGMP/MLD messages between the TSs and their
   corresponding routers to maintain the multicast membership.  In
   order for the MSN to snoop the IGMP/MLD messages between the TSs and
   their router, the NVA needs to configure the NVE to send copies of
   the IGMP/MLD messages to the MSN in addition to the default behavior
   of sending them to the TSs' routers; e.g., the NVA has to inform the
   NVEs to send copies of data frames with a destination address of
   224.0.0.2 (the All Routers multicast address) to both the TSs'
   routers and the MSN.

   This process is similar to the "Source Replication" described in
   Section 3.2, except that the NVEs only replicate the message to the
   TSs' router and the MSN.
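
   A minimal, non-normative sketch of this NVE behavior follows.  The
   configuration keys and function names are illustrative assumptions;
   the MAC address shown is the Layer 2 mapping of 224.0.0.2, following
   the example given in the text above.

      # Illustrative only: a multicast-agnostic NVE configured by the
      # NVA to send copies of IGMP/MLD-related frames both toward the
      # TSs' router and to the MSN.
      ALL_ROUTERS_MAC = "01:00:5e:00:00:02"   # L2 mapping of 224.0.0.2

      def forward_igmp_frame(frame, dest_mac, nva_config):
          """Return the list of (tunnel destination, frame) pairs to
          transmit.  'nva_config' is assumed to carry the addresses of
          the TSs' router NVE and of the MSN, as provisioned by the NVA
          (Section 5.1)."""
          destinations = [nva_config["router_nve"]]     # default behavior
          if dest_mac == ALL_ROUTERS_MAC:               # IGMP/MLD-related
              destinations.append(nva_config["msn"])    # extra copy: MSN
          return [(d, frame) for d in destinations]

      config = {"router_nve": "192.0.2.20", "msn": "192.0.2.30"}
      print(forward_igmp_frame(b"igmp-report", ALL_ROUTERS_MAC, config))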
5.2. Multicast membership management for DC with VMs

   For data centers with virtualized servers, VMs can be added, deleted
   or moved very easily.  When VMs are added, deleted or moved, the
   NVEs to which the VMs are attached change.

   When a VM is deleted from an NVE or a new VM is added to an NVE, the
   VM management system should notify the MSN to send the IGMP/MLD
   query messages to the relevant NVEs, so that the multicast
   membership can be updated promptly.  Otherwise, if there are changes
   in VM attachment to NVEs, then for the duration of the configured
   default time interval that the TSs' routers use for IGMP/MLD
   queries, multicast data may not reach the VM(s) that moved.
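
   A non-normative sketch of this notification flow follows.  The class
   and function names are hypothetical and stand in for whatever
   interface exists between the VM management system and the MSN.

      # Illustrative only: the VM management system notifying the MSN
      # after a VM add/delete/move, so the MSN can query the affected
      # NVEs immediately rather than waiting for the routers' periodic
      # IGMP/MLD query interval.
      class Msn:
          def send_group_query(self, nve):
              print("MSN sends IGMP/MLD query toward NVE", nve)

      def on_vm_attachment_change(msn, affected_nves):
          """Called by the VM management system when VM attachment to
          NVEs changes (Section 5.2)."""
          for nve in affected_nves:
              msn.send_group_query(nve)   # triggers fresh reports

      on_vm_attachment_change(Msn(), ["192.0.2.11", "192.0.2.12"])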
(Eds.), "Generic Protocol 580 Extension for VXLAN", , work 581 in progress, April 2016. 583 [Geneve] Gross, J. and Ganga, I. (Eds.), "Geneve: Generic Network 584 Virtualization Encapsulation", , work in progress, January 2016. 587 [GUE] Herbert, T. et al., "Generic UDP Encapsulation", , work in progress, December 2015. 590 [BIER-ARCH] 591 Wijnands, IJ. (Ed.) et al., "Multicast using Bit Index 592 Explicit Replication," , 593 January 2016. 595 10. Acknowledgments 597 Thanks are due to Dino Farinacci, Erik Nordmark, Lucy Yong, Nicolas 598 Bouliane, Saumya Dikshit, and Matthew Bocci, for their comments and 599 suggestions. 601 This document was prepared using 2-Word-v2.0.template.dot. 603 Authors' Addresses 605 Anoop Ghanwani 606 Dell 607 Email: anoop@alumni.duke.edu 609 Linda Dunbar 610 Huawei Technologies 611 5340 Legacy Drive, Suite 1750 612 Plano, TX 75024, USA 613 Phone: (469) 277 5840 614 Email: ldunbar@huawei.com 616 Mike McBride 617 Huawei Technologies 618 Email: mmcbride7@gmail.com 620 Vinay Bannai 621 Google 622 Email: vbannai@gmail.com 624 Ram Krishnan 625 Dell 626 Email: ramkri123@gmail.com