Network Working Group                                           L. Yong
Internet Draft                                                 L. Dunbar
Category: Informational                                          Huawei
                                                                  M. Toy

                                                                A. Isaac
                                                        Juniper Networks
                                                               V. Manral
                                                          Ionos Networks

Expires: March 2017                                   September 22, 2016

        Use Cases for Data Center Network Virtualization Overlays

                       draft-ietf-nvo3-use-case-10

Abstract

   This document describes Data Center (DC) Network Virtualization over
   Layer 3 (NVO3) use cases that can be deployed in various data
   centers and serve different applications.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on March 22, 2017.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction...................................................3
      1.1. Terminology...............................................4
   2. Basic Virtual Networks in a Data Center........................5
   3. DC Virtual Network and External Network Interconnection........6
      3.1. DC Virtual Network Access via the Internet................6
      3.2. DC VN and SP WAN VPN Interconnection......................8
   4. DC Applications Using NVO3.....................................9
      4.1. Supporting Multiple Technologies..........................9
      4.2. DC Application with Multiple Virtual Networks.............9
      4.3. Virtual Data Center (vDC)................................10
   5. Summary.......................................................11
   6. Security Considerations.......................................12
   7. IANA Considerations...........................................12
   8. References....................................................12
      8.1. Normative References.....................................12
      8.2. Informative References...................................12
   Contributors.....................................................13
   Acknowledgements.................................................14
   Authors' Addresses...............................................14

1. Introduction

   Server virtualization has changed the Information Technology (IT)
   industry in terms of the efficiency, cost, and speed of providing
   new applications and/or services such as cloud applications.
   However, traditional Data Center (DC) networks have limitations in
   supporting cloud applications and multi-tenant networks [RFC7364].
   The goal of Network Virtualization Overlays in the DC is to decouple
   the communication among tenant systems from the DC physical
   infrastructure networks and to allow one physical network
   infrastructure to provide:

   o  Multi-tenant virtual networks and traffic isolation among the
      virtual networks over the same physical network.

   o  Independent address spaces in individual virtual networks, such
      as MAC addresses, IP addresses, and TCP/UDP ports.

   o  Flexible Virtual Machine (VM) and/or workload placement,
      including the ability to move a VM from one server to another
      without requiring VM address or configuration changes, and the
      ability to perform a "hot move" with no disruption to the live
      applications running on the VMs.

   These characteristics of NVO3 help address the issues that cloud
   applications face in Data Centers [RFC7364].

   An NVO3 virtual network may interconnect with another NVO3 virtual
   network, or with another physical network (i.e., not the physical
   network that the NVO3 network is built over), via a gateway.
   Examples of the latter case are: 1) a DC migrating toward an NVO3
   solution in steps, so that some tenant systems in a VN are on
   virtualized servers while others remain on a traditional LAN;
   2) many DC applications serving Internet users, who are on physical
   networks; and 3) some applications being CPU bound (e.g., Big Data
   analytics) and therefore not run on virtualized resources. Some
   inter-VN policies can be enforced at the gateway.

   This document describes general NVO3 use cases that apply to various
   data centers. The use cases described here represent DC providers'
   interests and vision for their cloud services. The document groups
   the use cases into three categories, from simple to advanced in
   terms of implementation; however, the implementations of these use
   cases are outside the scope of this document.
   These three categories are highlighted below:

   o  Basic NVO3 virtual networks in a DC (Section 2). All Tenant
      Systems (TSs) in the virtual network are located within the same
      DC. The individual virtual networks can be either Layer 2 (L2) or
      Layer 3 (L3). The number of NVO3 virtual networks in a DC is much
      higher than what traditional VLAN-based virtual networks
      [IEEE802.1Q] can support. This case is often referred to as DC
      East-West traffic.

   o  Virtual networks that span multiple Data Centers and/or extend to
      customer premises, i.e., an NVO3 virtual network in which some
      tenant systems in a DC interconnect with another virtual or
      physical network outside the data center. An enterprise customer
      may use a traditional carrier VPN or an IPsec tunnel over the
      Internet to communicate with its systems in the DC. This case is
      described in Section 3.

   o  DC applications or services that require an advanced network
      containing several NVO3 virtual networks interconnected by
      gateways. Three scenarios are described in Section 4:
      1) supporting multiple technologies; 2) constructing several
      virtual networks as one tenant network; and 3) applying NVO3 to a
      virtual Data Center (vDC).

   The document uses the architecture reference model defined in
   [RFC7365] to describe the use cases.

1.1. Terminology

   This document uses the terminology defined in [RFC7365] and
   [RFC4364]. Some additional terms used in the document are listed
   here.

   DMZ: Demilitarized Zone. A computer or small sub-network that sits
   between a trusted internal network, such as a corporate private LAN,
   and an untrusted external network, such as the public Internet.

   DNS: Domain Name System [RFC1035]

   DC Operator: An entity that is responsible for constructing and
   managing cloud service instances throughout their life cycles and
   for managing the DC infrastructure that runs these cloud instances.

   DC Provider: A company that uses its DC infrastructure to offer
   cloud services to its customers.

   NAT: Network Address Translation [RFC3022]

   vGW: virtual Gateway. A gateway component used by an NVO3 virtual
   network to interconnect with another virtual or physical network.

   Note that a virtual network in this document refers to an NVO3
   virtual network in a DC [RFC7365].

2. Basic Virtual Networks in a Data Center

   A virtual network in a DC enables communication among Tenant Systems
   (TSs). A TS can be a physical server/device or a virtual machine
   (VM) on a server, i.e., an end device [RFC7365]. A Network
   Virtualization Edge (NVE) can be co-located with a TS, i.e., on the
   same end device, or reside on a different device, e.g., a top-of-
   rack (ToR) switch. A virtual network has a virtual network
   identifier, which can be globally unique or locally significant at
   the NVEs.

   Tenant Systems attached to the same NVE may belong to the same or
   different virtual networks. An NVE provides tenant traffic
   forwarding/encapsulation and obtains tenant system reachability
   information from a Network Virtualization Authority (NVA)
   [NVO3ARCH]. DC operators can construct multiple separate virtual
   networks and provide each with its own address space.
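   As a purely illustrative aid (not part of any NVO3 specification),
   the short Python sketch below shows how an NVE might use mappings
   learned from an NVA to pick the remote NVE and virtual network
   identifier for a tenant packet. All names, addresses, and structures
   in the sketch are hypothetical; it only illustrates how qualifying
   lookups by virtual network lets each virtual network keep its own
   address space.

      # Illustrative sketch only; names and structures are hypothetical
      # and do not correspond to any NVO3 protocol or API.
      from dataclasses import dataclass
      from typing import Dict, Optional, Tuple

      @dataclass
      class Mapping:
          nve_ip: str   # underlay IP address of the remote NVE
          vni: int      # virtual network identifier for the tunnel header

      class NveTable:
          """Per-NVE view of TS reachability, as learned from an NVA."""
          def __init__(self) -> None:
              # Key: (virtual network name, tenant system address)
              self._table: Dict[Tuple[str, str], Mapping] = {}

          def learn(self, vn: str, ts_addr: str, nve_ip: str, vni: int) -> None:
              # In a real deployment this state would be pushed or pulled
              # from the NVA rather than configured by hand.
              self._table[(vn, ts_addr)] = Mapping(nve_ip, vni)

          def lookup(self, vn: str, ts_addr: str) -> Optional[Mapping]:
              # The same tenant address may exist in different virtual
              # networks; the virtual network qualifies the lookup.
              return self._table.get((vn, ts_addr))

      # Example: two virtual networks reusing the same tenant IP address.
      nve = NveTable()
      nve.learn("blue", "10.0.0.5", nve_ip="192.0.2.10", vni=5001)
      nve.learn("red",  "10.0.0.5", nve_ip="192.0.2.20", vni=5002)
      assert nve.lookup("blue", "10.0.0.5").nve_ip == "192.0.2.10"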
   Network Virtualization Overlay in this context means that a virtual
   network is implemented with an overlay technology; that is, within a
   DC that has an IP infrastructure, tenant traffic is encapsulated at
   its local NVE and carried by a tunnel to another NVE, where the
   packet is decapsulated and sent to the target tenant system. This
   architecture decouples the tenant system address space and
   configuration from those of the infrastructure, which provides great
   flexibility for VM placement and mobility. It also means that the
   transit nodes in the infrastructure are not aware of the existence
   of the virtual networks or of the tenant systems attached to them.
   The tunneled packets are carried as regular IP packets and are sent
   to NVEs. One tunnel may carry traffic belonging to multiple virtual
   networks; a virtual network identifier is used for traffic
   demultiplexing. A tunnel encapsulation protocol is necessary for an
   NVE to encapsulate the packets from Tenant Systems and to encode
   other information in the tunneled packets needed to support an NVO3
   implementation.

   A virtual network implemented by NVO3 may be an L2 or L3 domain. The
   virtual network can carry unicast traffic and/or multicast and
   broadcast/unknown-unicast (for L2 only) traffic from/to tenant
   systems. There are several ways to transport virtual network
   broadcast, unknown-unicast, and multicast (BUM) traffic [NVO3MCAST].

   It is worth mentioning two distinct cases regarding NVE location.
   The first is where TSs and an NVE are co-located on a single end
   host/device, which means that the NVE can be aware of the TS's state
   at any time via an internal API. The second is where TSs and an NVE
   are not co-located, with the NVE residing on a network device; in
   this case, a protocol is necessary to allow the NVE to be aware of
   the TS's state [NVO3HYVR2NVE].

   One virtual network can provide connectivity to many TSs that attach
   to many different NVEs in a DC. Dynamic TS placement and mobility
   result in frequent changes of the binding between a TS and an NVE.
   The TS reachability update mechanisms need to be fast enough that
   the updates do not cause any communication disruption or
   interruption. The capability of supporting many TSs in a virtual
   network, and many more virtual networks in a DC, is critical for an
   NVO3 solution.

   If a virtual network spans multiple DC sites, one design is to allow
   the network to seamlessly span the sites without termination at DC
   gateway routers. In this case, the tunnel between a pair of NVEs can
   be carried within other intermediate tunnels over the Internet or
   other WANs, or the intra-DC and inter-DC tunnels can be stitched
   together to form one tunnel between the pair of NVEs that are in
   different DC sites. Both approaches form one virtual network across
   multiple DC sites.

3. DC Virtual Network and External Network Interconnection

   Many customers (enterprises or individuals) who utilize a DC
   provider's compute and storage resources to run their applications
   need to access their systems hosted in the DC through the Internet
   or Service Providers' Wide Area Networks (WANs). A DC provider can
   construct a virtual network that provides connectivity to all the
   resources designated for a customer and allows the customer to
   access the resources via a virtual gateway (vGW).
   This, in turn, becomes the case of interconnecting a DC virtual
   network and the network at the customer site(s) via the Internet or
   WANs. Two use cases are described here.

3.1. DC Virtual Network Access via the Internet

   A customer can connect to a DC virtual network via the Internet in a
   secure way. Figure 1 illustrates this case. The DC virtual network
   has instances at NVE1 and NVE2, and the two NVEs are connected via
   an IP tunnel in the Data Center. A set of tenant systems is attached
   to NVE1 on a server. NVE2 resides on a DC Gateway device. NVE2
   terminates the tunnel and uses the virtual network identifier (VNID)
   on the packet to pass the packet to the corresponding vGW entity on
   the DC GW (the vGW is the default gateway for the virtual network).
   A customer can access its systems, i.e., TS1 or TSn, in the DC via
   the Internet by using an IPsec tunnel [RFC4301]. The IPsec tunnel is
   configured between the vGW and the customer gateway at the customer
   site. Either a static route or iBGP may be used for prefix
   advertisement. The vGW provides IPsec functionality such as
   authentication and encryption; iBGP traffic is carried within the
   IPsec tunnel. Some vGW features are listed below:

   o  The vGW maintains the TS/NVE mappings and advertises the TS
      prefix to the customer via a static route or iBGP.

   o  Some vGW functions, such as firewall and load balancing, can be
      performed by locally attached network appliance devices.

   o  If the virtual network in the DC uses a different address space
      than the external users, then the vGW needs to provide the NAT
      function.

   o  More than one IPsec tunnel can be configured for redundancy.

   o  The vGW can be implemented on a server or a VM. In this case, IP
      tunnels or IPsec tunnels can be used over the DC infrastructure.

   o  DC operators need to construct a vGW for each customer.

     Server+---------------+
           |   TS1     TSn |
           |    |...|      |
           |  +-+---+-+    |          Customer Site
           |  |  NVE1 |    |            +-----+
           |  +---+---+    |            | CGW |
           +------+--------+            +--+--+
                  |                        *
              L3 Tunnel                    *
                  |                        *
     DC GW +------+---------+     .--.   .--.
           |  +---+---+     |    (    '*   '.--.
           |  |  NVE2 |     |  .-.'  *          )
           |  +---+---+     | (     *  Internet  )
           |  +---+---+.    |  (    *           /
           |  |  vGW  | * * * * * * *   '-'  '-'
           |  +-------+ |   |   IPsec  \../ \.--/'
           |  +--------+    |   Tunnel
           +----------------+
             DC Provider Site

      Figure 1 - DC Virtual Network Access via the Internet

3.2. DC VN and SP WAN VPN Interconnection

   In this case, an enterprise customer wants to use a Service Provider
   (SP) WAN VPN [RFC4364] [RFC7432] to interconnect its sites with a
   virtual network in a DC site. The Service Provider constructs a VPN
   for the enterprise customer. Each enterprise site peers with an SP
   Provider Edge (PE) router. The DC Provider and VPN Service Provider
   can build a DC virtual network (VN) and a VPN independently, and
   then interconnect them via a local link, or via a tunnel, between
   the DC GW and WAN PE devices. The control-plane interconnection
   options between the DC and the WAN are described in [RFC4364]. Using
   Option A with VRF-LITE [VRF-LITE], both Autonomous System Border
   Routers (ASBRs), i.e., the DC GW and the SP PE, maintain a
   routing/forwarding table (VRF). Using Option B, the DC ASBR and the
   SP ASBR do not maintain the VRF table; they only maintain the VN and
   VPN identifier mappings, i.e., label mappings, and swap the label on
   the packets in the forwarding process.
   Both Option A and Option B allow the VN and the VPN to use their own
   identifiers; the two identifiers are mapped to each other at the DC
   GW. With Option C, the VN and VPN use the same identifier, and both
   ASBRs perform tunnel stitching, i.e., tunnel segment mapping. Each
   option has its pros and cons [RFC4364] and has been deployed in SP
   networks depending on the applications in use. BGP is used with
   these options for route distribution between DCs and SP WANs. Note
   that if the DC is the SP's own Data Center, the DC GW and the SP PE
   can be merged into one device that performs the interworking of the
   VN and the VPN within an AS.

   The configurations above allow the enterprise networks to
   communicate with the tenant systems attached to the VN in a DC
   without interfering with the DC provider's underlying physical
   networks and other virtual networks. The enterprise can use its own
   address space in the VN. The DC provider can manage which VM and
   storage elements attach to the VN. The enterprise customer manages
   which applications run on the VMs in the VN without knowing the
   location of the VMs in the DC (see Section 4 for more details).

   Furthermore, in this use case, the DC operator can move the VMs
   assigned to the enterprise from one server to another in the DC
   without the enterprise customer being aware, i.e., with no impact on
   the enterprise's 'live' applications. Such advanced technologies
   bring DC providers great benefits in offering cloud services, but
   they add some requirements for NVO3 [RFC7364] as well.

4. DC Applications Using NVO3

   NVO3 technology provides DC operators with flexibility in designing
   and deploying different applications in an end-to-end virtualization
   overlay environment. Operators no longer need to worry about the
   constraints of the DC physical network configuration when creating
   VMs and configuring a virtual network. A DC provider may use NVO3 in
   various ways, in conjunction with other physical networks and/or
   virtual networks in the DC. This section highlights some such use
   cases.

4.1. Supporting Multiple Technologies

   Servers deployed in a large data center are often installed at
   different times, and they may have different capabilities/features.
   Some servers may be virtualized, while others may not; some may be
   equipped with virtual switches, while others may not. For the
   servers equipped with hypervisor-based virtual switches, some may
   support VXLAN [RFC7348] encapsulation, some may support NVGRE
   encapsulation [RFC7637], and some may not support any encapsulation.
   To construct a tenant network among these servers and the ToR
   switches, operators can construct one traditional VLAN network and
   two virtual networks, where one uses VXLAN encapsulation and the
   other uses NVGRE, and interconnect these three networks via a
   gateway or virtual GW. The GW performs packet
   encapsulation/decapsulation translation between the networks.

   In another case, some of a tenant's software consumes so much CPU
   and memory that it only makes sense to run it on bare-metal servers,
   while other software of the same tenant runs well on VMs. However,
   the provider's DC infrastructure may be configured to use NVO3 to
   connect to the VMs and VLANs [IEEE802.1Q] to connect to the bare-
   metal servers. The tenant network then requires interworking between
   NVO3 and traditional VLAN.
4.2. DC Application with Multiple Virtual Networks

   A DC application may need to be constructed with multiple tiers or
   zones, where each zone has different access permissions and runs
   different applications. For example, a three-tier design has a front
   zone (Web tier) with Web applications, a mid zone (application tier)
   where service applications such as credit payment or ticket booking
   run, and a back zone (database tier) with data. External users are
   only able to communicate with the Web applications in the front
   zone; the back zone can only receive traffic from the application
   zone. In this case, communications between the zones must pass
   through a GW/firewall. Each zone can be implemented as one virtual
   network, and a GW/firewall can be used between two virtual networks,
   i.e., two zones. A tunnel carrying virtual network traffic has to be
   terminated at the GW/firewall, where the overlay traffic is
   processed.

4.3. Virtual Data Center (vDC)

   An Enterprise Data Center today may deploy routers, switches, and
   network appliance devices to construct its internal network, DMZ,
   and external network access; it may have many servers and storage
   systems running various applications. With NVO3 technology, a DC
   Provider can construct a virtual Data Center (vDC) over its physical
   DC infrastructure and offer a virtual Data Center service to
   enterprise customers. A vDC at the DC Provider site provides the
   same capability as the physical DC at a customer site. A customer
   manages its own applications running in its vDC. A DC Provider can
   further offer different network service functions to the customer.
   The network service functions may include firewall, DNS, load
   balancer, gateway, etc.

   Figure 2 below illustrates one such scenario at the service
   abstraction level. In this example, the vDC contains several L2 VNs
   (L2VNx, L2VNy, L2VNz) to group the tenant systems together on a per-
   application basis, and one L3 VN (L3VNa) for internal routing. A
   network firewall and gateway run on a VM or server that connects to
   L3VNa and are used for inbound and outbound traffic processing. A
   load balancer (LB) is used in L2VNx. A VPN is also built between the
   gateway and the enterprise router. An enterprise customer runs
   Web/Mail/Voice applications on VMs within the vDC. The users at the
   Enterprise site access the applications running in the vDC via the
   VPN; Internet users access these applications via the
   gateway/firewall at the provider DC site.

   The enterprise customer decides which applications should be
   accessible only via the intranet and which should be accessible via
   both the intranet and the Internet, and it configures the proper
   security policy and gateway function at the firewall/gateway.
   Furthermore, an enterprise customer may want multiple zones in a vDC
   (see Section 4.2) for security and/or the ability to set different
   QoS levels for the different applications.

   The vDC use case requires an NVO3 solution to provide DC operators
   with an easy and quick way to create a VN and NVEs for any vDC
   design, to allocate TSs and assign them to the corresponding VN, and
   to illustrate the vDC topology and manage/configure individual
   elements in the vDC in a secure way.
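   As a purely illustrative sketch (not part of any NVO3 specification
   or API), the Python fragment below models such a vDC design as a
   simple declarative description that an orchestration system could
   walk to create the VNs and attach the service functions; all names
   and structures in it are hypothetical. Figure 2 then shows the same
   vDC at the service abstraction level.

      # Purely illustrative; the structures and names below are
      # hypothetical and are not defined by any NVO3 document.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class VirtualNetwork:
          name: str
          layer: int                    # 2 or 3
          tenant_systems: List[str] = field(default_factory=list)

      @dataclass
      class VdcSpec:
          """Declarative description of a vDC like the one in Figure 2."""
          l3_vn: VirtualNetwork
          l2_vns: List[VirtualNetwork]
          services: List[str]           # network service functions

      vdc = VdcSpec(
          l3_vn=VirtualNetwork("L3VNa", layer=3),
          l2_vns=[
              VirtualNetwork("L2VNx", layer=2,
                             tenant_systems=["web-vm-1", "web-vm-2"]),
              VirtualNetwork("L2VNy", layer=2, tenant_systems=["mail-vm-1"]),
              VirtualNetwork("L2VNz", layer=2, tenant_systems=["voip-vm-1"]),
          ],
          services=["firewall/gateway on L3VNa", "load balancer in L2VNx",
                    "VPN to enterprise router"],
      )

      # A hypothetical NVO3-capable orchestrator could iterate over such a
      # description to create each VN, attach the NVEs/TSs, and configure
      # the gateway and load balancer.
      for vn in [vdc.l3_vn, *vdc.l2_vns]:
          print(f"create {vn.name} (L{vn.layer}) with TSs {vn.tenant_systems}")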
         Internet                           ^ Internet
             |                              |
             ^                           +--+---+
             |                           |  GW  |
             |                           +--+---+
             |                              |
    +--------+-------+                   +--+---+
    |Firewall/Gateway+------- VPN -------+router|
    +--------+-------+                   +-+--+-+
             |                             |  |
         ....+....                         |..|
    +----: L3 VNa :--------+               LANs
  +-+-+   .........        |
  |LB |       |            |        Enterprise Site
  +-+-+       |            |
    |         |            |
 ...+...   ...+...      ...+...
: L2VNx : : L2VNy :    : L2VNz :
 .......   .......      .......
   |..|      |..|         |..|
   |  |      |  |         |  |
 Web App.  Mail App.   VoIP App.

             Provider DC Site

      Figure 2 - Virtual Data Center Abstraction View

5. Summary

   This document describes some general and potential NVO3 use cases in
   DCs. The combination of these cases gives operators the flexibility
   and capability to design more sophisticated support for various
   cloud applications.

   DC services may vary, from infrastructure as a service (IaaS), to
   platform as a service (PaaS), to software as a service (SaaS). In
   all of these services, NVO3 virtual networks are only a portion of
   the overall service.

   NVO3 uses tunneling techniques to deliver VN traffic over an IP
   network. A tunnel encapsulation protocol is necessary. An NVO3
   tunnel may, in turn, be tunneled within other intermediate tunnels
   over the Internet or other WANs.

   An NVO3 virtual network in a DC may be accessed by external users in
   a secure way. Many existing technologies can help achieve this.

   NVO3 implementations may vary. Some DC operators prefer to use a
   centralized controller to manage tenant system reachability in a
   virtual network, while other operators prefer to use distributed
   protocols to advertise the tenant system location, i.e., the NVE
   location. When a tenant network spans multiple DCs and WANs, each
   network administration domain may use different methods to
   distribute the tenant system locations; both control-plane and data-
   plane interworking are then necessary.

6. Security Considerations

   Security is a concern. DC operators need to provide a tenant with a
   secure virtual network, which means that one tenant's traffic is
   isolated from other tenants' traffic as well as from the underlay
   networks. DC operators also need to protect against a tenant
   application attacking their underlay DC network, and they need to
   protect against a tenant application attacking another tenant
   application via the DC infrastructure network; for example, a tenant
   application may attempt to generate a large volume of traffic to
   overload the DC's underlying network. An NVO3 solution has to
   address these issues.

7. IANA Considerations

   This document does not request any action from IANA.

8. References

8.1. Normative References

   [RFC7364] Narten, T., Ed., et al., "Problem Statement: Overlays for
             Network Virtualization", RFC 7364, October 2014.

   [RFC7365] Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
             Rekhter, "Framework for Data Center (DC) Network
             Virtualization", RFC 7365, October 2014.

8.2. Informative References

   [IEEE802.1Q] IEEE, "IEEE Standard for Local and metropolitan area
             networks -- Media Access Control (MAC) Bridges and Virtual
             Bridged Local Area Networks", IEEE Std 802.1Q, 2011.

   [NVO3HYVR2NVE] Li, Y., et al., "Hypervisor to NVE Control Plane
             Requirements", draft-ietf-nvo3-hpvr2nve-cp-req-05, work in
             progress.

   [NVO3ARCH] Black, D., et al., "An Architecture for Data-Center
             Network Virtualization over Layer 3 (NVO3)",
             draft-ietf-nvo3-arch-08, work in progress.
   [NVO3MCAST] Ghanwani, A., "Framework of Supporting Applications
             Specific Multicast in NVO3",
             draft-ghanwani-nvo3-app-mcast-framework-02, work in
             progress.

   [RFC1035] Mockapetris, P., "Domain Names - Implementation and
             Specification", RFC 1035, November 1987.

   [RFC3022] Srisuresh, P. and K. Egevang, "Traditional IP Network
             Address Translator (Traditional NAT)", RFC 3022, January
             2001.

   [RFC4301] Kent, S. and K. Seo, "Security Architecture for the
             Internet Protocol", RFC 4301, December 2005.

   [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", RFC 4364, February 2006.

   [RFC7348] Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework for
             Overlaying Virtualized Layer 2 Networks over Layer 3
             Networks", RFC 7348, August 2014.

   [RFC7432] Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A., and
             J. Uttaro, "BGP MPLS-Based Ethernet VPN", RFC 7432,
             February 2015.

   [RFC7637] Garg, P., Ed., and Y. Wang, Ed., "NVGRE: Network
             Virtualization Using Generic Routing Encapsulation",
             RFC 7637, September 2015.

   [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com

Contributors

   Vinay Bannai
   PayPal
   2211 N. First St,
   San Jose, CA 95131
   Phone: +1-408-967-7784
   Email: vbannai@paypal.com

   Ram Krishnan
   Brocade Communications
   San Jose, CA 95134
   Phone: +1-408-406-7890
   Email: ramk@brocade.com

   Kieran Milne
   Juniper Networks
   1133 Innovation Way
   Sunnyvale, CA 94089
   Phone: +1-408-745-2000
   Email: kmilne@juniper.net

Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, Randy Bush, Uma
   Chunduri, Eric Gray, David Allan, Joe Touch, and Olufemi Komolafe
   for their reviews, comments, and suggestions.

Authors' Addresses

   Lucy Yong
   Huawei Technologies

   Phone: +1-918-808-1918
   Email: lucy.yong@huawei.com

   Linda Dunbar
   Huawei Technologies
   5340 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com

   Mehmet Toy

   Phone: +1-856-792-2801
   E-mail: mtoy054@yahoo.com

   Aldrin Isaac
   Juniper Networks
   E-mail: aldrin.isaac@gmail.com

   Vishwas Manral

   Email: vishwas@ionosnetworks.com