Network Working Group                                            L. Yong
Internet Draft                                                 L. Dunbar
Category: Informational                                           Huawei
                                                                  M. Toy
                                                                A. Isaac
                                                        Juniper Networks
                                                               V. Manral
                                                          Ionos Networks

Expires: March 2017                                    September 1, 2016

        Use Cases for Data Center Network Virtualization Overlays

                      draft-ietf-nvo3-use-case-09

Abstract

   This document describes Data Center (DC) Network Virtualization
   over Layer 3 (NVO3) use cases that can be deployed in various data
   centers and serve different applications.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on March 3, 2017.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include
   Simplified BSD License text as described in Section 4.e of the
   Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Terminology
   2. Basic Virtual Networks in a Data Center
   3. DC Virtual Network and External Network Interconnection
      3.1. DC Virtual Network Access via the Internet
      3.2. DC VN and SP WAN VPN Interconnection
   4. DC Applications Using NVO3
      4.1. Supporting Multiple Technologies
      4.2. DC Application with Multiple Virtual Networks
      4.3. Virtualized Data Center (vDC)
   5. Summary
   6. Security Considerations
   7. IANA Considerations
   8. References
      8.1. Normative References
      8.2. Informative References
   Contributors
   Acknowledgements
   Authors' Addresses

1. Introduction

   Server virtualization has changed the Information Technology (IT)
   industry in terms of the efficiency, cost, and speed of providing
   new applications and/or services such as cloud applications.
   However, traditional Data Center (DC) networks have some limits in
   supporting cloud applications and multi-tenant networks [RFC7364].
   The goal of Network Virtualization Overlays in the DC is to
   decouple the communication among tenant systems from the DC
   physical infrastructure networks and to allow one physical network
   infrastructure to provide:

   o  Multi-tenant virtual networks, with traffic isolation among the
      virtual networks over the same physical network.

   o  Independent address spaces in individual virtual networks, such
      as MAC addresses, IP addresses, and TCP/UDP ports.

   o  Flexible Virtual Machine (VM) and/or workload placement,
      including the ability to move them from one server to another
      without requiring VM address or configuration changes, and the
      ability to perform a "hot move" with no disruption to the live
      applications running on those VMs.

   These characteristics of NVO3 help address the issues that cloud
   applications face in Data Centers [RFC7364].

   An NVO3 network may interconnect with another NVO3 virtual
   network, or with another physical network (i.e., not the physical
   network that the NVO3 network is over), via a gateway.  Examples
   of the latter use case are: 1) a DC migrating toward an NVO3
   solution in steps, where a portion of the tenant systems in a VN
   are on virtualized servers while others remain on a LAN; 2) many
   DC applications that serve Internet users who are on physical
   networks; 3) some applications that are CPU bound, such as Big
   Data analytics, and may not run on virtualized resources.  Some
   inter-VN policies can be enforced at the gateway.

   This document describes general NVO3 use cases that apply to
   various data centers.
   The three types of use cases described in this document are:

   o  Basic NVO3 virtual networks in a DC (Section 2).  All Tenant
      Systems (TS) in the virtual network are located within the same
      DC.  The individual virtual networks can be either Layer 2 (L2)
      or Layer 3 (L3).  The number of NVO3 virtual networks in a DC
      can be much higher than what traditional VLAN-based virtual
      networks [IEEE802.1Q] can support.  This case is often referred
      to as DC East-West traffic.

   o  Virtual networks that span multiple Data Centers and/or extend
      to customer premises, i.e., an NVO3 virtual network in which
      some tenant systems in a DC interconnect with another virtual
      or physical network outside the data center.  An enterprise
      customer may use a traditional carrier VPN or an IPsec tunnel
      over the Internet to communicate with its systems in the DC.
      This is described in Section 3.

   o  DC applications or services that require an advanced network
      containing several NVO3 virtual networks interconnected by
      gateways.  Three scenarios are described in Section 4:
      1) using NVO3 and other network technologies to build a tenant
      network; 2) constructing several virtual networks as a tenant
      network; 3) applying NVO3 to a virtualized DC (vDC).

   The document uses the architecture reference model defined in
   [RFC7365] to describe the use cases.

1.1. Terminology

   This document uses the terminology defined in [RFC7365] and
   [RFC4364].  Some additional terms used in the document are listed
   here.

   DMZ: Demilitarized Zone.  A computer or small sub-network that
   sits between a trusted internal network, such as a corporate
   private LAN, and an untrusted external network, such as the public
   Internet.

   DNS: Domain Name Service [RFC1035]

   NAT: Network Address Translation [RFC1631]

   Note that a virtual network in this document refers to an NVO3
   virtual network in a DC [RFC7365].

2. Basic Virtual Networks in a Data Center

   A virtual network in a DC enables communication among Tenant
   Systems (TS).  A TS can be a physical server/device or a virtual
   machine (VM) on a server, i.e., an end-device [RFC7365].  A
   Network Virtualization Edge (NVE) can be co-located with a TS,
   i.e., on the same end-device, or reside on a different device,
   e.g., a top-of-rack switch (ToR).  A virtual network has a virtual
   network identifier (which can be globally unique or locally
   significant at NVEs).

   Tenant Systems attached to the same NVE may belong to the same or
   different virtual networks.  An NVE provides tenant traffic
   forwarding/encapsulation and obtains tenant system reachability
   information from a Network Virtualization Authority (NVA)
   [NVO3ARCH].  DC operators can construct multiple separate virtual
   networks and provide each with its own address space.

   Network Virtualization Overlay in this context means that a
   virtual network is implemented with an overlay technology, i.e.,
   within a DC that has an IP infrastructure, tenant traffic is
   encapsulated at its local NVE and carried by a tunnel to another
   NVE, where the packet is decapsulated and sent to the target
   tenant system.  This architecture decouples the tenant system
   address space and configuration from those of the infrastructure,
   which provides great flexibility for VM placement and mobility.
   It also means that the transit nodes in the infrastructure are not
   aware of the existence of the virtual networks and the tenant
   systems attached to them.  The tunneled packets are carried as
   regular IP packets and are sent to NVEs.  One tunnel may carry
   traffic belonging to multiple virtual networks; a virtual network
   identifier is used for traffic demultiplexing.  A tunnel
   encapsulation protocol is necessary for an NVE to encapsulate the
   packets from Tenant Systems and to encode other information on the
   tunneled packets to support an NVO3 implementation.
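   As an illustration of the encapsulation and demultiplexing steps
   described above, the following sketch builds and parses the 8-byte
   VXLAN header defined in [RFC7348].  It is written in Python purely
   for illustration; the NVE underlay address and VNI value are
   hypothetical, and a real NVE would perform these operations in its
   forwarding plane.

   <CODE BEGINS>
   import struct
   from socket import socket, AF_INET, SOCK_DGRAM

   VXLAN_PORT = 4789          # IANA-assigned VXLAN UDP port [RFC7348]
   VXLAN_FLAGS = 0x08000000   # "I" bit set: a valid VNI is present

   def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
       """Prepend the 8-byte VXLAN header: flags, reserved bits,
       24-bit VNI, reserved byte."""
       if not 0 <= vni < 2 ** 24:
           raise ValueError("VNI must fit in 24 bits")
       return struct.pack("!II", VXLAN_FLAGS, vni << 8) + inner_frame

   def vxlan_decapsulate(packet: bytes):
       """Return (vni, inner_frame).  The VNI demultiplexes tenant
       traffic that shares one tunnel among multiple virtual
       networks."""
       _, vni_field = struct.unpack("!II", packet[:8])
       return vni_field >> 8, packet[8:]

   # Hypothetical use: NVE1 tunnels a tenant frame to NVE2 over UDP.
   tenant_frame = b"\x00" * 64                # stand-in Ethernet frame
   udp = socket(AF_INET, SOCK_DGRAM)
   udp.sendto(vxlan_encapsulate(tenant_frame, vni=5010),
              ("192.0.2.2", VXLAN_PORT))      # NVE2 underlay address
   <CODE ENDS>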
   A virtual network implemented with NVO3 may be an L2 or L3 domain.
   The virtual network can carry unicast traffic and/or multicast and
   broadcast/unknown-unicast (for L2 only) traffic from/to tenant
   systems.  There are several ways to transport virtual network BUM
   (Broadcast, Unknown-unicast, and Multicast) traffic [NVO3MCAST].

   It is worth mentioning two distinct cases regarding NVE location.
   The first is where TSs and an NVE are co-located on a single end
   host/device, which means that the NVE can be aware of a TS's state
   at any time via an internal API.  The second is where TSs and an
   NVE are not co-located, with the NVE residing on a network device;
   in this case, a protocol is necessary to allow the NVE to be aware
   of a TS's state [NVO3HYVR2NVE].

   One virtual network can provide connectivity to many TSs that
   attach to many different NVEs in a DC.  TS dynamic placement and
   mobility result in frequent changes of the binding between a TS
   and an NVE.  The TS reachability update mechanisms need to be fast
   enough that the updates do not cause any communication disruption
   or interruption.  The capability of supporting many TSs in a
   virtual network, and many more virtual networks in a DC, is
   critical for an NVO3 solution.

   If a virtual network spans multiple DC sites, one design is to
   allow the network to seamlessly span the sites without terminating
   at the DC gateway routers.  In this case, the tunnel between a
   pair of NVEs can be carried within other intermediate tunnels over
   the Internet or other WANs, or the intra-DC and inter-DC tunnels
   can be stitched together to form one tunnel between the pair of
   NVEs that are in different DC sites.  Both cases form one virtual
   network across multiple DC sites.
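   The TS-to-NVE binding discussed earlier in this section can be
   pictured as a simple mapping table in the NVA.  The sketch below
   is a minimal, hypothetical model of that state, assuming a
   centralized NVA; NVO3 does not prescribe this data structure or
   its update protocol.

   <CODE BEGINS>
   from dataclasses import dataclass, field

   @dataclass
   class VirtualNetworkMap:
       """TS address -> NVE underlay address, for one virtual
       network."""
       vni: int
       ts_to_nve: dict = field(default_factory=dict)

       def register(self, ts_addr, nve_addr):
           # Called when a TS attaches to, or live-migrates to, an
           # NVE.  For a "hot move" this update must reach all
           # interested NVEs fast enough to avoid disrupting traffic.
           self.ts_to_nve[ts_addr] = nve_addr

       def lookup(self, ts_addr):
           # An ingress NVE consults the NVA to find the egress NVE
           # (tunnel endpoint) for a destination TS.
           return self.ts_to_nve.get(ts_addr)

   vn = VirtualNetworkMap(vni=5010)
   vn.register("10.1.0.5", "192.0.2.1")   # TS behind NVE1
   vn.register("10.1.0.5", "192.0.2.7")   # same TS after a hot move
   assert vn.lookup("10.1.0.5") == "192.0.2.7"
   <CODE ENDS>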
3. DC Virtual Network and External Network Interconnection

   Many customers (enterprises or individuals) who utilize a DC
   provider's compute and storage resources to run their applications
   need to access their systems hosted in a DC through the Internet
   or Service Providers' Wide Area Networks (WANs).  A DC provider
   can construct a virtual network that provides connectivity to all
   the resources designated for a customer and allows the customer to
   access the resources via a virtual gateway (vGW).  This, in turn,
   becomes a case of interconnecting a DC virtual network and the
   network at a customer site via the Internet or a WAN.  Two use
   cases are described here.

3.1. DC Virtual Network Access via the Internet

   A customer can connect to a DC virtual network via the Internet in
   a secure way.  Figure 1 illustrates this case.  The DC virtual
   network has an instance at NVE1 and NVE2, and the two NVEs are
   connected via an IP tunnel in the Data Center.  A set of tenant
   systems is attached to NVE1 on a server.  NVE2 resides on a DC
   Gateway device.  NVE2 terminates the tunnel and uses the VNID on
   the packet to pass the packet to the corresponding vGW entity on
   the DC GW (the vGW is the default gateway for the virtual
   network).  A customer can access their systems, i.e., TS1 or TSn,
   in the DC via the Internet by using an IPsec tunnel [RFC4301].
   The IPsec tunnel is configured between the vGW and the customer
   gateway at the customer site.  Either a static route or iBGP may
   be used for prefix advertisement.  The vGW provides IPsec
   functionality such as authentication schemes and encryption; iBGP
   traffic is carried within the IPsec tunnel.  Some vGW features are
   listed below:

   o  The vGW maintains the TS/NVE mappings and advertises the TS
      prefix to the customer via a static route or iBGP.

   o  Some vGW functions, such as firewall and load balancer, can be
      performed by locally attached network appliance devices.

   o  If the virtual network in the DC uses a different address space
      than the external users, the vGW needs to provide a NAT
      function.

   o  More than one IPsec tunnel can be configured for redundancy.

   o  The vGW can be implemented on a server or VM.  In this case, IP
      tunnels or IPsec tunnels can be used over the DC
      infrastructure.

   o  DC operators need to construct a vGW for each customer.

   Server +---------------+
          |  TS1      TSn |
          |   |  ...  |   |
          |  +-+------+-+ |           Customer Site
          |  |   NVE1   | |             +-----+
          |  +----+-----+ |             | CGW |
          +-------+-------+             +--+--+
                  |                        *
              L3 Tunnel                    *
                  |                        *
    DC GW +-------+--------+          .--.   .--.
          |  +----+-----+  |         (     *     '.
          |  |   NVE2   |  |       .'      *       )
          |  +----+-----+  |      (        * Internet )
          |  +----+-----+  |       (       *        .'
          |  |   vGW    |* * * * * * * * * *    '--'
          |  +----------+  |   IPsec
          |                |   Tunnel
          +----------------+

            DC Provider Site

       Figure 1 - DC Virtual Network Access via the Internet

3.2. DC VN and SP WAN VPN Interconnection

   In this case, an enterprise customer wants to use a Service
   Provider (SP) WAN VPN [RFC4364] [RFC7432] to interconnect its
   sites with a virtual network in a DC site.  The Service Provider
   constructs a VPN for the enterprise customer.  Each enterprise
   site peers with an SP PE.  The DC provider and the VPN Service
   Provider can build a DC virtual network (VN) and a VPN
   independently, and then interconnect them via a local link or a
   tunnel between the DC GW and WAN PE devices.  The control-plane
   interconnection options between the DC and WAN are described in
   RFC 4364 [RFC4364].  With Option A, used with VRF-LITE
   [VRF-LITE], both ASBRs, i.e., the DC GW and the SP PE, maintain a
   routing/forwarding table (VRF).  With Option B, the DC ASBR and
   SP ASBR do not maintain the VRF table; they only maintain the VN
   and VPN identifier mappings, i.e., label mappings, and swap the
   label on the packets in the forwarding process.  Both Options A
   and B allow the VN and VPN to use their own identifiers, with the
   two identifiers mapped to each other at the DC GW.  With Option C,
   the VN and VPN use the same identifier, and both ASBRs perform
   tunnel stitching, i.e., tunnel segment mapping.  Each option has
   its pros and cons [RFC4364] and has been deployed in SP networks
   depending on the applications in use.  BGP is used with these
   options for route distribution between DCs and SP WANs.  Note
   that if the DC is the SP's own Data Center, the DC GW and SP PE
   can be merged into one device that performs the interworking of
   the VN and VPN within an AS.
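   The identifier handling that distinguishes Option B from the other
   options can be made concrete with a short sketch.  The following
   illustrative Python fragment models an ASBR that keeps only the
   VN-to-VPN identifier mappings and swaps them while forwarding; all
   identifier values are hypothetical.

   <CODE BEGINS>
   # Option B: no per-customer VRF on the ASBR, only an identifier
   # (label) mapping that is swapped in the forwarding path.
   vn_to_vpn = {5010: 30001}      # DC VN ID -> WAN VPN label
   vpn_to_vn = {v: k for k, v in vn_to_vpn.items()}

   def forward_dc_to_wan(vn_id, payload):
       # Swap the DC-side identifier for the WAN-side label; the
       # tenant payload itself is untouched.
       return vn_to_vpn[vn_id], payload

   def forward_wan_to_dc(vpn_label, payload):
       return vpn_to_vn[vpn_label], payload

   assert forward_dc_to_wan(5010, b"pkt") == (30001, b"pkt")
   <CODE ENDS>

   With Option A, each ASBR would instead hold a full VRF per
   customer; with Option C, the same identifier would be carried end
   to end, the ASBRs only stitching tunnel segments together.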
   The configurations above allow the enterprise networks to
   communicate with the tenant systems attached to the VN in a DC
   without interfering with the DC provider's underlying physical
   networks or with other virtual networks.  The enterprise can use
   its own address space in the VN.  The DC provider can manage which
   VM and storage elements attach to the VN.  The enterprise customer
   manages which applications run on the VMs in the VN, without
   knowing the location of the VMs in the DC.  (See Section 4 for
   more.)

   Furthermore, in this use case, the DC operator can move the VMs
   assigned to the enterprise from one server to another in the DC
   without the enterprise customer being aware, i.e., with no impact
   on the enterprise's "live" applications.  Such advanced
   technologies bring DC providers great benefits in offering cloud
   services, but also add some requirements for NVO3 [RFC7364].

4. DC Applications Using NVO3

   NVO3 technology provides DC operators with flexibility in
   designing and deploying different applications in an end-to-end
   virtualization overlay environment.  Operators no longer need to
   worry about the constraints of the DC physical network
   configuration when creating VMs and configuring a virtual
   network.  A DC provider may use NVO3 in various ways, in
   conjunction with other physical networks and/or virtual networks
   in the DC.  This section highlights some of these use cases.

4.1. Supporting Multiple Technologies

   Servers deployed in a large data center are often installed at
   different times, and they may have different capabilities and
   features.  Some servers may be virtualized, while others may not;
   some may be equipped with virtual switches, while others may not.
   For the servers equipped with hypervisor-based virtual switches,
   some may support VXLAN [RFC7348] encapsulation, some may support
   NVGRE encapsulation [RFC7637], and some may not support any
   encapsulation.  To construct a tenant network among these servers
   and the ToR switches, operators can construct one traditional
   VLAN network and two virtual networks, where one uses VXLAN
   encapsulation and the other uses NVGRE, and interconnect these
   three networks via a gateway or virtual GW.  The GW performs
   packet encapsulation/decapsulation translation between the
   networks, as sketched below.

   Another case is that some of a tenant's software consumes a large
   amount of CPU and memory, so that it makes sense to run it only
   on bare-metal servers, while other software of the tenant may be
   suitable to run on VMs.  However, the provider's DC infrastructure
   may be configured to use NVO3 to connect the VMs and VLANs
   [IEEE802.1Q] to connect the bare-metal servers.  The tenant
   network then requires interworking between NVO3 and traditional
   VLAN.
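   The translation performed by such a gateway amounts to stripping
   one encapsulation header and prepending another while mapping the
   virtual network identifiers.  The sketch below shows a
   hypothetical VXLAN-to-NVGRE translation of that kind, using the
   header layouts of [RFC7348] and [RFC7637]; the VNI-to-VSID mapping
   is assumed to be administered by the gateway operator, and the
   handling of the outer IP/UDP headers is omitted.

   <CODE BEGINS>
   import struct

   GRE_KEY_PRESENT = 0x2000    # GRE flags: Key bit set, version 0
   NVGRE_PROTO = 0x6558        # Transparent Ethernet Bridging

   def vxlan_to_nvgre(vxlan_payload, vni_to_vsid, flow_id=0):
       # Strip the 8-byte VXLAN header and recover the 24-bit VNI.
       _, vni_field = struct.unpack("!II", vxlan_payload[:8])
       vsid = vni_to_vsid[vni_field >> 8]   # operator-defined mapping
       # Prepend the 8-byte NVGRE header: GRE flags, protocol type,
       # then a key field holding the 24-bit VSID and 8-bit FlowID.
       gre = struct.pack("!HHI", GRE_KEY_PRESENT, NVGRE_PROTO,
                         (vsid << 8) | flow_id)
       return gre + vxlan_payload[8:]       # inner frame unchanged
   <CODE ENDS>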
4.2. DC Application with Multiple Virtual Networks

   A DC application may be constructed with multi-tier zones, where
   each zone has different access permissions and runs different
   applications.  For example, a three-tier zone design has a front
   zone (Web tier) with Web applications, a mid zone (application
   tier) where service applications such as credit payment or ticket
   booking run, and a back zone (database tier) with data.  External
   users are only able to communicate with the Web applications in
   the front zone; the back zone can only receive traffic from the
   application zone.  In this case, communications between the zones
   must pass through a GW/firewall.  Each zone can be implemented as
   one virtual network, and a GW/firewall can be used between two
   virtual networks, i.e., two zones.  A tunnel carrying virtual
   network traffic has to be terminated at the GW/firewall, where
   the overlay traffic is processed.

4.3. Virtualized Data Center (vDC)

   An enterprise Data Center today may deploy routers, switches, and
   network appliance devices to construct its internal network, DMZ,
   and external network access; it may have many servers and storage
   systems running various applications.  With NVO3 technology, a DC
   provider can construct a virtualized DC over its physical DC
   infrastructure and offer a virtual DC service to enterprise
   customers.  A vDC at the DC provider site provides the same
   capability as the physical DC at a customer site.  A customer
   manages their own applications running in their vDC.  A DC
   provider can further offer different network service functions to
   the customer.  The network service functions may include
   firewalls, DNS, load balancers, gateways, etc.

   Figure 2 below illustrates one such scenario.  For simplicity, it
   only shows the L3 VN and L2 VNs in the abstract.  In this example,
   the DC provider's operators create several L2 VNs (L2VNx, L2VNy,
   L2VNz) to group the tenant systems together on a per-application
   basis, and one L3 VN (L3VNa) for the internal routing.  A network
   firewall and gateway runs on a VM or server that connects to
   L3VNa and is used for inbound and outbound traffic processing.  A
   load balancer (LB) is used in L2VNx.  A VPN is also built between
   the gateway and the enterprise router.  The enterprise customer
   runs Web/Mail/Voice applications on VMs at the provider DC site,
   which may be spread across many servers.  The users at the
   enterprise site access the applications running in the provider
   DC site via the VPN; Internet users access these applications via
   the gateway/firewall at the provider DC.

   The enterprise customer decides which applications should be
   accessible only via the intranet and which should be accessible
   via both the intranet and the Internet, and configures the proper
   security policy and gateway functions at the firewall/gateway.
   Furthermore, an enterprise customer may want multiple zones in a
   vDC (see Section 4.2) for security and/or the ability to set
   different QoS levels for the different applications.

   The vDC use case requires the NVO3 solution to provide DC
   operators with an easy and quick way to create a VN and NVEs for
   any vDC design, to allocate TSs and assign TSs to the
   corresponding VN, and to depict the vDC topology and
   manage/configure individual elements in the vDC in a secure way.

       Internet                       ^ Internet
                                      |
          ^                        +--+---+
          |                        |  GW  |
          |                        +--+---+
          |                           |
   +------+---------+              +--+---+
   |Firewall/Gateway+-----VPN------+router|
   +------+---------+              +-+--+-+
          |                          |  |
      ....+....                      |..|
   +--: L3 VNa :---------+           LANs
 +-+-+ .........         |
 |LB |     |             |        Enterprise Site
 +-+-+     |             |
  ...+...  ...+...    ...+...
 : L2VNx : : L2VNy : : L2VNz :
  .......  .......    .......
   |..|      |..|       |..|
   |  |      |  |       |  |
  Web App.  Mail App.  VoIP App.

          Provider DC Site

        Figure 2 - Virtual Data Center (vDC)
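   To make the provisioning requirement above concrete, the sketch
   below shows a hypothetical controller API creating the virtual
   networks of Figure 2 and attaching tenant systems to them.  NVO3
   does not define a management interface, so the class and method
   names here are invented for illustration; a real controller's API
   will differ.

   <CODE BEGINS>
   class VdcController:
       """Illustrative stand-in for a DC provider's vDC controller."""
       def __init__(self):
           self.vns = {}
           self.attachments = []

       def create_vn(self, name, layer, vni):
           self.vns[name] = {"layer": layer, "vni": vni}

       def attach(self, ts, vn):
           self.attachments.append((ts, vn))

   ctl = VdcController()
   ctl.create_vn("L3VNa", layer=3, vni=100)    # internal routing VN
   for name, vni in (("L2VNx", 201), ("L2VNy", 202), ("L2VNz", 203)):
       ctl.create_vn(name, layer=2, vni=vni)   # per-application L2 VNs
   ctl.attach("web-vm-1", "L2VNx")
   ctl.attach("mail-vm-1", "L2VNy")
   ctl.attach("voip-vm-1", "L2VNz")
   ctl.attach("fw-gw-vm", "L3VNa")             # firewall/gateway VM
   ctl.attach("lb-vm", "L2VNx")                # load balancer (LB)
   <CODE ENDS>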
5. Summary

   This document describes some general and potential NVO3 use cases
   in DCs.  The combination of these use cases gives operators the
   flexibility and capability to design more sophisticated cases for
   various cloud applications.

   DC services may vary, from Infrastructure as a Service (IaaS) to
   Platform as a Service (PaaS) to Software as a Service (SaaS).  In
   these services, NVO3 virtual networks are just one portion of the
   overall service.

   NVO3 uses tunneling techniques to deliver VN traffic over an IP
   network.  A tunnel encapsulation protocol is necessary.  An NVO3
   tunnel may in turn be tunneled over other intermediate tunnels
   over the Internet or other WANs.

   An NVO3 virtual network in a DC may be accessed by external users
   in a secure way.  Many existing technologies can help achieve
   this.

   NVO3 implementations may vary.  Some DC operators prefer to use a
   centralized controller to manage tenant system reachability in a
   virtual network, while other operators prefer to use distributed
   protocols to advertise the tenant system locations, i.e., the NVE
   locations.  When a tenant network spans multiple DCs and WANs,
   each network administration domain may use different methods to
   distribute the tenant system locations.  In that case, both
   control-plane and data-plane interworking are necessary.

6. Security Considerations

   Security is a concern.  DC operators need to provide a tenant
   with a secured virtual network, which means that one tenant's
   traffic is isolated from other tenants' traffic as well as from
   the underlay networks.  DC operators also need to prevent a
   tenant application from attacking their underlay DC network;
   further, they need to protect against a tenant application
   attacking another tenant application via the DC infrastructure
   network.  For example, a tenant application may attempt to
   generate a large volume of traffic to overload the DC's
   underlying network.  An NVO3 solution has to address these
   issues.

7. IANA Considerations

   This document does not request any action from IANA.

8. References

8.1. Normative References

   [RFC7364]  Narten, T., et al., "Problem Statement: Overlays for
              Network Virtualization", RFC 7364, October 2014.

   [RFC7365]  Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
              Rekhter, "Framework for Data Center (DC) Network
              Virtualization", RFC 7365, October 2014.

8.2. Informative References

   [IEEE802.1Q]  IEEE, "IEEE Standard for Local and metropolitan
              area networks -- Media Access Control (MAC) Bridges
              and Virtual Bridged Local Area Networks", IEEE Std
              802.1Q, 2011.

   [NVO3HYVR2NVE]  Li, Y., et al., "Hypervisor to NVE Control Plane
              Requirements", draft-ietf-nvo3-hpvr2nve-cp-req-01,
              work in progress.

   [NVO3ARCH]  Black, D., et al., "An Architecture for Overlay
              Networks (NVO3)", draft-ietf-nvo3-arch-02, work in
              progress.

   [NVO3MCAST]  Ghanwani, A., "Framework of Supporting Applications
              Specific Multicast in NVO3",
              draft-ghanwani-nvo3-app-mcast-framework-02, work in
              progress.

   [RFC1035]  Mockapetris, P., "Domain Names - Implementation and
              Specification", RFC 1035, November 1987.

   [RFC1631]  Egevang, K. and P. Francis, "The IP Network Address
              Translator (NAT)", RFC 1631, May 1994.

   [RFC4301]  Kent, S. and K. Seo, "Security Architecture for the
              Internet Protocol", RFC 4301, December 2005.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", RFC 4364, February 2006.
   [RFC7348]  Mahalingam, M., Dutt, D., et al., "Virtual eXtensible
              Local Area Network (VXLAN): A Framework for Overlaying
              Virtualized Layer 2 Networks over Layer 3 Networks",
              RFC 7348, August 2014.

   [RFC7432]  Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A.,
              and J. Uttaro, "BGP MPLS-Based Ethernet VPN", RFC
              7432, February 2015.

   [RFC7637]  Garg, P. and Y. Wang, "NVGRE: Network Virtualization
              Using Generic Routing Encapsulation", RFC 7637,
              September 2015.

   [VRF-LITE]  Cisco, "Configuring VRF-lite", http://www.cisco.com

Contributors

   Vinay Bannai
   PayPal
   2211 N. First St,
   San Jose, CA 95131
   Phone: +1-408-967-7784
   Email: vbannai@paypal.com

   Ram Krishnan
   Brocade Communications
   San Jose, CA 95134
   Phone: +1-408-406-7890
   Email: ramk@brocade.com

   Kieran Milne
   Juniper Networks
   1133 Innovation Way
   Sunnyvale, CA 94089
   Phone: +1-408-745-2000
   Email: kmilne@juniper.net

Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David
   Black, Pedro Marques, Mike McBride, David McDysan, Randy Bush,
   Uma Chunduri, Eric Gray, David Allan, and Joe Touch for their
   review, comments, and suggestions.

Authors' Addresses

   Lucy Yong
   Huawei Technologies

   Phone: +1-918-808-1918
   Email: lucy.yong@huawei.com

   Linda Dunbar
   Huawei Technologies
   5340 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com

   Mehmet Toy

   Phone: +1-856-792-2801
   Email: mtoy054@yahoo.com

   Aldrin Isaac
   Juniper Networks
   Email: aldrin.isaac@gmail.com

   Vishwas Manral
   Ionos Networks
   Email: vishwas@ionosnetworks.com