idnits 2.17.1 draft-ietf-nvo3-use-case-06.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

     No issues found here.

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line does
     not match the current year

  -- The document date (August 4, 2015) is 3159 days in the past.  Is this
     intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

  == Outdated reference: A later version (-17) exists of
     draft-ietf-nvo3-hpvr2nve-cp-req-01

  == Outdated reference: A later version (-08) exists of
     draft-sridharan-virtualization-nvgre-07

  == Outdated reference: A later version (-08) exists of
     draft-ietf-nvo3-arch-02

     Summary: 0 errors (**), 0 flaws (~~), 4 warnings (==), 1 comment (--).

     Run idnits with the --verbose option for more detailed information
     about the items above.

--------------------------------------------------------------------------------

Network Working Group                                             L. Yong
Internet Draft                                                      Huawei
Category: Informational                                             M. Toy
                                                                   Comcast
                                                                  A. Isaac
                                                                 Bloomberg
                                                                 V. Manral
                                                            Ionos Networks
                                                                 L. Dunbar
                                                                    Huawei

Expires: February 2016                                      August 4, 2015

        Use Cases for Data Center Network Virtualization Overlays

                        draft-ietf-nvo3-use-case-06

Abstract

   This document describes Data Center (DC) Network Virtualization over
   Layer 3 (NVO3) use cases that can be deployed in various data
   centers and serve different applications.

Status of this Memo

   This Internet-Draft is submitted to the IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on February 5, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Terminology
   2. Basic Virtual Networks in a Data Center
   3. DC Virtual Network and External Network Interconnection
      3.1. DC Virtual Network Access via Internet
      3.2. DC VN and SP WAN VPN Interconnection
   4. DC Applications Using NVO3
      4.1. Supporting Multiple Technologies and Applications
      4.2. Tenant Network with Multiple Subnets
      4.3. Virtualized Data Center (vDC)
   5. Summary
   6. Security Considerations
   7. IANA Considerations
   8. References
      8.1. Normative References
      8.2. Informative References
   Contributors
   Acknowledgements
   Authors' Addresses

1. Introduction

   Server virtualization has changed the Information Technology (IT)
   industry in terms of the efficiency, cost, and speed of providing
   new applications and services.  However, traditional Data Center
   (DC) networks have limits in supporting cloud applications and
   multi-tenant networks [RFC7364].  The goal of Network Virtualization
   Overlays in the DC is to decouple the communication among tenant
   systems from the DC physical infrastructure networks and to allow
   one physical network infrastructure to provide:

   o  Multi-tenant virtual networks and traffic isolation among the
      virtual networks over the same physical network.

   o  Independent address spaces in individual virtual networks, such
      as MAC, IP, and TCP/UDP.

   o  Flexible Virtual Machine (VM) and/or workload placement,
      including the ability to move VMs from server to server without
      requiring VM address or configuration changes, and the ability to
      perform a hot move with no disruption to the live applications
      running on those VMs.

   These characteristics of NVO3 help address the issues that cloud
   applications face in data centers [RFC7364].

   An NVO3 network can interconnect with another physical network,
   i.e., one that is not the physical network over which the NVO3
   network runs.  For example: 1) DCs that migrate toward an NVO3
   solution will do so in steps; 2) many DC applications serve Internet
   cloud users, who are on physical networks; 3) some applications are
   CPU bound, such as Big Data analytics, and may not run on
   virtualized resources.

   This document describes general NVO3 use cases that apply to various
   data centers.  The three types of use cases described here are:

   o  Basic NVO3 virtual networks in a DC (Section 2).  All Tenant
      Systems (TSs) in such virtual networks are located within one DC.
      The individual virtual networks can be either Layer 2 (L2) or
      Layer 3 (L3).
      The number of virtual networks that NVO3 can support in a DC is
      much higher than what traditional VLAN-based virtual networks
      [IEEE 802.1Q] can support.  This case is often referred to as DC
      East-West traffic.

   o  Virtual networks that span multiple data centers and/or extend to
      customer premises, i.e., a virtual network that connects some
      tenant systems in a DC and interconnects with another virtual or
      physical network outside the data center.  An enterprise customer
      may use a traditional carrier VPN or an IPsec tunnel over the
      Internet to communicate with its systems in the DC.  This is
      described in Section 3.

   o  DC applications or services that may use NVO3 (Section 4).  Three
      scenarios are described: 1) using NVO3 together with other
      network technologies to build a tenant network; 2) constructing
      several virtual networks as one tenant network; 3) applying NVO3
      to a virtualized DC (vDC).

   The document uses the architecture reference model defined in
   [RFC7365] to describe the use cases.

1.1. Terminology

   This document uses the terminology defined in [RFC7365] and
   [RFC4364].  Some additional terms used in the document are listed
   here.

   DMZ: Demilitarized Zone.  A computer or small sub-network that sits
   between a trusted internal network, such as a corporate private LAN,
   and an untrusted external network, such as the public Internet.

   DNS: Domain Name Service

   NAT: Network Address Translation

   Note that a virtual network in this document is a virtual network in
   a DC that is implemented with NVO3 technology.

2. Basic Virtual Networks in a Data Center

   A virtual network in a DC enables communication among Tenant
   Systems (TSs).  A TS can be a physical server/device or a virtual
   machine (VM) on a server, i.e., an end device [RFC7365].  A Network
   Virtualization Edge (NVE) can be co-located with a TS, i.e., on the
   same end device, or reside on a different device, e.g., a top-of-
   rack switch (ToR).  A virtual network has a virtual network
   identifier, which can be globally unique or locally significant to
   the NVEs.

   Tenant Systems attached to the same NVE may belong to the same or
   different virtual networks.  An NVE provides tenant traffic
   forwarding/encapsulation and obtains tenant system reachability
   information from a Network Virtualization Authority (NVA)
   [NVO3ARCH].

   DC operators can construct many virtual networks that have no
   communication between them at all.  In this case, each virtual
   network can have its own address spaces, such as MAC and IP.  DC
   operators can also construct multiple virtual networks in such a way
   that policies are enforced when the TSs in one virtual network
   communicate with the TSs in other virtual networks.  This is
   referred to as a Distributed Gateway [NVO3ARCH].

   A Tenant System can be configured with one or multiple addresses and
   participate in multiple virtual networks, i.e., use the same or a
   different address in different virtual networks.  For example, a
   Tenant System can be a NAT gateway or a firewall that connects to
   more than one virtual network.

   Network Virtualization Overlay in this context means that a virtual
   network is implemented with an overlay technology, i.e., tenant
   traffic is encapsulated at its local NVE and carried by a tunnel
   over the DC IP network to another NVE, where the packet is
   decapsulated before being sent to the target tenant system.  This
   architecture decouples the tenant system address space and
   configuration from those of the infrastructure, which brings great
   flexibility for VM placement and mobility.  As a result, the transit
   nodes in the infrastructure are not aware of the existence of the
   virtual networks.  One tunnel may carry traffic belonging to
   different virtual networks; a virtual network identifier is used for
   traffic demultiplexing.
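   As an illustration only, the following sketch shows one possible
   encapsulation and demultiplexing behavior using a VXLAN-style header
   [RFC7348]; NVO3 does not mandate any particular encapsulation, and
   the table contents and function names below are assumptions of this
   example rather than part of any NVO3 specification.

     # Illustrative sketch: VXLAN-style encapsulation [RFC7348] at an
     # ingress NVE and VNI-based demultiplexing at the egress NVE.
     import struct

     VXLAN_FLAGS = 0x08        # "I" bit set: the VNI field is valid
     VXLAN_UDP_PORT = 4789     # IANA-assigned VXLAN destination port

     def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
         """Prepend an 8-byte VXLAN header carrying the 24-bit VNI."""
         header = struct.pack("!II", VXLAN_FLAGS << 24, vni << 8)
         return header + inner_frame   # outer UDP/IP added by the stack

     def vxlan_decapsulate(payload: bytes):
         """Return (vni, inner_frame) from a received VXLAN payload."""
         _flags_word, vni_word = struct.unpack("!II", payload[:8])
         return vni_word >> 8, payload[8:]

     # Egress NVE: demultiplex on the VNI, then forward the inner frame
     # in the corresponding tenant context (values are hypothetical).
     vni_to_virtual_network = {10001: "tenant-A-L2VN",
                               10002: "tenant-B-L2VN"}

     def egress_demux(payload: bytes):
         vni, frame = vxlan_decapsulate(payload)
         vn = vni_to_virtual_network.get(vni)
         if vn is None:
             return None               # unknown VNI: drop
         return vn, frame              # deliver in the tenant's context

   The same pattern applies to other encapsulations; only the header
   layout and the identifier field change.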
   A virtual network may be an L2 or L3 domain.  The TSs attached to an
   NVE can belong to different virtual networks, either L2 or L3.  A
   virtual network can carry unicast traffic and/or
   broadcast/multicast/unknown traffic from/to tenant systems.  There
   are several ways to transport virtual network BUM traffic
   [NVO3MCAST].

   It is worth mentioning two distinct cases regarding NVE location.
   The first is where the TSs and the NVE are co-located on the same
   end device, which means that the NVE can be aware of the TS state at
   any time via an internal API.  The second is where the TSs and the
   NVE reside on different devices connected via a wire; in this case,
   a protocol is necessary for the NVE to learn the TS state
   [NVO3HYVR2NVE].

   One virtual network can provide connectivity to many TSs that attach
   to many different NVEs in a DC.  TS dynamic placement and mobility
   result in frequent changes of the binding between a TS and an NVE.
   The TS reachability update mechanisms need to be fast enough that
   the updates do not cause any service interruption.  The capability
   of supporting many TSs in a virtual network, and many more virtual
   networks in a DC, is critical for an NVO3 solution.

   If a virtual network spans multiple DC sites, one design is to allow
   the network to span the sites seamlessly, without termination at the
   DC gateway routers.  In this case, the tunnel between a pair of NVEs
   can be carried within other intermediate tunnels over the Internet
   or other WANs, or the intra-DC and inter-DC tunnels can be stitched
   together to form one tunnel between the pair of NVEs that are in
   different DC sites.  Both cases form one virtual network across
   multiple DC sites.
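   The TS-to-NVE binding discussed above can be pictured as a simple
   per-VN mapping table of the kind an NVA might distribute to NVEs.
   The structure, field names, and update call below are hypothetical
   illustrations, not structures defined by [NVO3ARCH].

     # Hypothetical per-VN mapping distributed by an NVA:
     # tenant address -> (egress NVE underlay address, VNID).
     from dataclasses import dataclass
     from typing import Optional

     @dataclass
     class Binding:
         nve_ip: str     # underlay address of the NVE the TS is behind
         vnid: int       # virtual network identifier used on the wire

     # One table per virtual network, keyed by tenant MAC (L2 VN case).
     l2vn_table = {
         "00:11:22:33:44:55": Binding("192.0.2.11", 10001),
         "00:11:22:33:44:66": Binding("192.0.2.12", 10001),
     }

     def ingress_lookup(dst_mac: str) -> Optional[Binding]:
         """Ingress NVE: find the tunnel endpoint for a destination TS."""
         return l2vn_table.get(dst_mac)

     def ts_moved(mac: str, new_nve_ip: str) -> None:
         """Update pushed when a VM is live-migrated to another server;
         propagating it quickly keeps the move non-disruptive."""
         l2vn_table[mac] = Binding(new_nve_ip, l2vn_table[mac].vnid)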
3. DC Virtual Network and External Network Interconnection

   Customers (an enterprise or individuals) who utilize a DC provider's
   compute and storage resources to run their applications need to
   access their systems hosted in the DC through the Internet or
   Service Providers' Wide Area Networks (WANs).  A DC provider can
   construct a virtual network that provides connectivity to all the
   resources designated for a customer and allows the customer to
   access those resources via a virtual gateway (vGW).  This, in turn,
   becomes a case of interconnecting a DC virtual network and the
   network at the customer site(s) via the Internet or WANs.  Two use
   cases are described here.

3.1. DC Virtual Network Access via Internet

   A customer can connect to a DC virtual network via the Internet in a
   secure way.  Figure 1 illustrates this case.  A virtual network is
   configured on NVE1 and NVE2, and the two NVEs are connected via an
   IP tunnel in the data center.  A set of tenant systems is attached
   to NVE1 on a server.  NVE2 resides on a DC gateway device.  NVE2
   terminates the tunnel and uses the VNID on the packet to pass the
   packet to the corresponding vGW entity on the DC GW.  A customer can
   access its systems, i.e., TS1 or TSn, in the DC via the Internet by
   using an IPsec tunnel [RFC4301].  The IPsec tunnel is configured
   between the vGW and the customer gateway at the customer site.
   Either static routes or iBGP may be used for route updates.  The vGW
   provides IPsec functionality such as authentication and encryption;
   the iBGP session is carried within the IPsec tunnel.  Some vGW
   features are listed below:

   o  Some vGW functions, such as firewall and load balancer, can be
      performed by locally attached network appliance devices.

   o  If the virtual network in the DC uses a different address space
      than the external users, the vGW needs to provide a NAT function.

   o  More than one IPsec tunnel can be configured for redundancy.

   o  The vGW can be implemented on a server or VM.  In this case, IP
      tunnels or IPsec tunnels can be used over the DC infrastructure.

   o  DC operators need to construct a vGW for each customer.

     Server+---------------+
           |   TS1   TSn   |
           |    |...|      |
           |  +-+---+-+    |          Customer Site
           |  | NVE1  |    |            +-----+
           |  +---+---+    |            | CGW |
           +------+--------+            +--+--+
                  |                        *
              L3 Tunnel                    *
                  |                        *
    DC GW  +------+---------+           .--.  .--.
           |  +---+---+     |          (    '*   '.--.
           |  | NVE2  |     |       .-.'       *       )
           |  +---+---+     |      (            *  Internet )
           |  +---+---+.    |       (               *      /
           |  |  vGW  | * * * * * * * * * * * * * * '-'  '-'
           |  +-------+     |      |  IPsec        \../ \.--/'
           |  +--------+    |        Tunnel
           +----------------+

             DC Provider Site

       Figure 1 - DC Virtual Network Access via Internet

3.2. DC VN and SP WAN VPN Interconnection

   In this case, an enterprise customer wants to use a Service Provider
   (SP) WAN VPN [RFC4364] [RFC7432] to interconnect its sites and a
   virtual network in a DC site.  The Service Provider constructs a VPN
   for the enterprise customer.  Each enterprise site peers with an SP
   PE.  The DC provider and the VPN service provider can build a DC
   virtual network (VN) and a VPN independently and then interconnect
   the VN and the VPN via a local link, or a tunnel, between the DC GW
   and the WAN PE devices.  The control-plane interconnection options
   between the VN and the VPN are described in RFC 4364 [RFC4364].  In
   Option A with VRF-LITE [VRF-LITE], both ASBRs, i.e., the DC GW and
   the SP PE, maintain a routing/forwarding table and perform a table
   lookup when forwarding.  In Option B, the DC ASBR and the SP ASBR do
   not maintain the forwarding table; they only maintain the VN and VPN
   identifier mapping and swap the identifiers on the packet in the
   forwarding process.  Both Options A and B require tunnel
   termination.  In Option C, the VN and VPN use the same identifier,
   and both ASBRs perform tunnel stitching, i.e., change the tunnel
   endpoints.  Each option has its pros and cons (see RFC 4364) and has
   been deployed in SP networks depending on the application.  BGP can
   be used in these options for route distribution.  Note that if the
   DC is the SP's data center, the DC GW and the SP PE in this case can
   be merged into one device that performs the interworking between the
   VN and the VPN.
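   The identifier-swapping behavior of Option B can be pictured with a
   minimal sketch.  The mapping values and function names below are
   assumptions made for illustration; the actual label exchange is
   performed by BGP as described in [RFC4364].

     # Illustrative sketch of Option B-style interworking at the
     # DC GW / ASBR: no per-tenant forwarding table, only a mapping
     # between the DC-side VNID and the WAN-side VPN label.
     vnid_to_vpn_label = {10001: 30001, 10002: 30002}
     vpn_label_to_vnid = {lbl: vn for vn, lbl in vnid_to_vpn_label.items()}

     def dc_to_wan(vnid: int, payload: bytes):
         """Swap the DC-side VNID for the WAN-side VPN label."""
         return vnid_to_vpn_label[vnid], payload

     def wan_to_dc(vpn_label: int, payload: bytes):
         """Swap the VPN label back to the VNID toward the DC."""
         return vpn_label_to_vnid[vpn_label], payload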
   This configuration allows the enterprise networks to communicate
   with the tenant systems attached to the VN in a DC provider site
   without interfering with the DC provider's underlying physical
   networks and the other virtual networks in the DC.  The enterprise
   can use its own address space in the VN.  The DC provider can manage
   which VMs and storage are attached to the VN.  The enterprise
   customer manages which applications run on the VMs in the VN without
   knowledge of the VMs' locations in the DC.  (See Section 4 for
   more.)

   Furthermore, in this use case, the DC operator can move the VMs
   assigned to the enterprise from one server to another in the DC
   without the enterprise customer being aware of it, i.e., with no
   impact on the enterprise's "live" applications running on these
   resources.  Such advanced technologies bring DC providers great
   benefits in offering cloud applications but add some requirements
   for NVO3 [RFC7364] as well.

4. DC Applications Using NVO3

   NVO3 technology provides DC operators with flexibility in designing
   and deploying different applications in an end-to-end virtualization
   overlay environment.  The operators no longer need to worry about
   the constraints of the DC physical network configuration when
   creating VMs and configuring a virtual network.  A DC provider may
   use NVO3 in various ways, and may also use it in conjunction with
   other physical networks in the DC for various reasons.  This section
   highlights some use cases.

4.1. Supporting Multiple Technologies and Applications

   The servers deployed in a large data center are most likely rolled
   in at different times and may have different capacities/features.
   Some servers may be virtualized, some may not; some may be equipped
   with virtual switches, some may not.  For the ones equipped with
   hypervisor-based virtual switches, some may support VXLAN [RFC7348]
   encapsulation, some may support NVGRE encapsulation [NVGRE], and
   some may not support any type of encapsulation.  To construct a
   tenant network among these servers and the ToR switches, operators
   can construct one NVO3 virtual network and one traditional VLAN
   network, or two virtual networks where one uses VXLAN encapsulation
   and the other uses NVGRE.

   In these cases, a gateway device or virtual gateway is used to
   participate in both virtual networks.  It performs the packet
   encapsulation/decapsulation translation and may also perform address
   translation, etc.
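   A minimal sketch of such a translation gateway is shown below,
   assuming a VXLAN [RFC7348] virtual network on one side and a
   traditional 802.1Q VLAN on the other; the VNI-to-VLAN mapping and
   function name are illustrative assumptions only.

     # Conceptual sketch of a gateway participating in two virtual
     # networks: decapsulate VXLAN traffic and re-tag the inner frame
     # into an 802.1Q VLAN.
     import struct

     vni_to_vlan = {10001: 100, 10002: 200}   # per-gateway policy

     def vxlan_to_vlan(vxlan_payload: bytes) -> bytes:
         _flags, vni_word = struct.unpack("!II", vxlan_payload[:8])
         vni, inner = vni_word >> 8, vxlan_payload[8:]
         tag = struct.pack("!HH", 0x8100, vni_to_vlan[vni])
         # Insert the 802.1Q tag after the destination and source MACs.
         return inner[:12] + tag + inner[12:]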
   A data center may also be constructed with multi-tier zones.  Each
   zone has different access permissions and runs different
   applications.  For example, a three-tier zone design has a front
   zone (Web tier) with Web applications, a mid zone (application tier)
   with service applications such as payment and booking, and a back
   zone (database tier) with data.  External users are only able to
   communicate with the Web applications in the front zone.  In this
   case, the communication between the zones must pass through a
   security gateway/firewall.  One virtual network can be configured in
   each zone, and a gateway is used to interconnect two virtual
   networks, i.e., two zones.  If individual zones use different
   implementations, the gateway needs to support those implementations
   as well.

4.2. Tenant Network with Multiple Subnets

   A tenant network may contain multiple subnets.  The DC physical
   network needs to support the connectivity for many such tenant
   networks.  The inter-subnet policies may be placed only at
   designated gateway devices.  Such a design requires the inter-subnet
   traffic to be sent to one of the gateways first for policy checking,
   which may cause traffic hairpinning at the gateway in a DC.  It is
   desirable for an NVE to be able to hold some policies and forward
   inter-subnet traffic directly.  To reduce the burden on the NVE, a
   hybrid design may be deployed, i.e., an NVE performs forwarding for
   selected inter-subnet traffic and a designated GW performs it for
   the rest.  For example, each NVE performs inter-subnet forwarding
   within a tenant, and the designated GW is used for inter-subnet
   traffic to/from different tenant networks.

   A tenant network may span multiple data centers in different
   locations.  DC operators may configure an L2 VN within each DC and
   an L3 VN between DCs for such a tenant network.  For this
   configuration, the virtual L2/L3 gateway can be implemented on the
   DC GW device.  Figure 2 illustrates this configuration.

   Figure 2 depicts two DC sites.  Site A constructs one L2 VN, say
   L2VNa, on NVE1, NVE2, and NVE5.  NVE1 and NVE2 reside on the servers
   that host multiple tenant systems.  NVE5 resides on the DC GW
   device.  Site Z has a similar configuration, with L2VNz on NVE3,
   NVE4, and NVE6.  One L3 VN, say L3VNx, is configured on NVE5 at site
   A and NVE6 at site Z.  An internal Virtual Interface of Routing and
   Bridging (VIRB) is used between the L2VNI and the L3VNI on NVE5 and
   NVE6, respectively.  The L2VNI is the MAC/NVE mapping table, and the
   L3VNI is the IP prefix/NVE mapping table.  A packet arriving at NVE5
   from L2VNa will be decapsulated and converted into an IP packet, and
   then encapsulated and sent to site Z.  The policies can be checked
   at the VIRB.

   Note that L2VNa, L2VNz, and L3VNx in Figure 2 are NVO3 virtual
   networks.

   NVE5/DCGW+------------+                   +-----------+ NVE6/DCGW
            |  +-----+   | '''''''''''''''' |  +-----+   |
            |  |L3VNI+----+'     L3VNx     '+---+L3VNI|  |
            |  +--+--+   | '''''''''''''''' |  +--+--+   |
            |     |VIRB  |                  |  VIRB|     |
            |  +--+---+  |                  | +---+--+   |
            |  |L2VNIs|  |                  | |L2VNIs|   |
            |  +--+---+  |                  | +---+--+   |
            +----+-------+                  +------+-----+
            ''''|''''''''''                ''''''|'''''''
           '    L2VNa     '               '    L2VNz     '
   NVE1/S ''/'''''''''\'' NVE2/S  NVE3/S '''/'''''''\''' NVE4/S
   +-----+---+  +----+----+       +------+--+  +----+----+
   | +--+--+ |  | +--+--+ |       | +---+-+ |  | +--+--+ |
   | |L2VNI| |  | |L2VNI| |       | |L2VNI| |  | |L2VNI| |
   | ++---++ |  | ++---++ |       | ++---++ |  | ++---++ |
   +--+---+--+  +--+---+--+       +--+---+--+  +--+---+--+
      |...|        |...|             |...|        |...|

      Tenant Systems                 Tenant Systems

        DC Site A                      DC Site Z

       Figure 2 - Tenant Virtual Network with Bridging/Routing
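   The bridging/routing decision at the VIRB described above can be
   sketched as follows.  The table contents, the router-MAC convention,
   and the function name are assumptions of this example; the draft
   only states that the L2VNI holds MAC/NVE mappings and the L3VNI
   holds IP prefix/NVE mappings.

     # Illustrative sketch of the VIRB decision on NVE5/NVE6 in
     # Figure 2: bridge within the local L2 VN, or route via the L3 VN
     # when the frame is addressed to the VIRB's router MAC.
     import ipaddress
     from typing import Optional

     l2vni_mac_table = {"00:11:22:33:44:55": "NVE1"}    # MAC -> NVE
     l3vni_prefix_table = {"198.51.100.0/24": "NVE6"}   # prefix -> NVE
     VIRB_ROUTER_MAC = "02:00:00:00:00:01"              # assumed

     def forward(dst_mac: str, dst_ip: str) -> Optional[str]:
         if dst_mac != VIRB_ROUTER_MAC:
             return l2vni_mac_table.get(dst_mac)        # bridge in L2VNa
         # Routed case: longest-prefix match in the L3VNx table.
         addr = ipaddress.ip_address(dst_ip)
         best, best_len = None, -1
         for prefix, nve in l3vni_prefix_table.items():
             net = ipaddress.ip_network(prefix)
             if addr in net and net.prefixlen > best_len:
                 best, best_len = nve, net.prefixlen
         return best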
4.3. Virtualized Data Center (vDC)

   An enterprise data center today may deploy routers, switches, and
   network appliance devices to construct its internal network, DMZ,
   and external network access; it may have many servers and storage
   systems running various applications.  With NVO3 technology, a DC
   provider can construct a virtualized DC over its DC infrastructure
   and offer a virtual DC service to enterprise customers.  A vDC at
   the DC provider site provides the same capabilities as a physical DC
   at the customer site.  A customer manages what applications run in
   its vDC and how.  The DC provider can further offer different
   network service functions to a vDC.  The network service functions
   may include firewall, DNS, load balancer, gateway, etc.

   Figure 3 below illustrates one scenario.  For simplicity, it only
   shows the L3 VN and L2 VNs in the abstract.  In this example, DC
   provider operators create several L2 VNs (L2VNx, L2VNy, L2VNz) to
   group the tenant systems together on a per-application basis, and
   create one L3 VN, e.g., L3VNa, for the internal routing.  A network
   function, firewall and gateway, runs on a VM or server that connects
   to L3VNa and is used for inbound and outbound traffic processing.  A
   load balancer (LB) is used in L2VNx.  A VPN is also built between
   the gateway and the enterprise router.  The enterprise customer runs
   Web/Mail/Voice applications on VMs at the provider DC site; these
   may be spread out over many servers.  The users at the enterprise
   site access the applications running in the provider DC site via the
   VPN; Internet users access these applications via the
   gateway/firewall at the provider DC.

   The enterprise customer decides which applications are accessed via
   the intranet only and which via both the intranet and extranet, and
   configures the proper security policies and gateway functions at the
   firewall/gateway.  Furthermore, an enterprise customer may want
   multiple zones in a vDC (see Section 4.1) for security and/or may
   set different QoS levels for the different applications.

   The vDC use case requires the NVO3 solution to provide the DC
   operators with an easy and quick way to create a VN and NVEs for any
   vDC design, to allocate TSs and assign them to the corresponding VN,
   and to illustrate the vDC topology and manage/configure individual
   elements in the vDC via that topology.

       Internet                       ^ Internet
          |                           |
          ^                        +--+---+
          |                        |  GW  |
          |                        +--+---+
          |                           |
   +------+---------+             +---+----+
   |Firewall/Gateway+--- VPN -----+ router |
   +------+---------+             +-+--+---+
          |                         |  |
       ...+....                     |..|
   +-------: L3 VNa :---------+     LANs
   +-+-+    ........          |
   |LB |       |              |    Enterprise Site
   +-+-+       |              |
    ...+...   ...+...     ...+...
   : L2VNx : : L2VNy :   : L2VNz :
    .......   .......     .......
     |..|      |..|        |..|
     |  |      |  |        |  |
    Web Apps  Mail Apps   VoIP Apps

          Provider DC Site

       Figure 3 - Virtual Data Center (vDC)
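   As a purely hypothetical sketch of the provisioning workflow implied
   by the requirements above, the example below creates the VNs of
   Figure 3, attaches tenant systems, and records the service
   functions.  NVO3 defines no such API; every class, method, and value
   here is an assumption made for illustration.

     class VirtualDataCenter:
         """Hypothetical vDC provisioning model (illustration only)."""
         def __init__(self, tenant):
             self.tenant = tenant
             self.vns = {}        # VN name -> attached tenant systems
             self.services = []   # network service functions in the vDC

         def create_vn(self, name):
             self.vns[name] = []

         def attach_ts(self, vn, ts):
             self.vns[vn].append(ts)

         def add_service(self, function):
             self.services.append(function)

     vdc = VirtualDataCenter("enterprise-1")
     for vn in ("L3VNa", "L2VNx", "L2VNy", "L2VNz"):
         vdc.create_vn(vn)
     vdc.attach_ts("L2VNx", "web-vm-1")
     vdc.add_service("firewall/gateway on L3VNa")
     vdc.add_service("load balancer on L2VNx")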
5. Summary

   This document describes some general potential use cases of NVO3 in
   DCs.  The combination of these cases gives operators the flexibility
   and capability to design more sophisticated cases for various cloud
   applications.

   DC services may vary, from Infrastructure as a Service (IaaS) and
   Platform as a Service (PaaS) to Software as a Service (SaaS); NVO3
   virtual networks are only a portion of such services.

   NVO3 uses tunneling techniques so that two NVEs appear as one hop to
   each other in a virtual network.  Many tunneling technologies can
   serve this function.  The tunnel may in turn be tunneled over other
   intermediate tunnels over the Internet or other WANs.

   A DC virtual network may be accessed by external users in a secure
   way.  Many existing technologies can help achieve this.

   NVO3 implementations may vary.  Some DC operators prefer to use a
   centralized controller to manage tenant system reachability in a
   virtual network; others prefer to use distributed protocols to
   advertise tenant system locations, i.e., NVE locations.  When a
   tenant network spans multiple DCs and WANs, each network
   administration domain may use different methods to distribute the
   tenant system locations.  In that case, both control-plane and data-
   plane interworking are necessary.

6. Security Considerations

   Security is a concern.  DC operators need to provide a tenant with a
   secure virtual network, which means that one tenant's traffic is
   isolated from other tenants' traffic as well as from non-tenant
   traffic.  They also need to prevent any tenant application from
   attacking the DC underlying network through the tenant virtual
   network, and to prevent one tenant application from attacking
   another tenant application via the DC infrastructure network.  For
   example, a tenant application could attempt to generate a large
   volume of traffic to overload the DC's underlying network.  An NVO3
   solution has to address these issues.

7. IANA Considerations

   This document does not request any action from IANA.

8. References

8.1. Normative References

   [RFC7364]  Narten, T., et al., "Problem Statement: Overlays for
              Network Virtualization", RFC 7364, October 2014.

   [RFC7365]  Lasserre, M., Morin, T., et al., "Framework for DC
              Network Virtualization", RFC 7365, October 2014.

8.2. Informative References

   [IEEE 802.1Q]  IEEE, "IEEE Standard for Local and metropolitan area
              networks -- Media Access Control (MAC) Bridges and
              Virtual Bridged Local Area Networks", IEEE Std 802.1Q,
              2011.

   [NVO3HYVR2NVE]  Li, Y., et al., "Hypervisor to NVE Control Plane
              Requirements", draft-ietf-nvo3-hpvr2nve-cp-req-01, work
              in progress.

   [NVGRE]    Sridharan, M., et al., "NVGRE: Network Virtualization
              using Generic Routing Encapsulation", draft-sridharan-
              virtualization-nvgre-07, work in progress.

   [NVO3ARCH] Black, D., et al., "An Architecture for Overlay Networks
              (NVO3)", draft-ietf-nvo3-arch-02, work in progress.

   [NVO3MCAST]  Ghanwani, A., "Framework of Supporting Applications
              Specific Multicast in NVO3", draft-ghanwani-nvo3-app-
              mcast-framework-02, work in progress.

   [RFC4301]  Kent, S. and K. Seo, "Security Architecture for the
              Internet Protocol", RFC 4301, December 2005.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", RFC 4364, February 2006.

   [RFC7348]  Mahalingam, M., Dutt, D., et al., "Virtual eXtensible
              Local Area Network (VXLAN): A Framework for Overlaying
              Virtualized Layer 2 Networks over Layer 3 Networks",
              RFC 7348, August 2014.

   [RFC7432]  Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A., and
              J. Uttaro, "BGP MPLS-Based Ethernet VPN", RFC 7432,
              February 2015.

   [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com.

Contributors

   Vinay Bannai
   PayPal
   2211 N. First St,
   San Jose, CA 95131
   Phone: +1-408-967-7784
   Email: vbannai@paypal.com

   Ram Krishnan
   Brocade Communications
   San Jose, CA 95134
   Phone: +1-408-406-7890
   Email: ramk@brocade.com

Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, Randy Bush, Uma
   Chunduri, and Eric Gray for their reviews, comments, and
   suggestions.

Authors' Addresses

   Lucy Yong
   Huawei Technologies

   Phone: +1-918-808-1918
   Email: lucy.yong@huawei.com

   Mehmet Toy
   Comcast
   1800 Bishops Gate Blvd.,
   Mount Laurel, NJ 08054

   Phone: +1-856-792-2801
   Email: mehmet_toy@cable.comcast.com

   Aldrin Isaac
   Bloomberg
   Email: aldrin.isaac@gmail.com

   Vishwas Manral
   Ionos Networks

   Email: vishwas@ionosnetworks.com

   Linda Dunbar
   Huawei Technologies,
   5340 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com