Network Working Group                                            L. Yong
Internet Draft                                                    Huawei
Category: Informational                                           M. Toy
                                                                 Comcast
                                                                A. Isaac
                                                               Bloomberg
                                                               V. Manral
                                                         Hewlett-Packard
                                                               L. Dunbar
                                                                  Huawei

Expires: July 2014                                       January 8, 2014

            Use Cases for DC Network Virtualization Overlays

                       draft-ietf-nvo3-use-case-03

Abstract

   This document describes DC Network Virtualization Overlays (NVO3)
   use cases that may be deployed in various data centers and apply to
   different applications.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on July 8, 2014.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Contributors
      1.2. Terminology
   2. Basic Virtual Networks in a Data Center
   3. Interconnecting DC Virtual Network and External Networks
      3.1. DC Virtual Network Access via Internet
      3.2. DC VN and Enterprise Sites Interconnected via SP WAN
   4. DC Applications Using NVO3
      4.1. Supporting Multiple Technologies and Applications in a DC
      4.2. Tenant Network with Multiple Subnets or across Multiple DCs
      4.3. Virtualized Data Center (vDC)
   5. OAM Considerations
   6. Summary
   7. Security Considerations
   8. IANA Considerations
   9. Acknowledgements
   10. References
      10.1. Normative References
      10.2. Informative References
   Authors' Addresses

1. Introduction

   Server virtualization has changed the IT industry in terms of
   efficiency, cost, and the speed of providing new applications and/or
   services. However, today's data center networks have limited support
   for cloud applications and multi-tenant networks [NVO3PRBM]. The
   goal of DC Network Virtualization Overlays, i.e., NVO3, is to
   decouple the communication among tenant systems from DC physical
   networks and to allow one physical network infrastructure to
   provide: 1) multi-tenant virtual networks and traffic isolation
   among the virtual networks over the same physical network; 2)
   independent address spaces in individual virtual networks, such as
   MAC, IP, and TCP/UDP; 3) flexible VM placement, including the
   ability to move a VM from one server to another without requiring
   any change to the VM's address or configuration, and the ability to
   perform a hot move in which there is no disruption to the live
   application running on the VM. These characteristics will help
   address the issues in today's cloud applications [NVO3PRBM].

   Although NVO3 enables a true network virtualization environment, an
   NVO3 solution has to address the communication between a virtual
   network and a physical network. This is because: 1) many DCs that
   need to provide network virtualization are currently running over
   physical networks, and the migration will happen in steps; 2) many
   DC applications are served to Internet users and run directly on
   physical networks; 3) some applications are CPU bound, such as Big
   Data analytics, and may not need the virtualization capability.

   This document describes general NVO3 use cases that apply to
   various data centers.
   Three types of use cases are described here:

   o  Basic virtual networks in a DC. A virtual network connects many
      tenant systems in a data center site (or more than one site) and
      forms one L2 or L3 communication domain. Multiple virtual
      networks run over the same DC physical network. This case may be
      used for DC-internal applications that constitute the DC East-
      West traffic.

   o  External access to a DC virtual network. A DC provider offers a
      secure DC service to an enterprise customer and/or Internet
      users. An enterprise customer may use a traditional VPN provided
      by a carrier, or an IPsec tunnel over the Internet, to connect to
      a virtual network within a provider DC site. This mainly
      constitutes DC North-South traffic.

   o  DC applications or services that may use NVO3. Three scenarios
      are described: 1) using NVO3 and other network technologies to
      build a tenant network; 2) constructing several virtual networks
      as a tenant network; 3) applying NVO3 to a virtualized DC (vDC).

   The document uses the architecture reference model defined in
   [NVO3FRWK] to describe the use cases.

1.1. Contributors

   Vinay Bannai
   PayPal
   2211 N. First St,
   San Jose, CA 95131
   Phone: +1-408-967-7784
   Email: vbannai@paypal.com

   Ram Krishnan
   Brocade Communications
   San Jose, CA 95134
   Phone: +1-408-406-7890
   Email: ramk@brocade.com

1.2. Terminology

   This document uses the terminology defined in [NVO3FRWK] and
   [RFC4364]. Some additional terms used in the document are listed
   here.

   CPE: Customer Premises Equipment

   DMZ: Demilitarized Zone. A computer or small subnetwork that sits
   between a trusted internal network, such as a corporate private LAN,
   and an untrusted external network, such as the public Internet.

   DNS: Domain Name Service

   NAT: Network Address Translation

   VIRB: Virtual Integrated Routing/Bridging

   Note that a virtual network in this document is an overlay virtual
   network instance.

2. Basic Virtual Networks in a Data Center

   A virtual network may exist within a DC. The network enables
   communication among Tenant Systems (TSs). A TS may be a physical
   server/device or a virtual machine (VM) on a server. A network
   virtualization edge (NVE) may be co-located with a TS, i.e., on the
   same end device, or may reside on a different device, e.g., a top-
   of-rack switch (ToR). A virtual network has a unique virtual network
   identifier (which may be locally or globally unique) that allows an
   NVE to differentiate it from other virtual networks.

   Tenant Systems attached to the same NVE may belong to the same or
   different virtual networks. Multiple virtual networks can be
   constructed in such a way that policies are enforced when the TSs in
   one virtual network communicate with the TSs in other virtual
   networks. An NVE provides tenant traffic forwarding/encapsulation
   and obtains tenant system reachability information from a Network
   Virtualization Authority (NVA) [NVO3ARCH]. Furthermore, in a DC,
   operators may construct many tenant networks that have no
   communication between them at all. In this case, each tenant network
   may use its own address spaces, such as MAC and IP. One tenant
   network may comprise one or more virtual networks.

   A Tenant System may also be configured with one or multiple
   addresses and participate in multiple virtual networks, i.e., use
   the same or different addresses in different virtual networks. For
   example, a TS may be a NAT gateway or a firewall and connect to more
   than one virtual network.
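   The sketch below (Python; the class and field names are
   illustrative, not from any NVO3 specification) shows one way an NVE
   could key its mapping state by virtual network identifier, so that
   different tenants may reuse the same addresses in different virtual
   networks without conflict.

      # Sketch: per-virtual-network mapping state on an NVE. The VN
      # identifier, not the tenant address alone, selects the entry,
      # so tenant address spaces can overlap. Hypothetical model only.
      from typing import Dict, Optional, Tuple

      class NveMappingTable:
          def __init__(self) -> None:
              # (vn_id, tenant_mac) -> IP address of the remote NVE
              self.table: Dict[Tuple[int, str], str] = {}

          def learn(self, vn_id: int, mac: str, remote_nve: str) -> None:
              # In practice this state would come from the NVA [NVO3ARCH].
              self.table[(vn_id, mac)] = remote_nve

          def lookup(self, vn_id: int, mac: str) -> Optional[str]:
              return self.table.get((vn_id, mac))

      nve = NveMappingTable()
      # Two tenants reuse the same MAC in different virtual networks:
      nve.learn(1001, "00:11:22:33:44:55", "10.0.0.1")
      nve.learn(2002, "00:11:22:33:44:55", "10.0.0.2")
      assert nve.lookup(1001, "00:11:22:33:44:55") == "10.0.0.1"
      assert nve.lookup(2002, "00:11:22:33:44:55") == "10.0.0.2"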
   Network Virtualization Overlay in this context means that a virtual
   network is implemented with an overlay technology, i.e., traffic
   from one NVE to another is sent via a tunnel between the pair of
   NVEs [NVO3FRWK]. This architecture decouples the tenant system
   address scheme and configuration from those of the infrastructure,
   which brings great flexibility for VM placement and mobility. It
   also keeps the transit nodes in the infrastructure unaware of the
   existence of the virtual networks. One tunnel may carry traffic
   belonging to different virtual networks; a virtual network
   identifier is used for traffic demultiplexing (see the sketch at the
   end of this section).

   A virtual network may be an L2 or L3 domain. The TSs attached to an
   NVE may belong to different virtual networks, which may be L2 or
   L3. A virtual network may carry unicast traffic and/or
   broadcast/multicast/unknown (BUM) traffic from/to tenant systems.
   There are several ways to transport BUM traffic [NVO3MCAST].

   It is worth mentioning two distinct cases here. The first is that
   the TSs and the NVE are co-located on the same end device, which
   means that the NVE can be made aware of the TS state at any time
   via an internal API. The second is that the TSs and the NVE are
   remotely connected, i.e., connected via a switched network or a
   point-to-point link. In this case, a protocol is necessary for the
   NVE to learn the TS state.

   One virtual network may connect many TSs that attach to many
   different NVEs. TS dynamic placement and mobility result in
   frequent changes of the TS-to-NVE bindings. The TS reachability
   update mechanism needs to be fast enough not to cause any service
   interruption. The capability of supporting many TSs in a virtual
   network, and many more virtual networks in a DC, is critical for an
   NVO3 solution.

   If a virtual network spans multiple DC sites, one design is to
   allow the network to seamlessly span the sites without termination
   at DC gateway routers. In this case, the tunnel between a pair of
   NVEs may in turn be tunneled over other intermediate tunnels over
   the Internet or other WANs, or the intra-DC and inter-DC tunnels
   may be stitched together to form an end-to-end virtual network
   across DCs.
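   As a concrete illustration of the encapsulation and identifier-based
   demultiplexing described in this section, the minimal Python sketch
   below packs and parses a VXLAN-style 8-byte header carrying a 24-bit
   VNI, following the header layout in [VXLAN]; the outer UDP/IP
   headers toward the remote NVE and all error handling are omitted.

      import struct
      from typing import Tuple

      VXLAN_FLAG_VALID_VNI = 0x08  # "I" flag: the VNI field is valid

      def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
          # 8-byte header: flags byte, 3 reserved bytes, 24-bit VNI,
          # 1 reserved byte.
          header = struct.pack("!II", VXLAN_FLAG_VALID_VNI << 24, vni << 8)
          return header + inner_frame

      def vxlan_decap(packet: bytes) -> Tuple[int, bytes]:
          # The VNI demultiplexes tunnel traffic to the right virtual
          # network instance at the receiving NVE.
          word0, word1 = struct.unpack("!II", packet[:8])
          assert (word0 >> 24) & VXLAN_FLAG_VALID_VNI, "VNI flag not set"
          return word1 >> 8, packet[8:]

      vni, frame = vxlan_decap(vxlan_encap(1001, b"inner ethernet frame"))
      assert vni == 1001 and frame == b"inner ethernet frame"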
3. Interconnecting DC Virtual Network and External Networks

   Customers (enterprises or individuals) who utilize a DC provider's
   compute and storage resources to run their applications need to
   access their systems hosted in the DC through the Internet or
   Service Providers' WANs. A DC provider may construct a virtual
   network that connects all the resources designated for a customer
   and allow the customer to access the resources via a virtual
   gateway (vGW). This, in turn, becomes a case of interconnecting a
   DC virtual network and the network at the customer site(s) via the
   Internet or WANs. Two cases are described here.

3.1. DC Virtual Network Access via Internet

   A customer can connect to a DC virtual network via the Internet in
   a secure way. Figure 1 illustrates this case. A virtual network is
   configured on NVE1 and NVE2, and the two NVEs are connected via an
   L3 tunnel in the data center. A set of tenant systems is attached
   to NVE1 on a server. NVE2 resides on a DC Gateway device; it
   terminates the tunnel and uses the VNID on the packet to pass the
   packet to the corresponding vGW entity on the DC GW. A customer can
   access their systems, i.e., TS1 or TSn, in the DC via the Internet
   by using an IPsec tunnel [RFC4301]. The IPsec tunnel is configured
   between the vGW and the customer gateway at the customer site.
   Either static routes or BGP may be used to exchange routes between
   the peers. The vGW provides IPsec functionality such as
   authentication and encryption. Note that: 1) some vGW functions
   such as firewall and load balancer may also be performed by locally
   attached network appliance devices; 2) the virtual network in the
   DC may use a different address space than the external users, in
   which case the vGW needs to provide the NAT function; 3) more than
   one IPsec tunnel can be configured for redundancy; 4) the vGW may
   be implemented on a server or a VM, in which case IP tunnels or
   IPsec tunnels may be used over the DC infrastructure.

      Server +---------------+
             |   TS1     TSn |
             |    |...|      |
             |  +-+---+-+    |        Customer Site
             |  |  NVE1 |    |          +-----+
             |  +---+---+    |          | CGW |
             +------+--------+          +--+--+
                    |                      *
                L3 Tunnel                  *
                    |                      *
      DC GW  +------+---------+        .--.  .--.
             |  +---+---+     |       (    '*    '.--.
             |  |  NVE2 |     |    .-.'        *      )
             |  +---+---+     |   (        * Internet  )
             |  +---+---+     |    (      *           /
             |  |  vGW  | * * * * * * * *  '-'    '-'
             |  +-------+     |     IPsec   \../ \.--/'
             |  +--------+    |     Tunnel
             +----------------+

                DC Provider Site

          Figure 1 DC Virtual Network Access via Internet

3.2. DC VN and Enterprise Sites Interconnected via SP WAN

   An enterprise company may lease the VM and storage resources hosted
   in a 3rd party DC to run its applications. For example, the company
   may run its web applications at the 3rd party site but run backend
   applications in its own DCs. The web applications and backend
   applications need to communicate privately. The 3rd party DC may
   construct one or more virtual networks to connect all the VMs and
   storage running the enterprise web applications. The company may
   buy a p2p private tunnel such as a VPWS from an SP to interconnect
   its site and the virtual network at the 3rd party site. A protocol
   is necessary for exchanging reachability between the two peering
   points, and the traffic is carried over the tunnel. If an
   enterprise has multiple sites, it may buy multiple p2p tunnels to
   form a mesh interconnection among the sites and the 3rd party site.
   This requires each site to peer with all the other sites for route
   distribution.

   Another way to achieve multi-site interconnection is to use Service
   Provider (SP) VPN services, in which each site only peers with an
   SP PE. A DC provider and a VPN SP may build a DC virtual network
   (VN) and a VPN independently. The VPN interconnects several
   enterprise sites and the DC virtual network at the DC site, i.e.,
   the VPN site. The DC VN and the SP VPN interconnect via a local
   link or a tunnel. The control plane interconnection options are
   described in RFC 4364 [RFC4364]. In Option A with VRF-LITE
   [VRF-LITE], both the DC GW and the SP PE maintain a
   routing/forwarding table and perform a table lookup in forwarding.
   In Option B, the DC GW and SP PE do not maintain forwarding tables;
   they only maintain the VN and VPN identifier mapping and swap the
   identifier on the packet in the forwarding process. Both Options A
   and B require tunnel termination. In Option C, the DC GW and SP PE
   use the same identifier for the VN and VPN and just perform tunnel
   stitching, i.e., change the tunnel endpoints. Each option has pros
   and cons (see RFC 4364) and has been deployed in SP networks
   depending on the applications. BGP may be used in these options for
   route distribution. Note that if the provider DC is the SP's data
   center, the DC GW and PE may be on one device.
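   A toy Python sketch of the Option B behavior described above
   (identifier values are illustrative only): the border node holds
   just a VN/VPN identifier mapping and swaps the identifier on each
   packet, rather than performing per-route table lookups as in Option
   A; Option C would instead keep the identifier and change only the
   tunnel endpoints.

      from typing import Tuple

      # Option B sketch: the DC GW / SP PE keeps only an identifier
      # mapping (no per-route forwarding table) and swaps identifiers
      # while forwarding. Values are illustrative, not from any
      # deployment.
      vn_to_vpn = {1001: 300501, 1002: 300502}  # DC VN ID -> SP VPN label
      vpn_to_vn = {v: k for k, v in vn_to_vpn.items()}

      def dc_to_wan(vn_id: int, payload: bytes) -> Tuple[int, bytes]:
          # Terminate the DC tunnel, swap the VN ID for the VPN label.
          return vn_to_vpn[vn_id], payload

      def wan_to_dc(vpn_label: int, payload: bytes) -> Tuple[int, bytes]:
          return vpn_to_vn[vpn_label], payload

      assert dc_to_wan(1001, b"pkt") == (300501, b"pkt")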
   This configuration allows the enterprise networks to communicate
   with the tenant systems attached to the VN in the provider DC
   without interfering with the DC provider's underlying physical
   networks and the other virtual networks in the DC. The enterprise
   may use its own address space on the tenant systems in the VN. The
   DC provider manages which VMs and storage elements attach to the
   VN. The enterprise customer manages which applications run on the
   VMs in the VN. See Section 4 for more detail.

   The interesting feature in this use case is that the VN and the
   compute resources are managed by the DC provider. The DC operator
   can place them on any server without notifying the enterprise and
   the WAN SP, because the DC physical network is completely isolated
   from the carrier and enterprise networks. Furthermore, the DC
   operator may move the VMs assigned to the enterprise from one
   server to another in the DC without the enterprise customer's
   awareness, i.e., with no impact on the enterprise's 'live'
   applications running on these resources. Such advanced features
   bring DC providers great benefits in serving cloud applications but
   also add some requirements for NVO3 [NVO3PRBM].

4. DC Applications Using NVO3

   NVO3 gives DC operators flexibility in designing and deploying
   different applications in an end-to-end virtualization overlay
   environment, where the operators no longer need to worry about the
   constraints of the DC physical network configuration when creating
   VMs and configuring a virtual network. A DC provider may use NVO3
   in various ways, and may also use it in conjunction with physical
   networks in a DC, for many reasons. This section highlights some
   use cases.

4.1. Supporting Multiple Technologies and Applications in a DC

   Servers deployed in a large data center are most likely rolled in
   at different times and may have different capabilities/features.
   Some servers may be virtualized, some may not; some may be equipped
   with virtual switches, some may not. For the ones equipped with
   hypervisor-based virtual switches, some may support VXLAN [VXLAN]
   encapsulation, some may support NVGRE encapsulation [NVGRE], and
   some may not support any type of encapsulation. To construct a
   tenant network among these servers and the ToR switches, an
   operator may construct one virtual network and one traditional VLAN
   network, or two virtual networks, one using VXLAN encapsulation and
   the other using NVGRE.

   In these cases, a gateway device or a virtual GW is used to
   participate in multiple virtual networks. It performs the packet
   encapsulation/decapsulation and may also perform address mapping or
   translation, etc. (a sketch of this role appears at the end of this
   section).

   A data center may also be constructed with multi-tier zones, where
   each zone has different access permissions and runs different
   applications. For example, a three-tier zone design has a front
   zone (Web tier) with Web applications, a mid zone (application
   tier) with service applications such as payment and booking, and a
   back zone (database tier) with data. External users are only able
   to communicate with the Web applications in the front zone. In this
   case, the communication between the zones must pass through a
   security GW/firewall. One virtual network may be configured in each
   zone, and a GW is used to interconnect two virtual networks. If
   individual zones use different implementations, the GW needs to
   support these implementations as well.
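   The Python sketch below illustrates this gateway role; the
   encodings are deliberately toy stand-ins for real VXLAN/NVGRE
   header processing, and all names and mappings are hypothetical.

      from typing import Callable, Dict, Tuple

      class MultiEncapGateway:
          """Participates in virtual networks using different
          encapsulations: decapsulate with one, re-encapsulate with
          the other."""
          def __init__(self) -> None:
              # (in_encap, in_vn_id) -> (out_encap, out_vn_id)
              self.vn_map: Dict[Tuple[str, int], Tuple[str, int]] = {}
              self.encap: Dict[str, Callable[[int, bytes], bytes]] = {}
              self.decap: Dict[str, Callable[[bytes],
                                             Tuple[int, bytes]]] = {}

          def forward(self, in_encap: str, packet: bytes) -> bytes:
              vn_id, inner = self.decap[in_encap](packet)  # terminate
              out_encap, out_vn = self.vn_map[(in_encap, vn_id)]
              # Address mapping/translation could also happen here.
              return self.encap[out_encap](out_vn, inner)  # re-encap

      gw = MultiEncapGateway()
      # Toy encodings: a text tag stands in for the real headers.
      for name, tag in (("vxlan", b"VX"), ("nvgre", b"GR")):
          gw.encap[name] = lambda vn, f, t=tag: t + b"%08d" % vn + f
          gw.decap[name] = lambda p: (int(p[2:10].decode()), p[10:])
      gw.vn_map[("vxlan", 5001)] = ("nvgre", 6001)  # illustrative map
      pkt = gw.encap["vxlan"](5001, b"frame")
      assert gw.forward("vxlan", pkt) == b"GR00006001frame"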
4.2. Tenant Network with Multiple Subnets or across Multiple DCs

   A tenant network may contain multiple subnets. The DC physical
   network needs to support the connectivity for many tenant networks.
   The inter-subnet policies may be placed only at designated gateway
   devices. Such a design requires the inter-subnet traffic to be sent
   to one of the gateways first for policy checking, which may cause
   traffic hairpinning at the gateway in a DC. It is desirable that an
   NVE can hold some policies and be able to forward inter-subnet
   traffic directly. To reduce the burden on the NVE, a hybrid design
   may be deployed, i.e., an NVE performs forwarding for selected
   inter-subnet traffic and a designated GW handles the rest. For
   example, each NVE performs inter-subnet forwarding within a tenant,
   and the designated GW is used for inter-subnet traffic from/to
   different tenant networks.

   A tenant network may span multiple, geographically distant data
   centers. DC operators may configure an L2 VN within each DC and an
   L3 VN between DCs for such a tenant network. For this
   configuration, the virtual L2/L3 gateway can be implemented on the
   DC GW device. Figure 2 illustrates this configuration.

   Figure 2 depicts two DC sites. Site A constructs one L2 VN, say
   L2VNa, on NVE1, NVE2, and NVE5. NVE1 and NVE2 reside on the servers
   that host multiple tenant systems; NVE5 resides on the DC GW
   device. Site Z has a similar configuration, with L2VNz on NVE3,
   NVE4, and NVE6. One L3 VN, say L3VNx, is configured on NVE5 at site
   A and NVE6 at site Z. An internal Virtual Integrated
   Routing/Bridging (VIRB) interface is used between the L2VNI and the
   L3VNI on NVE5 and NVE6, respectively. The L2VNI uses a MAC/NVE
   mapping table and the L3VNI uses an IP prefix/NVE mapping table. A
   packet arriving at NVE5 from L2VNa will be decapsulated, converted
   into an IP packet, and then encapsulated and sent to site Z. The
   policies can be checked at the VIRB (see the sketch after Figure
   2).

   Note that L2VNa, L2VNz, and L3VNx in Figure 2 are all overlay
   virtual networks.

   NVE5/DCGW +------------+                +-----------+ NVE6/DCGW
             | +-----+    | '''''''''''''' |    +-----+ |
             | |L3VNI+----+'     L3VNx    '+----+L3VNI| |
             | +--+--+    | '''''''''''''' |    +--+--+ |
             |    |VIRB   |                |   VIRB|    |
             | +--+---+   |                |  +---+--+  |
             | |L2VNIs|   |                |  |L2VNIs|  |
             | +--+---+   |                |  +---+--+  |
             +----+-------+                +------+-----+
          ''''|''''''''''               ''''''|'''''''
         '    L2VNa      '             '    L2VNz      '
   NVE1/S ''/'''''''''\'' NVE2/S  NVE3/S '''/'''''''\''' NVE4/S
   +-----+---+    +----+----+     +------+--+    +----+----+
   | +--+--+ |    | +--+--+ |     | +---+-+ |    | +--+--+ |
   | |L2VNI| |    | |L2VNI| |     | |L2VNI| |    | |L2VNI| |
   | ++---++ |    | ++---++ |     | ++---++ |    | ++---++ |
   +--+---+--+    +--+---+--+     +--+---+--+    +--+---+--+
      |...|          |...|           |...|          |...|

      Tenant Systems                 Tenant Systems

         DC Site A                      DC Site Z

      Figure 2 Tenant Virtual Network with Bridging/Routing
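   The following Python sketch shows the forwarding decision at such a
   VIRB (all MAC addresses, prefixes, and NVE names are illustrative):
   frames addressed to the VIRB's own router MAC are routed via the
   L3VNI's IP prefix/NVE table, while all other frames are bridged via
   the L2VNI's MAC/NVE table.

      import ipaddress

      VIRB_MAC = "00:00:5e:00:01:01"          # illustrative router MAC

      # L2VNI: tenant MAC -> remote NVE (bridging within the L2 VN)
      l2vni = {"00:aa:bb:cc:dd:01": "NVE1", "00:aa:bb:cc:dd:02": "NVE2"}
      # L3VNI: IP prefix -> remote NVE (routing between sites)
      l3vni = {ipaddress.ip_network("192.0.2.0/24"): "NVE6"}

      def forward(dst_mac: str, dst_ip: str) -> str:
          if dst_mac == VIRB_MAC:
              # Inter-subnet: longest-prefix match in the L3VNI table;
              # inter-subnet policies can be checked here.
              addr = ipaddress.ip_address(dst_ip)
              matches = [p for p in l3vni if addr in p]
              best = max(matches, key=lambda p: p.prefixlen)
              return l3vni[best]
          # Intra-subnet: bridge within the L2 VN.
          return l2vni[dst_mac]

      assert forward(VIRB_MAC, "192.0.2.10") == "NVE6"
      assert forward("00:aa:bb:cc:dd:01", "198.51.100.5") == "NVE1"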
4.3. Virtualized Data Center (vDC)

   Enterprise DCs today may deploy routers, switches, and network
   appliance devices to construct their internal networks, DMZs, and
   external network access, and may have many servers and storage
   systems running various applications. A DC provider may construct a
   virtualized DC over its DC infrastructure and offer a virtual DC
   service to enterprise customers. A vDC provides the same capability
   as a physical DC. A customer manages which applications run in the
   vDC and how. Instead of using many hardware devices, DC operators
   may use network virtualization overlay technology to build such
   vDCs on top of a common DC infrastructure for many customers and
   offer network service functions to each vDC. The network service
   functions may include firewall, DNS, load balancer, gateway, etc.
   The network virtualization overlay further enables potential vDC
   mobility when a customer moves to a different location, because the
   vDC configuration is decoupled from the infrastructure network.

   Figure 3 below illustrates one scenario. For simplicity, it only
   shows the L3 VN and the L2 VNs as virtual routers or switches. In
   this case, the DC operator creates several L2 VNs (L2VNx, L2VNy,
   L2VNz in Figure 3) to group the tenant systems together on a
   per-application basis, and creates one L3 VN, e.g., L3VNa, for the
   internal routing. A network device (which may be a VM or a server)
   runs firewall/gateway applications and connects to L3VNa and the
   Internet. A load balancer (LB) is used in L2VNx. A VPWS p2p tunnel
   is also built between the gateway and the enterprise router. The
   enterprise customer runs Web/Mail/Voice applications at the
   provider DC site; users at the enterprise site access the
   applications via the VPN tunnel, or via the Internet through a
   gateway at the enterprise site; Internet users access the
   applications via the gateway in the provider DC.

   The customer decides which applications are accessed by the
   intranet only and which by both the intranet and the extranet, and
   configures the proper security policies and gateway functions.
   Furthermore, a customer may want multiple zones in a vDC for
   security and/or may set different QoS levels for the different
   applications.

   This use case requires the NVO3 solution to provide the DC operator
   with an easy way to create a VN and NVEs for any design, to quickly
   assign TSs to the VNIs on the NVEs they attach to, to easily set up
   the virtual topology and place or configure policies on the NVEs or
   the VMs that run network services, and to support VM mobility.
   Furthermore, a DC operator and/or a customer should be able to view
   the vDC topology and access the individual virtual components in
   the vDC. Either the DC provider or the tenant can provision virtual
   components in the vDC. It is desirable to automate the provisioning
   process and to have programmability (see the sketch after Figure
   3).

          Internet                        ^ Internet
             |                            |
             ^                         +--+---+
             |                         |  GW  |
             |                         +--+---+
             |                            |
     +-------+--------+               +--+---+
     |Firewall/Gateway+------VPN------+router|
     +-------+--------+               +-+--+-+
             |                          |  |
          ...+....                      |..|
     +----: L3VNa :---------+           LANs
     |     ........         |
   +-+-+      |             |        Enterprise Site
   |LB |      |             |
   +-+-+      |             |
    ...+...  ...+...     ...+...
   : L2VNx : : L2VNy :  : L2VNz :
    .......   .......    .......
     |..|      |..|       |..|
     |  |      |  |       |  |
   Web Apps  Mail Apps  VoIP Apps

          Provider DC Site

   The firewall/gateway and Load Balancer (LB) may run on a server or
   on VMs.

          Figure 3 Virtual Data Center by Using NVO3
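   As an illustration of such programmability, the Python sketch below
   issues hypothetical provisioning calls. NVO3 does not define a
   provisioning API, so the endpoint paths, payload fields, and the
   use of the 'requests' library here are assumptions for illustration
   only.

      import requests

      # Hypothetical NVA provisioning endpoint; not defined by NVO3.
      API = "https://nva.example.net/v1"

      def create_vn(name: str, vn_type: str) -> int:
          r = requests.post(f"{API}/virtual-networks",
                            json={"name": name, "type": vn_type})
          r.raise_for_status()
          return r.json()["vn_id"]

      web_vn = create_vn("L2VNx", "l2")       # per-application L2 VNs
      mail_vn = create_vn("L2VNy", "l2")
      routing_vn = create_vn("L3VNa", "l3")   # internal routing VN

      # Attach a tenant system (VM) to a VN on the NVE it connects to:
      requests.post(f"{API}/virtual-networks/{web_vn}/attachments",
                    json={"nve": "NVE1",
                          "ts": "web-vm-1"}).raise_for_status()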
5. OAM Considerations

   NVO3 brings the ability for a DC provider to segregate tenant
   traffic. A DC provider needs to manage and maintain NVO3 instances.
   Similarly, a tenant needs to be informed about underlying network
   failures impacting tenant applications, or the tenant network must
   be able to detect both overlay and underlay network failures and
   build in some resiliency mechanisms.

   Various OAM and SOAM tools and procedures are defined in
   [IEEE 802.1ag], [ITU-T G.8013/Y.1731], [RFC4378], [RFC5880], and
   [ITU-T Y.1564] for L2 and L3 networks, including continuity check,
   loopback, link trace, testing, alarms such as AIS/RDI, and
   on-demand and periodic measurements. These procedures may apply to
   tenant overlay networks and tenants, not only for proactive
   maintenance but also to ensure support of Service Level Agreements
   (SLAs).

   As a tunnel traverses different networks, OAM messages need to be
   translated at the edge of each network to ensure end-to-end OAM.

6. Summary

   This document describes some general potential use cases of NVO3 in
   DCs. The combination of these cases should give operators the
   flexibility and capability to design more sophisticated cases for
   various purposes.

   DC services may vary, from infrastructure as a service (IaaS) and
   platform as a service (PaaS) to software as a service (SaaS), in
   which the network virtualization overlay is just a portion of an
   application service. NVO3 decouples the service
   construction/configuration from the DC network infrastructure
   configuration and helps the deployment of higher-level services
   over the application.

   NVO3's underlying network provides the tunneling between NVEs so
   that two NVEs appear as one hop to each other. Many tunneling
   technologies can serve this function. The tunnel may in turn be
   tunneled over other intermediate tunnels over the Internet or other
   WANs. It is also possible that intra-DC and inter-DC tunnels are
   stitched together to form an end-to-end tunnel between two NVEs.

   A DC virtual network may be accessed by external users in a secure
   way. Many existing technologies can help achieve this.

   NVO3 implementations may vary. Some DC operators prefer to use a
   centralized controller to manage tenant system reachability in a
   tenant network, while others prefer to use distributed protocols to
   advertise the tenant system locations, i.e., the associated NVEs.
   For migration and special requirements, different solutions may
   apply to one tenant network in a DC. When a tenant network spans
   multiple DCs and WANs, each network administrative domain may use
   different methods to distribute the tenant system locations. In
   this case, both control plane and data plane interworking are
   necessary.

7. Security Considerations

   Security is a concern. DC operators need to provide a tenant with a
   secure virtual network, which means that one tenant's traffic is
   isolated from other tenants' traffic as well as from non-tenant
   traffic. They also need to prevent any tenant application from
   attacking the DC underlying network through the tenant virtual
   network, and to prevent one tenant application from attacking
   another tenant application via the DC networks. For example, a
   tenant application may attempt to generate a large volume of
   traffic to overload the DC underlying network. An NVO3 solution has
   to address these issues; one possible ingress safeguard is sketched
   below.
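   The sketch below, in Python, assumes that an NVE can police traffic
   per virtual network (the rates and VN identifiers are illustrative,
   and no such mechanism is mandated by NVO3): a per-VN token bucket
   at the NVE ingress bounds how much traffic any one tenant can
   inject toward the underlay.

      import time

      class TokenBucket:
          def __init__(self, rate_bps: float, burst_bytes: float) -> None:
              self.rate = rate_bps / 8.0       # refill in bytes/second
              self.burst = burst_bytes
              self.tokens = burst_bytes
              self.last = time.monotonic()

          def allow(self, pkt_len: int) -> bool:
              now = time.monotonic()
              self.tokens = min(self.burst,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= pkt_len:
                  self.tokens -= pkt_len
                  return True
              return False                     # drop or mark the packet

      # One policer per virtual network (illustrative rates):
      policers = {1001: TokenBucket(1e9, 64000),
                  2002: TokenBucket(5e8, 64000)}

      def admit(vn_id: int, pkt_len: int) -> bool:
          return policers[vn_id].allow(pkt_len)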
8. IANA Considerations

   This document does not request any action from IANA.

9. Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, Randy Bush, Uma
   Chunduri, and Eric Gray for their reviews, comments, and
   suggestions.

10. References

10.1. Normative References

   [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", RFC 4364, February 2006.

   [IEEE 802.1ag] "Virtual Bridged Local Area Networks - Amendment 5:
             Connectivity Fault Management", December 2007.

   [ITU-T G.8013/Y.1731] "OAM Functions and Mechanisms for Ethernet
             based Networks", 2011.

   [ITU-T Y.1564] "Ethernet service activation test methodology",
             2011.

   [RFC4378] Allan, D. and Nadeau, T., "A Framework for Multi-Protocol
             Label Switching (MPLS) Operations and Management (OAM)",
             RFC 4378, February 2006.

   [RFC4301] Kent, S. and Seo, K., "Security Architecture for the
             Internet Protocol", RFC 4301, December 2005.

   [RFC5880] Katz, D. and Ward, D., "Bidirectional Forwarding
             Detection (BFD)", RFC 5880, June 2010.

10.2. Informative References

   [NVGRE]   Sridharan, M., "NVGRE: Network Virtualization using
             Generic Routing Encapsulation", draft-sridharan-
             virtualization-nvgre-03, work in progress.

   [NVO3ARCH] Black, D., et al., "An Architecture for Overlay Networks
             (NVO3)", draft-ietf-nvo3-arch-00, work in progress.

   [NVO3PRBM] Narten, T., et al., "Problem Statement: Overlays for
             Network Virtualization", draft-ietf-nvo3-overlay-problem-
             statement-04, work in progress.

   [NVO3FRWK] Lasserre, M., Morin, T., et al., "Framework for DC
             Network Virtualization", draft-ietf-nvo3-framework-04,
             work in progress.

   [NVO3MCAST] Ghanwani, A., "Multicast Issues in Networks Using
             NVO3", draft-ghanwani-nvo3-mcast-issues-00, work in
             progress.

   [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com

   [VXLAN]   Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework for
             Overlaying Virtualized Layer 2 Networks over Layer 3
             Networks", draft-mahalingam-dutt-dcops-vxlan-06, work in
             progress.

Authors' Addresses

   Lucy Yong
   Huawei

   Phone: +1-918-808-1918
   Email: lucy.yong@huawei.com

   Mehmet Toy
   Comcast
   1800 Bishops Gate Blvd.,
   Mount Laurel, NJ 08054

   Phone: +1-856-792-2801
   E-mail: mehmet_toy@cable.comcast.com

   Aldrin Isaac
   Bloomberg
   E-mail: aldrin.isaac@gmail.com

   Vishwas Manral
   Hewlett-Packard Corp.
   3000 Hanover Street, Building 20C
   Palo Alto, CA 95014

   Phone: +1-650-857-5501
   Email: vishwas.manral@hp.com

   Linda Dunbar
   Huawei Technologies,
   5340 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com