Network Working Group                                          L. Yong
Internet Draft                                                   Huawei
Category: Informational                                          M. Toy
                                                                Comcast
                                                               A. Isaac
                                                              Bloomberg
                                                              V. Manral
                                                        Hewlett-Packard
                                                              L. Dunbar
                                                                 Huawei

Expires: January 2014                                     July 11, 2013


          Use Cases for DC Network Virtualization Overlays

                      draft-ietf-nvo3-use-case-02


Abstract

This document describes DC Network Virtualization Overlays (NVO3) use cases that may be deployed in various data centers and that apply to different applications.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire in January 2014.

Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

1. Introduction
   1.1. Contributors
   1.2. Terminology
2. Basic Virtual Networks in a Data Center
3. Interconnecting DC Virtual Network and External Networks
   3.1. DC Virtual Network Access via Internet
   3.2. DC VN and Enterprise Sites Interconnected via SP WAN
4. DC Applications Using NVO3
   4.1. Supporting Multiple Technologies and Applications in a DC
   4.2. Tenant Network with Multiple Subnets or across Multiple DCs
   4.3. Virtual Data Center (vDC)
5. OAM Considerations
6. Summary
7. Security Considerations
8. IANA Considerations
9. Acknowledgements
10. References
   10.1. Normative References
   10.2. Informative References
Authors' Addresses

1. Introduction

Server virtualization has changed the IT industry in terms of efficiency, cost, and the speed of providing new applications and/or services. However, the problems in today's data center networks hinder the support of cloud applications and multi-tenant networks [NVO3PRBM]. The goal of DC Network Virtualization Overlays, i.e. NVO3, is to decouple the communication among tenant systems from the DC physical networks and to allow one physical network infrastructure to provide: 1) traffic isolation among tenant virtual networks over the same physical network; 2) an independent address space in each virtual network, isolated from the infrastructure's addresses; and 3) flexible VM placement and movement from one server to another without any VM address or configuration change. These characteristics will help address the issues in today's cloud applications [NVO3PRBM].

Although NVO3 enables a true network virtualization environment, an NVO3 solution has to address the communication between a virtual network and a physical network.
This is because: 1) many DCs that need to provide network virtualization are currently running over physical networks, and the migration will happen in steps; 2) many DC applications are served to Internet users and run directly on physical networks; 3) some applications are CPU bound, such as Big Data analytics, and may not need the virtualization capability.

This document describes general NVO3 use cases that apply to various data centers. Three types of use cases are described here:

o Basic virtual networks in a DC. A virtual network connects many tenant systems in one Data Center site (or more) and forms one L2 or L3 communication domain. Many virtual networks run over the same DC physical network. This case may be used for DC internal applications that constitute the DC East-West traffic.

o DC virtual network access from external networks. A DC provider offers a secure DC service to an enterprise customer and/or Internet users. An enterprise customer may use a traditional VPN provided by a carrier or an IPsec tunnel over the Internet to connect to a virtual network within a provider DC site. This mainly constitutes DC North-South traffic.

o DC applications or services that may use NVO3. Three scenarios are described: 1) using NVO3 and other network technologies to build a tenant network; 2) constructing several virtual networks as a tenant network; 3) applying NVO3 to a virtual DC (vDC) service.

The document uses the architecture reference model defined in [NVO3FRWK] to describe the use cases.

1.1. Contributors

Vinay Bannai
PayPal
2211 N. First St,
San Jose, CA 95131
Phone: +1-408-967-7784
Email: vbannai@paypal.com

Ram Krishnan
Brocade Communications
San Jose, CA 95134
Phone: +1-408-406-7890
Email: ramk@brocade.com

1.2. Terminology

This document uses the terminology defined in [NVO3FRWK] and [RFC4364]. Some additional terms used in the document are listed here.

CPE: Customer Premises Equipment

DMZ: Demilitarized Zone. A computer or small subnetwork that sits between a trusted internal network, such as a corporate private LAN, and an un-trusted external network, such as the public Internet.

DNS: Domain Name Service

NAT: Network Address Translation

VIRB: Virtual Integrated Routing/Bridging

Note that a virtual network in this document is an overlay virtual network instance.

2. Basic Virtual Networks in a Data Center

A virtual network may exist within a DC. The network enables communication among Tenant Systems (TSs) that are in a Closed User Group (CUG). A TS may be a physical server/device or a virtual machine (VM) on a server. The Network Virtualization Edge (NVE) may co-exist with Tenant Systems, i.e. on the same end device, or may reside on a different device, e.g. a top of rack switch (ToR). A virtual network has a unique virtual network identifier (which may be locally or globally unique) so that an NVE can properly differentiate it from other virtual networks.

The TSs attached to the same NVE may belong to the same or different virtual networks. Multiple CUGs can be constructed in such a way that the policies are enforced when the TSs in one CUG communicate with the TSs in other CUGs. An NVE provides the reachability for Tenant Systems in a CUG, and may also hold the policies and provide the reachability for Tenant Systems in different CUGs (see Section 4.2). A simplified sketch of this per-virtual-network behavior follows.
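The sketch below is illustrative only: it assumes a simplified NVE that keeps a per-virtual-network reachability table and a policy check between CUGs. The class and table names are hypothetical and do not represent any particular NVO3 solution or interface.

   # Illustrative sketch only: a simplified NVE with per-VN
   # reachability and an inter-CUG policy check.
   class SimpleNVE:
       def __init__(self):
           # vn_id -> {tenant system address: remote NVE address}
           self.reachability = {}
           # (source vn_id, destination vn_id) -> permitted or not
           self.policy = {}

       def learn(self, vn_id, ts_addr, nve_addr):
           """Record that a tenant system is reachable behind an NVE."""
           self.reachability.setdefault(vn_id, {})[ts_addr] = nve_addr

       def allow(self, src_vn, dst_vn):
           """Within a CUG traffic is allowed; across CUGs a policy
           must explicitly permit it."""
           if src_vn == dst_vn:
               return True
           return self.policy.get((src_vn, dst_vn), False)

       def lookup(self, src_vn, dst_vn, dst_ts):
           """Return the remote NVE to tunnel toward, or None."""
           if not self.allow(src_vn, dst_vn):
               return None
           return self.reachability.get(dst_vn, {}).get(dst_ts)

   nve = SimpleNVE()
   nve.learn("VN-BLUE", "10.1.1.10", "nve2.example")   # TS behind NVE2
   nve.policy[("VN-GREEN", "VN-BLUE")] = True          # inter-CUG policy
   print(nve.lookup("VN-GREEN", "VN-BLUE", "10.1.1.10"))  # 'nve2.example'
   print(nve.lookup("VN-RED", "VN-BLUE", "10.1.1.10"))    # None (no policy)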
Furthermore, in a DC, operators may construct many tenant networks that have no communication between them at all. In this case, each tenant network may use its own address space. One tenant network may consist of one or more virtual networks.

A Tenant System may also be configured with multiple addresses and participate in multiple virtual networks, i.e. use a different address in each virtual network. For example, a TS may be a NAT GW or a firewall for multiple CUGs.

Network Virtualization Overlay in this context means that a virtual network is implemented as an overlay, i.e. traffic from one NVE to another is sent via a tunnel [NVO3FRWK]. This architecture decouples the tenant system address scheme and configuration from the infrastructure's, which brings great flexibility for VM placement and mobility. It also means that the transit nodes in the infrastructure are not aware of the existence of the virtual networks. One tunnel may carry traffic belonging to different virtual networks; a virtual network identifier is used for traffic demultiplexing (a simplified sketch of this is given at the end of this section).

A virtual network may be an L2 or L3 domain. The TSs attached to an NVE may belong to different virtual networks that may be L2 or L3. A virtual network may carry unicast traffic and/or broadcast/multicast/unknown (BUM) traffic from/to tenant systems. There are several ways to transport BUM traffic [NVO3MCAST].

It is worth mentioning two distinct cases here. The first is that TSs and the NVE are co-located on the same end device, which means that the NVE can be made aware of the TS state at any time via an internal API. The second is that TSs and the NVE are remotely connected, i.e. connected via a switched network or a point-to-point link. In this case, a protocol is necessary for the NVE to learn the TS state.

One virtual network may connect many TSs that attach to many different NVEs. TS dynamic placement and mobility result in frequent changes of the TS and NVE bindings. The TS reachability update mechanism needs to be fast enough not to cause any service interruption. The capability of supporting many TSs in a virtual network, and many more virtual networks in a DC, is critical for an NVO3 solution.

If a virtual network spans multiple DC sites, one design is to allow the network to seamlessly span the sites without terminating at the DC gateway routers. In this case, the tunnel between a pair of NVEs may in turn be tunneled over other intermediate tunnels over the Internet or other WANs, or the intra-DC and inter-DC tunnels may be stitched together to form an end-to-end virtual network across DCs.
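As a rough illustration of the tunneling and demultiplexing described in this section, the sketch below tags each tenant frame with a virtual network identifier so that one tunnel between two NVEs can carry the traffic of several virtual networks. The 24-bit identifier size and the 4-byte header layout are assumptions made for illustration only; they do not correspond to any specific NVO3 encapsulation.

   # Illustrative sketch only: several virtual networks sharing one
   # tunnel between two NVEs, demultiplexed by a VN identifier.
   # The 24-bit VN ID and header layout are assumed, not a real format.
   import struct

   def encapsulate(vn_id, tenant_frame):
       """Prepend a 4-byte overlay header carrying a 24-bit VN ID."""
       return struct.pack("!I", vn_id & 0xFFFFFF) + tenant_frame

   def decapsulate(packet):
       """Return (vn_id, tenant_frame); the receiving NVE uses vn_id
       to hand the frame to the right virtual network instance."""
       (vn_id,) = struct.unpack("!I", packet[:4])
       return vn_id, packet[4:]

   # Two virtual networks sharing the same NVE1-to-NVE2 tunnel.
   for pkt in (encapsulate(0x0100, b"frame-from-VN-A"),
               encapsulate(0x0200, b"frame-from-VN-B")):
       vn_id, frame = decapsulate(pkt)
       print(hex(vn_id), frame)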
3. Interconnecting DC Virtual Network and External Networks

Customers (enterprises or individuals) who utilize the DC provider's compute and storage resources to run their applications need to access their systems hosted in a DC through the Internet or Service Providers' WANs. A DC provider may construct a virtual network that connects all the resources designated for a customer and allow the customer to access those resources via a virtual gateway (vGW). This, in turn, becomes the case of interconnecting a DC virtual network and the network at the customer site(s) via the Internet or WANs. Two cases are described here.

3.1. DC Virtual Network Access via Internet

A customer can connect to a DC virtual network via the Internet in a secure way. Figure 1 illustrates this case. A virtual network is configured on NVE1 and NVE2, and the two NVEs are connected via an L3 tunnel in the Data Center. A set of tenant systems is attached to NVE1 on a server. NVE2 resides on a DC Gateway device. NVE2 terminates the tunnel and uses the VNID on the packet to pass the packet to the corresponding vGW entity on the DC GW. A customer can access their systems, i.e. TS1 or TSn, in the DC via the Internet by using an IPsec tunnel [RFC4301]. The IPsec tunnel is configured between the vGW and the customer gateway at the customer site. Either static routes or BGP may be used to exchange routes between the peers. The vGW provides IPsec functionality such as the authentication scheme and encryption. Note that: 1) some vGW functions such as firewall and load balancer may also be performed by locally attached network appliance devices; 2) the virtual network in the DC may use a different address space than the external users, in which case the vGW needs to provide the NAT function; 3) more than one IPsec tunnel can be configured for redundancy; 4) the vGW may be implemented on a server or a VM, in which case IP tunnels or IPsec tunnels may be used over the DC infrastructure.

 Server +---------------+
        |   TS1    TSn  |
        |    |...|      |
        |  +-+---+-+    |                Customer Site
        |  | NVE1  |    |                   +-----+
        |  +---+---+    |                   | CGW |
        +------+--------+                   +--+--+
               |                               *
           L3 Tunnel                           *
               |                               *
 DC GW  +------+---------+               .--.  .--.
        |  +---+---+     |              (    '*   '.--.
        |  | NVE2  |     |           .-.'       *      )
        |  +---+---+     |          (       *  Internet )
        |  +---+---+.    |           (     *           /
        |  |  vGW  | * * * * * * * * * *  '-'       '-'
        |  +-------+     |   IPsec        \../ \.--/'
        |  +--------+    |   Tunnel
        +----------------+

          DC Provider Site

     Figure 1 DC Virtual Network Access via Internet

3.2. DC VN and Enterprise Sites Interconnected via SP WAN

An enterprise company may lease VM and storage resources hosted in a 3rd party DC to run its applications. For example, the company may run its web applications at the 3rd party site but run the backend applications in its own DCs. The web applications and the backend applications need to communicate privately. The 3rd party DC may construct one or more virtual networks to connect all the VMs and storage running the enterprise web applications. The company may buy a p2p private tunnel such as a VPWS from an SP to interconnect its site and the virtual network at the 3rd party site. A protocol is necessary for exchanging reachability between the two peering points, and the traffic is carried over the tunnel. If an enterprise has multiple sites, it may buy multiple p2p tunnels to form a mesh interconnection among the sites and the 3rd party site. This requires each site to peer with all other sites for route distribution.

Another way to achieve multi-site interconnection is to use Service Provider (SP) VPN services, in which each site only peers with an SP PE site. A DC Provider and a VPN SP may build a DC virtual network (VN) and a VPN independently. The VPN interconnects several enterprise sites and the DC virtual network at the DC site, i.e. the VPN site. The DC VN and the SP VPN interconnect via a local link or a tunnel. The control plane interconnection options are described in RFC 4364 [RFC4364]. In Option A with VRF-LITE [VRF-LITE], both the DC GW and the SP PE maintain a routing/forwarding table and perform a table lookup in forwarding.
In Option B, the DC GW and the SP PE do not maintain the forwarding table; they only maintain the VN and VPN identifier mapping and swap the identifier on the packet in the forwarding process. Both Options A and B require tunnel termination. In Option C, the DC GW and the SP PE use the same identifier for the VN and the VPN, and just perform tunnel stitching, i.e. change the tunnel end points. Each option has pros/cons (see RFC 4364) and has been deployed in SP networks depending on the applications. BGP may be used in these options for route distribution. Note that if the provider DC is the SP's Data Center, the DC GW and the PE may in this case be on one device.

This configuration allows the enterprise networks to communicate with the tenant systems attached to the VN in a provider DC without interfering with the DC provider's underlying physical networks and the other virtual networks in the DC. The enterprise may use its own address space on the tenant systems in the VN. The DC provider can manage which VMs and storage are attached to the VN. The enterprise customer manages which applications run on the VMs in the VN. See Section 4 for more.

The interesting feature in this use case is that the VN and the compute resources are managed by the DC provider. The DC operator can place them on any server without notifying the enterprise and the WAN SP, because the DC physical network is completely isolated from the carrier and enterprise networks. Furthermore, the DC operator may move the VMs assigned to the enterprise from one server to another in the DC without the enterprise customer's awareness, i.e. with no impact on the enterprise's 'live' applications running on these resources. Such advanced features bring DC providers great benefits in serving cloud applications, but also add some requirements for NVO3 [NVO3PRBM].

4. DC Applications Using NVO3

NVO3 gives DC operators flexibility in designing and deploying different applications in an end-to-end virtualization overlay environment, where the operators no longer need to worry about the constraints of the DC physical network configuration when creating VMs and configuring a virtual network. A DC provider may use NVO3 in various ways, and may also use it in conjunction with physical networks in a DC, for many reasons. This section highlights some use cases.

4.1. Supporting Multiple Technologies and Applications in a DC

Servers deployed in a large data center are most likely rolled in at different times and may have different capacities/features. Some servers may be virtualized, some may not; some may be equipped with virtual switches, some may not. For the ones equipped with hypervisor-based virtual switches, some may support VXLAN [VXLAN] encapsulation, some may support NVGRE encapsulation [NVGRE], and some may not support any type of encapsulation. To construct a tenant network among these servers and the ToR switches, operators may construct one virtual network and one traditional VLAN network, or two virtual networks, one using VXLAN encapsulation and the other using NVGRE.

In these cases, a gateway device or virtual GW is used to participate in multiple virtual networks. It performs the packet encapsulation/decapsulation and may also perform address mapping or translation, etc. A simplified sketch of such a gateway function follows.
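The sketch below is illustrative only. It re-encapsulates a tenant frame from one encapsulation to the other based on a configured mapping between the two virtual network identifiers; the encapsulations are reduced to labeled tuples and the identifier values are assumed, since the point is only the decapsulate/map/re-encapsulate step and not the details of the VXLAN or NVGRE headers.

   # Illustrative sketch only: a gateway participating in two virtual
   # networks that use different encapsulations for the same tenant.
   # Identifier values are assumed; real VXLAN/NVGRE headers are more
   # involved.
   VNI_TO_VSID = {5001: 7001}                     # VXLAN VNI -> NVGRE VSID
   VSID_TO_VNI = {v: k for k, v in VNI_TO_VSID.items()}

   def from_vxlan(vni, frame):
       """Decapsulate on the VXLAN side, re-encapsulate toward NVGRE."""
       return ("nvgre", VNI_TO_VSID[vni], frame)

   def from_nvgre(vsid, frame):
       """Decapsulate on the NVGRE side, re-encapsulate toward VXLAN."""
       return ("vxlan", VSID_TO_VNI[vsid], frame)

   print(from_vxlan(5001, b"tenant-frame"))   # ('nvgre', 7001, ...)
   print(from_nvgre(7001, b"tenant-frame"))   # ('vxlan', 5001, ...)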
A data center may also be constructed with multi-tier zones. Each zone has different access permissions and runs different applications. For example, a three-tier zone design has a front zone (Web tier) with Web applications, a mid zone (application tier) with service applications such as payment and booking, and a back zone (database tier) with data. External users are only able to communicate with the Web applications in the front zone. In this case, the communication between the zones MUST pass through the security GW/firewall. One virtual network may be configured in each zone, and a GW is used to interconnect two virtual networks. If individual zones use different implementations, the GW needs to support those implementations as well.

4.2. Tenant Network with Multiple Subnets or across Multiple DCs

A tenant network may contain multiple subnets. The DC physical network needs to support the connectivity for many tenant networks. The inter-subnet policies may be placed only at some designated gateway devices. Such a design requires the inter-subnet traffic to be sent to one of the gateways first for policy checking, which may cause a traffic hairpin at the gateway in a DC. It is desirable that an NVE can hold some policies and be able to forward inter-subnet traffic directly. To reduce the burden on the NVE, a hybrid design may be deployed, i.e. an NVE performs forwarding for selected inter-subnet traffic and the designated GW handles the rest. For example, each NVE performs inter-subnet forwarding within a tenant, and the designated GW is used for inter-subnet traffic from/to different tenant networks.

A tenant network may span multiple, geographically separated Data Centers. DC operators may configure an L2 VN within each DC and an L3 VN between the DCs for such a tenant network. For this configuration, the virtual L2/L3 gateway can be implemented on the DC GW device. Figure 2 illustrates this configuration.

Figure 2 depicts two DC sites. Site A constructs one L2 VN, say L2VNa, on NVE1, NVE2, and NVE5. NVE1 and NVE2 reside on the servers which host multiple tenant systems; NVE5 resides on the DC GW device. Site Z has a similar configuration, with L2VNz on NVE3, NVE4, and NVE6. One L3 VN, say L3VNx, is configured on NVE5 at site A and NVE6 at site Z. An internal Virtual Integrated Routing/Bridging (VIRB) interface is used between the L2VNI and the L3VNI on NVE5 and NVE6, respectively. The L2VNI is the MAC/NVE mapping table and the L3VNI is the IP prefix/NVE mapping table. A packet arriving at NVE5 from L2VNa will be decapsulated and converted into an IP packet, and then encapsulated and sent to site Z. The policies can be checked at the VIRB. A simplified sketch of this VIRB forwarding decision follows.

Note that L2VNa, L2VNz, and L3VNx in Figure 2 are overlay virtual networks.
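The sketch below is illustrative only and uses simplified assumptions: the L2VNI is reduced to a MAC-to-NVE table, the L3VNI to an IP-prefix-to-NVE table, and the VIRB decides whether a frame is bridged within the local L2 VN or routed onto the inter-DC L3 VN. The addresses, NVE names, and VIRB MAC are hypothetical.

   # Illustrative sketch only: a VIRB between an L2VNI and an L3VNI,
   # as in Figure 2. Tables, addresses, and names are hypothetical.
   import ipaddress

   L2VNI = {"00:00:5e:00:53:01": "nve1",          # MAC -> NVE (site A)
            "00:00:5e:00:53:02": "nve2"}
   L3VNI = {ipaddress.ip_network("192.0.2.0/24"): "nve6"}  # prefix -> NVE (site Z)
   VIRB_MAC = "00:00:5e:00:53:ff"

   def forward(dst_mac, dst_ip):
       """Bridge within the local L2 VN, or route via the L3 VN when
       the frame is addressed to the VIRB."""
       if dst_mac == VIRB_MAC:
           ip = ipaddress.ip_address(dst_ip)
           for prefix, nve in L3VNI.items():
               if ip in prefix:
                   return ("L3VNx", nve)      # re-encapsulated toward site Z
           return ("drop", None)
       return ("L2VNa", L2VNI.get(dst_mac))   # stays within the local L2 VN

   print(forward("00:00:5e:00:53:02", "198.51.100.9"))  # bridged locally
   print(forward("00:00:5e:00:53:ff", "192.0.2.7"))     # routed across DCs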
 NVE5/DCGW +------------+                    +-----------+ NVE6/DCGW
           | +-----+    |  ''''''''''''''''  |    +-----+ |
           | |L3VNI+----+'      L3VNx       '+----+L3VNI| |
           | +--+--+    |  ''''''''''''''''  |    +--+--+ |
           |    |VIRB   |                    |   VIRB|    |
           | +--+---+   |                    |  +---+--+  |
           | |L2VNIs|   |                    |  |L2VNIs|  |
           | +--+---+   |                    |  +---+--+  |
           +----+-------+                    +------+----+
           ''''|''''''''''                   ''''''|'''''''
          '    L2VNa      '                 '     L2VNz    '
  NVE1/S ''/'''''''''\'' NVE2/S      NVE3/S '''/'''''''\'' NVE4/S
 +-----+---+       +----+----+      +------+--+       +----+----+
 | +--+--+ |       | +--+--+ |      | +---+-+ |       | +--+--+ |
 | |L2VNI| |       | |L2VNI| |      | |L2VNI| |       | |L2VNI| |
 | ++---++ |       | ++---++ |      | ++---++ |       | ++---++ |
 +--+---+--+       +--+---+--+      +--+---+--+       +--+---+--+
   |...|             |...|            |...|             |...|

        Tenant Systems                     Tenant Systems

           DC Site A                          DC Site Z

       Figure 2 Tenant Virtual Network with Bridging/Routing

4.3. Virtual Data Center (vDC)

An enterprise DC today may deploy routers, switches, and network appliance devices to construct its internal network, DMZ, and external network access, and may have many servers and storage systems running various applications. A DC Provider may offer a virtual DC service to such enterprise customers. A vDC provides the same capability as a physical DC. A customer manages which applications run in the vDC and how. Instead of using many hardware devices, DC operators may use network virtualization overlay technology to build such vDCs on top of a common DC infrastructure for many customers and run the network service applications per vDC. The network service applications may include firewall, DNS, load balancer, gateway, etc. The network virtualization overlay further enables potential vDC mobility when a customer moves to a different location, because the vDC configuration is decoupled from the infrastructure network.

Figure 3 below illustrates one scenario. For simplicity, it only shows the L3 VN and L2 VNs as virtual routers or switches. In this case, DC operators create several L2 VNs (L2VNx, L2VNy, L2VNz in Figure 3) to group the tenant systems together on a per-application basis, and create one L3 VN, e.g. L3VNa, for the internal routing. A network device (which may be a VM or a server) runs the firewall/gateway applications and connects to L3VNa and the Internet. A load balancer (LB) is used in L2VNx. A VPWS p2p tunnel is also built between the gateway and the enterprise router. The enterprise customer runs its Web/Mail/Voice applications at the provider DC site; the users at the enterprise site access the applications via the VPN tunnel, or via the Internet through a gateway at the enterprise site; Internet users access the applications via the gateway in the provider DC.

The customer decides which applications are accessed by the intranet only and which by both the intranet and the extranet, and configures the proper security policy and gateway functions. Furthermore, a customer may want multiple zones in a vDC for security and/or may set different QoS levels for the different applications. A rough sketch of how such a vDC might be composed is given below.
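The sketch below is a highly simplified illustration of composing a vDC out of L2/L3 virtual networks and service appliances, as in Figure 3. The "provisioning" object and its methods are purely hypothetical; NVO3 does not define such an interface.

   # Illustrative sketch only: composing a vDC from virtual networks,
   # as in Figure 3. The provisioning object and names are hypothetical.
   class VirtualDC:
       def __init__(self, name):
           self.name = name
           self.vns = {}        # VN name -> list of attached systems
           self.services = []   # firewall/gateway, load balancer, ...

       def create_vn(self, vn_name):
           self.vns[vn_name] = []

       def attach(self, vn_name, system):
           """Assign a tenant system (VM or appliance) to a VN."""
           self.vns[vn_name].append(system)

   vdc = VirtualDC("tenant-a")
   for vn in ("L3VNa", "L2VNx", "L2VNy", "L2VNz"):
       vdc.create_vn(vn)
   vdc.attach("L2VNx", "web-vm-1")
   vdc.attach("L2VNx", "load-balancer")
   vdc.attach("L2VNy", "mail-vm-1")
   vdc.attach("L2VNz", "voip-vm-1")
   vdc.services.append("firewall/gateway")  # connects L3VNa to Internet/VPN
   print(vdc.vns)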
This use case requires the NVO3 solution to provide the DC operator with an easy way to create VNs and NVEs for any design, to quickly assign TSs to the VNIs on the NVEs they attach to, to easily set up the virtual topology and place or configure policies on an NVE or on the VMs that run network services, and to support VM mobility. Furthermore, a DC operator and/or a customer should be able to view the tenant network topology and configure the tenant network functions. A DC provider may further let a tenant manage the vDC itself.

          Internet                       ^ Internet
                                         |
              ^                       +--+---+
              |                       |  GW  |
              |                       +--+---+
              |                          |
     +--------+--------+             +--+---+
     |Firewall/Gateway +--- VPN -----+router|
     +--------+--------+             +-+--+-+
              |                        |  |
           ...+....                    |..|
    +------: L3 VNa :--------+         LANs
  +-+-+     ........         |
  |LB |        |             |     Enterprise Site
  +-+-+        |             |
   ...+...  ...+...       ...+...
  : L2VNx :: L2VNy :     : L2VNz :
   .......  .......       .......
    |..|      |..|          |..|
    |  |      |  |          |  |
   Web Apps  Mail Apps   VoIP Apps

          Provider DC Site

  firewall/gateway and Load Balancer (LB) may run on a server or VMs

          Figure 3 Virtual Data Center by Using NVO3

5. OAM Considerations

NVO3 brings the ability for a DC provider to segregate tenant traffic. A DC provider needs to manage and maintain NVO3 instances. Similarly, a tenant needs to be informed about underlying network failures impacting tenant applications, or the tenant network must be able to detect both overlay and underlay network failures and build in some resiliency mechanisms.

Various OAM and SOAM tools and procedures are defined in [IEEE 802.1ag], [ITU-T G.8013/Y.1731], [RFC4378], [RFC5880], and [ITU-T Y.1564] for L2 and L3 networks; they include continuity check, loopback, link trace, testing, alarms such as AIS/RDI, and on-demand and periodic measurements. These procedures may apply to tenant overlay networks and tenants, not only for proactive maintenance but also to ensure the support of Service Level Agreements (SLAs).

As a tunnel traverses different networks, OAM messages need to be translated at the edge of each network to ensure end-to-end OAM.

6. Summary

This document describes some general, potential use cases of NVO3 in DCs. The combination of these cases should give operators the flexibility and capability to design more sophisticated cases for various purposes.

DC services may vary, from Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) to Software as a Service (SaaS); in all of these, the network virtualization overlay is only a portion of an application service. NVO3 decouples the service construction/configuration from the DC network infrastructure configuration and helps the deployment of higher-level services over the applications.

NVO3's underlying network provides the tunneling between NVEs so that two NVEs appear as one hop to each other. Many tunneling technologies can serve this function. The tunnels may in turn be tunneled over other intermediate tunnels over the Internet or other WANs. It is also possible that intra-DC and inter-DC tunnels are stitched together to form an end-to-end tunnel between two NVEs.

A DC virtual network may be accessed by external users in a secure way. Many existing technologies can help achieve this.

NVO3 implementations may vary. Some DC operators prefer to use a centralized controller to manage tenant system reachability in a tenant network, while others prefer to use distributed protocols to advertise the tenant system location, i.e. the associated NVEs. For migration and other special requirements, different solutions may be applied to one tenant network in a DC.
When a tenant network spans multiple DCs and WANs, each network administration domain may use different methods to distribute the tenant system locations. Both control plane and data plane interworking are necessary.

7. Security Considerations

Security is a concern. DC operators need to provide a tenant with a secured virtual network, which means that one tenant's traffic is isolated from other tenants' traffic as well as from non-tenant traffic. They also need to protect the DC underlying network from any tenant application attacking it through the tenant virtual network, and to prevent one tenant application from attacking another tenant application via the DC networks. For example, a tenant application may attempt to generate a large volume of traffic to overload the DC underlying network. An NVO3 solution has to address these issues.

8. IANA Considerations

This document does not request any action from IANA.

9. Acknowledgements

The authors would like to thank Sue Hares, Young Lee, David Black, Pedro Marques, Mike McBride, David McDysan, Randy Bush, and Uma Chunduri for their reviews, comments, and suggestions.

10. References

10.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private Networks (VPNs)", RFC 4364, February 2006.

[IEEE 802.1ag] "Virtual Bridged Local Area Networks - Amendment 5: Connectivity Fault Management", December 2007.

[ITU-T G.8013/Y.1731] "OAM Functions and Mechanisms for Ethernet based Networks", 2011.

[ITU-T Y.1564] "Ethernet service activation test methodology", 2011.

[RFC4378] Allan, D. and T. Nadeau, "A Framework for Multi-Protocol Label Switching (MPLS) Operations and Management (OAM)", RFC 4378, February 2006.

[RFC4301] Kent, S., "Security Architecture for the Internet Protocol", RFC 4301, December 2005.

[RFC5880] Katz, D. and D. Ward, "Bidirectional Forwarding Detection (BFD)", RFC 5880, June 2010.

10.2. Informative References

[NVGRE] Sridharan, M., et al., "NVGRE: Network Virtualization using Generic Routing Encapsulation", draft-sridharan-virtualization-nvgre-02, work in progress.

[NVO3PRBM] Narten, T., et al., "Problem Statement: Overlays for Network Virtualization", draft-ietf-nvo3-overlay-problem-statement-03, work in progress.

[NVO3FRWK] Lasserre, M., Morin, T., et al., "Framework for DC Network Virtualization", draft-ietf-nvo3-framework-03, work in progress.

[NVO3MCAST] Ghanwani, A., "Multicast Issues in Networks Using NVO3", draft-ghanwani-nvo3-mcast-issues-00, work in progress.

[VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com

[VXLAN] Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks", draft-mahalingam-dutt-dcops-vxlan-03, work in progress.

Authors' Addresses

Lucy Yong
Huawei Technologies
5340 Legacy Dr.
Plano, TX 75025

Phone: +1-469-277-5837
Email: lucy.yong@huawei.com

Mehmet Toy
Comcast
1800 Bishops Gate Blvd.
Mount Laurel, NJ 08054

Phone: +1-856-792-2801
E-mail: mehmet_toy@cable.comcast.com

Aldrin Isaac
Bloomberg
E-mail: aldrin.isaac@gmail.com

Vishwas Manral
Hewlett-Packard Corp.
3000 Hanover Street, Building 20C
Palo Alto, CA 95014

Phone: 650-857-5501
Email: vishwas.manral@hp.com

Linda Dunbar
Huawei Technologies
5340 Legacy Dr.
Plano, TX 75025 US

Phone: +1-469-277-5840
Email: linda.dunbar@huawei.com