Network Working Group                                           L. Yong
Internet Draft                                                    Huawei
Category: Informational                                          M. Toy
                                                                 Comcast
                                                                A. Isaac
                                                               Bloomberg
                                                               V. Manral
                                                        Hewlett-Packard
                                                               L. Dunbar
                                                                  Huawei

Expires: November 2013                                      May 1, 2013

           Use Cases for DC Network Virtualization Overlays

                      draft-ietf-nvo3-use-case-01

Abstract

   This document describes DC NVO3 use cases that may be deployed in
   various data centers and apply to different applications. An
   application in a DC may be a combination of the use cases described
   here.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire in November 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document. Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1. Introduction
      1.1. Contributors
      1.2. Terminology
   2. Basic Virtual Networks in a Data Center
   3. Interconnecting DC Virtual Network and External Networks
      3.1. DC Virtual Network Access via Internet
      3.2. DC VN and Enterprise Sites Interconnected via SP WAN
   4. DC Applications Using NVO3
      4.1. Supporting Multiple Technologies and Applications in a DC
      4.2. Tenant Network with Multiple Subnets or across Multiple DCs
      4.3. Virtual Data Center (vDC)
   5. OAM Considerations
   6. Summary
   7. Security Considerations
   8. IANA Considerations
   9. Acknowledgements
   10. References
      10.1. Normative References
      10.2. Informative References
   Authors' Addresses

1. Introduction

   Server virtualization has changed the IT industry in terms of
   efficiency, cost, and the speed of providing new applications and
   services. However, the problems in today's data center networks
   hinder the support of elastic cloud services and dynamic virtual
   tenant networks [NVO3PRBM]. The goal of DC Network Virtualization
   Overlays, i.e. NVO3, is to decouple the communication among tenant
   systems from DC physical networks and to allow one physical network
   infrastructure to provide: 1) traffic isolation among tenant virtual
   networks over the same physical network; 2) independent address
   space in each virtual network and address isolation from the
   infrastructure's; 3) flexible VM placement and movement from one
   server to another without any physical network limitations. These
   characteristics will help address the issues that hinder true
   virtualization in data centers [NVO3PRBM].

   Although NVO3 enables a true virtualization environment, the NVO3
   solution has to address the communication between a virtual network
   and a physical network. This is because: 1) many DCs that need to
   provide network virtualization are currently running over physical
   networks, and the migration will happen in steps; 2) many DC
   applications are served to Internet users that run directly on
   physical networks; 3) some applications, such as Big Data analytics,
   are CPU bound and may not need the virtualization capability.
   This document describes general NVO3 use cases that apply to various
   data centers. The three types of use cases described here are:

   o  A virtual network connects many tenant systems within a Data
      Center and forms one L2 or L3 communication domain. A virtual
      network segregates its traffic from others and allows the VMs in
      the network to move from one server to another. This case may be
      used for DC-internal applications that constitute the DC East-
      West traffic.

   o  A DC provider offers a secure DC service to an enterprise
      customer and/or Internet users. In these cases, the enterprise
      customer may use a traditional VPN provided by a carrier or an
      IPsec tunnel over the Internet to connect to an NVO3 network
      within a provider DC. This mainly constitutes DC North-South
      traffic.

   o  A DC provider may use NVO3 and other network technologies for a
      tenant network, construct different topologies or zones for a
      tenant network, and may design a variety of cloud applications
      that may require network service appliances, virtual compute,
      storage, and networking. In this case, NVO3 provides the
      networking functions for the applications.

   The document uses the architecture reference model defined in
   [NVO3FRWK] to describe the use cases.

1.1. Contributors

   Vinay Bannai
   PayPal
   2211 N. First St,
   San Jose, CA 95131
   Phone: +1-408-967-7784
   Email: vbannai@paypal.com

   Ram Krishnan
   Brocade Communications
   San Jose, CA 95134
   Phone: +1-408-406-7890
   Email: ramk@brocade.com

1.2. Terminology

   This document uses the terminology defined in [NVO3FRWK] and
   [RFC4364]. Some additional terms used in the document are listed
   here.

   CPE: Customer Premises Equipment

   DMZ: Demilitarized Zone

   DNS: Domain Name Service

   NAT: Network Address Translation

   VIRB: Virtual Integrated Routing/Bridging

   Note that a virtual network in this document is a network
   virtualization overlay instance.

2. Basic Virtual Networks in a Data Center

   A virtual network may exist within a DC. The network enables
   communication among Tenant Systems (TSs) that are in a Closed User
   Group (CUG). A TS may be a physical server or a virtual machine (VM)
   on a server. The network virtualization edge (NVE) may co-exist with
   the Tenant Systems, i.e. on an end device, or reside on a different
   device, e.g. a top-of-rack (ToR) switch. A virtual network has a
   unique virtual network identifier (which may be locally or globally
   unique) for an NVE to properly differentiate it from other virtual
   networks.

   The TSs attached to the same NVE are not necessarily in the same
   CUG, i.e. in the same virtual network. Multiple CUGs can be
   constructed so that the policies are enforced when the TSs in one
   CUG communicate with the TSs in other CUGs. An NVE provides the
   reachability for Tenant Systems in a CUG, and may also hold the
   policies and provide the reachability for Tenant Systems in
   different CUGs (see Section 4.2). Furthermore, operators may
   construct many tenant networks in a DC that have no communication at
   all. In this case, each tenant network may use its own address
   space. Note that one tenant network may contain one or more CUGs.

   A Tenant System may also be configured with multiple addresses and
   participate in multiple virtual networks, i.e. use a different
   address in each virtual network. For example, a TS may be a NAT
   gateway or a firewall server for multiple CUGs.
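   The per-NVE state described above can be sketched in a few lines.
   The Python fragment below is illustrative only and assumes
   hypothetical table layouts (the names vn_membership and
   inter_cug_policy are not defined by any NVO3 specification); it
   simply shows an NVE checking virtual network membership and an
   inter-CUG policy before providing reachability between two Tenant
   Systems.

      # Illustrative only: hypothetical per-NVE tables, not an NVO3 API.
      vn_membership = {        # Tenant System -> virtual network identifier
          "ts1": 1001,         # TS1 and TS2 are in the same CUG/VN
          "ts2": 1001,
          "ts3": 2002,         # TS3 is in a different CUG/VN
      }

      inter_cug_policy = {     # (source VN, destination VN) -> action
          (1001, 2002): "permit",
      }

      def may_forward(src_ts, dst_ts):
          """True if this NVE may provide reachability from src_ts to dst_ts."""
          src_vn, dst_vn = vn_membership[src_ts], vn_membership[dst_ts]
          if src_vn == dst_vn:          # same CUG: forward within the VN
              return True
          # different CUGs: forward only if a policy explicitly permits it
          return inter_cug_policy.get((src_vn, dst_vn)) == "permit"

      print(may_forward("ts1", "ts2"))  # True: same virtual network
      print(may_forward("ts1", "ts3"))  # True only because a policy permits it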
   Network Virtualization Overlay in this context means virtual
   networks built over the DC infrastructure network via tunnels, i.e.
   a tunnel between any pair of NVEs. This architecture decouples the
   tenant system address scheme from the infrastructure address space,
   which brings great flexibility for VM placement and mobility. It
   also keeps the transit nodes in the infrastructure unaware of the
   existence of the virtual networks. One tunnel may carry the traffic
   belonging to different virtual networks; a virtual network
   identifier is used for traffic segregation in a tunnel.

   A virtual network may be an L2 or L3 domain. An NVE may be a member
   of several virtual networks, each of which is an L2 or L3 domain. A
   virtual network may carry unicast traffic and/or
   broadcast/multicast/unknown traffic from/to tenant systems. An NVE
   may use p2p tunnels or a p2mp tunnel to transport broadcast or
   multicast traffic, or may use other mechanisms [NVO3MCAST].

   It is worth mentioning two distinct cases here. The first is that
   the TS and NVE are co-located on the same end device, which means
   that the NVE can be made aware of the TS state at any time via an
   internal API.

   The second is that the TS and NVE are remotely connected, i.e.
   connected via a switched network or a point-to-point link. In this
   case, a protocol is necessary for the NVE to learn the TS state.

   One virtual network may have many NVE members, each of which many
   TSs may attach to. TS dynamic placement and mobility result in
   frequent changes of the TS and NVE bindings. The TS reachability
   update mechanism MUST be fast enough not to cause any service
   interruption. The capability of supporting many TSs in a tenant
   network and many tenant networks is critical for an NVO3 solution.

   If a virtual network spans multiple DC sites, one design is to allow
   the corresponding NVO3 instance to seamlessly span those sites
   without termination at the DC gateway routers. In this case, the
   tunnel between a pair of NVEs may in turn be tunneled over other
   intermediate tunnels over the Internet or other WANs, or the intra-
   DC and inter-DC tunnels may be stitched together to form an end-to-
   end virtual network across DCs. The latter is described in Section
   3.2. Section 4.2 describes other options.

3. Interconnecting DC Virtual Network and External Networks

   Customers (enterprises or individuals) who want to utilize a DC
   provider's compute and storage resources to run their applications
   need to access their systems hosted in the DC through the Internet
   or Service Providers' WANs. A DC provider may construct an NVO3
   network to which all the resources designated for a customer
   connect, and allow the customer to access the systems via that
   network. This, in turn, becomes the case of interconnecting a DC
   NVO3 network and external networks via the Internet or WANs. Two
   cases are described here.

3.1. DC Virtual Network Access via Internet

   A user or an enterprise customer connects securely to a DC virtual
   network via the Internet. Figure 1 illustrates this case. A virtual
   network is configured on NVE1 and NVE2, and the two NVEs are
   connected via an L3 tunnel in the Data Center. A set of tenant
   systems are attached to NVE1 on a server. NVE2 resides on a DC
   Gateway device. NVE2 terminates the tunnel and uses the VNID on the
   packet to pass the packet to the corresponding VN GW entity on the
   DC GW. A user or customer can access their systems, i.e. TS1 or TSn,
   in the DC via the Internet by using an IPsec tunnel [RFC4301]. The
   IPsec tunnel is between the VN GW and the user or the CPE at the
   enterprise edge location. The VN GW provides IPsec functionality
   such as the authentication scheme and encryption, as well as the
   mapping to the right virtual network entity on the DC GW. Note that
   1) some VN GW functions such as firewall and load balancer may also
   be performed by locally attached network appliance devices; 2) if
   the virtual network in the DC uses a different address space than
   the external users, the VN GW also performs the NAT function.
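   Before turning to Figure 1, the demultiplexing performed on the DC
   GW can be sketched as two small lookups. The fragment below is
   illustrative only; the table names (vnid_to_vngw, ipsec_peer_to_vnid)
   and their contents are assumptions made for this example and are not
   defined by NVO3. It only shows how a packet arriving from the
   overlay tunnel or from an authenticated IPsec peer could be steered
   to the right virtual network entity.

      # Illustrative only: hypothetical DC GW demultiplexing tables.
      vnid_to_vngw = {1001: "VNGW1"}                # overlay VNID -> VN GW entity
      ipsec_peer_to_vnid = {"198.51.100.7": 1001}   # IPsec peer -> VNID

      def from_overlay(vnid, inner_packet):
          """Packet decapsulated from the NVO3 tunnel: pick the VN GW by VNID."""
          # The VN GW entity may then apply IPsec, NAT, or firewall functions.
          return vnid_to_vngw[vnid], inner_packet

      def from_internet(peer_ip, packet):
          """Packet arriving over IPsec from a user/CPE: map the peer to its VN."""
          vnid = ipsec_peer_to_vnid[peer_ip]    # only authenticated peers mapped
          return vnid, packet                   # re-encapsulated toward NVE1

      print(from_overlay(1001, b"frame"))             # ('VNGW1', b'frame')
      print(from_internet("198.51.100.7", b"frame"))  # (1001, b'frame')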
   Server +---------------+
          |   TS1    TSn  |
          |    |...|      |
          |  +-+---+-+    |                       External User
          |  | NVE1  |    |                          +-----+
          |  +---+---+    |                          | PC  |
          +------+--------+                          +--+--+
                 |                                      *
             L3 Tunnel                                  *
                 |                                      *
   DC GW  +------+---------+                    .--.   .--.
          |  +---+---+     |                   (    '*    '.--.
          |  | NVE2  |     |                .-.'        *      )
          |  +---+---+     |               (         *  Internet )
          |  +---+---+.    |                (      *           /
          |  | VNGW1  * * * * * * * * * * * * * * '-'        '-'
          |  +-------+     |      IPsec      \../  \.--/'
          |  +--------+    |      Tunnel
          +----------------+

               DC Provider Site

        Figure 1 DC Virtual Network Access via Internet

3.2. DC VN and Enterprise Sites Interconnected via SP WAN

   An enterprise company may lease compute resources from a DC provider
   to run some of its applications. For example, the company may run
   its web applications at DC provider sites but run the backend
   applications in its own DCs. The web applications and backend
   applications need to communicate privately. The DC provider may
   construct an NVO3 network to connect all the VMs running the
   enterprise's web applications. The enterprise company may buy a p2p
   private tunnel such as a VPWS from an SP to interconnect its site
   and the NVO3 network at the provider DC site. A protocol is
   necessary for exchanging reachability between the two peering
   points, and the traffic is carried over the tunnel. If an enterprise
   has multiple sites, it may buy multiple p2p tunnels to form a mesh
   interconnection among the sites and the DC provider site. This
   requires each site to peer with all other sites for route
   distribution.

   Another way to achieve multi-site interconnection is to use Service
   Provider (SP) VPN services, in which each site only peers with an SP
   PE. A DC provider and a VPN SP may build an NVO3 network (VN) and a
   VPN independently. The VN provides the networking for all the
   related TSs within the provider DC. The VPN interconnects several
   enterprise sites, i.e. VPN sites. The DC provider and the VPN SP
   further connect the VN and the VPN at the DC GW/ASBR and the SP
   PE/ASBR. Several options for the interconnection of the VN and VPN
   are described in RFC 4364 [RFC4364]. In Option A with VRF-LITE
   [VRF-LITE], both the DC GW and the SP PE maintain routing/forwarding
   tables and perform a table lookup when forwarding. In Option B, the
   DC GW and SP PE do not maintain forwarding tables; they only
   maintain the VN and VPN identifier mapping and swap the identifier
   on the packet in the forwarding process. In Option C, the DC GW and
   SP PE use the same identifier for the VN and VPN and just perform
   tunnel stitching, i.e. change the tunnel endpoints. Each option has
   pros and cons (see [RFC4364]) and has been deployed in SP networks
   depending on the application. BGP may be used in these options for
   route distribution. Note that if the provider DC is the SP's data
   center, the DC GW and PE in this case may be on one device.
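   The Option B behavior amounts to a simple identifier swap at the DC
   GW/ASBR. The fragment below is illustrative only; the table contents
   and the names vn_to_vpn, to_wan, and to_dc are assumptions made for
   this example, and the control plane (e.g. BGP) that would populate
   the mapping is not shown.

      # Illustrative only: Option B style identifier mapping at a DC GW/ASBR.
      vn_to_vpn = {1001: 500123}   # overlay VNID -> VPN identifier toward SP PE
      vpn_to_vn = {v: k for k, v in vn_to_vpn.items()}   # reverse direction

      def to_wan(vnid, payload):
          """Traffic leaving the DC: swap the VNID for the VPN identifier."""
          return vn_to_vpn[vnid], payload    # forwarded toward the SP PE

      def to_dc(vpn_id, payload):
          """Traffic entering the DC: swap the VPN identifier for the VNID."""
          return vpn_to_vn[vpn_id], payload  # re-encapsulated into the NVO3 tunnel

      print(to_wan(1001, b"packet"))    # (500123, b'packet')
      print(to_dc(500123, b"packet"))   # (1001, b'packet')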
   This configuration allows the enterprise networks to communicate
   with the tenant systems attached to the VN in the provider DC
   without interfering with the DC provider's underlying physical
   networks or with other virtual networks in the DC. The enterprise
   may use its own address space on the tenant systems attached to the
   VN. The DC provider can manage the VM and storage attachment to the
   VN for the enterprise customer. The enterprise customer can
   determine and run its applications on the VMs. See Section 4 for
   more detail.

   The interesting feature of this use case is that the VN and the
   compute resources are managed by the DC provider. The DC operator
   can place them at any location without notifying the enterprise or
   the WAN SP, because the DC physical network is completely isolated
   from the carrier and enterprise networks. Furthermore, the DC
   operator may move the VMs assigned to the enterprise from one server
   to another in the DC without the enterprise customer's awareness,
   i.e. with no impact on the enterprise's 'live' applications running
   on these resources. Such advanced features bring DC providers great
   benefits in serving these kinds of applications, but also add some
   requirements for NVO3 [NVO3PRBM].

4. DC Applications Using NVO3

   NVO3 brings DC operators flexibility in designing and deploying
   different applications in an end-to-end virtualization environment,
   where the operators need not worry about the constraints of the
   physical network configuration in the Data Center. A DC provider may
   use NVO3 in various ways and may also use it in conjunction with
   physical networks in the DC for many reasons. This section
   highlights some use cases but is not limited to them.

4.1. Supporting Multiple Technologies and Applications in a DC

   Servers deployed in a large data center are most likely rolled in at
   different times and may have different capacities/features. Some
   servers may be virtualized, some may not; some may be equipped with
   virtual switches, some may not. For the ones equipped with
   hypervisor-based virtual switches, some may support VXLAN
   encapsulation [VXLAN], some may support NVGRE encapsulation [NVGRE],
   and some may not support any type of encapsulation. To construct a
   tenant virtual network among these servers and the ToR switches, the
   operator may construct one overlay virtual network and one virtual
   network without overlay, or two overlay virtual networks with
   different implementations. For example, one overlay virtual network
   may use VXLAN encapsulation while another virtual network without
   overlay uses traditional VLANs, or another overlay virtual network
   uses NVGRE.

   A gateway device or a virtual gateway on a device may be used to
   interconnect them. The gateway participates in both virtual
   networks. It performs the packet encapsulation/decapsulation and may
   also perform address mapping or translation, etc.

   A data center may also be constructed with multi-tier zones. Each
   zone has different access permissions and runs different
   applications. For example, a three-tier zone design has a front zone
   (Web tier) with Web applications, a mid zone (application tier) with
   service applications such as payment and booking, and a back zone
   (database tier) with data. External users are only able to
   communicate with the Web applications in the front zone. In this
   case, the communication between zones MUST pass through the security
   GW/firewall. Network virtualization may be used in each zone. If
   individual zones use different implementations, the GW needs to
   support those implementations as well.
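   As an illustration of what such a gateway does on the wire, the
   sketch below builds the 8-octet VXLAN header described in [VXLAN] (a
   flags octet with the I bit set, reserved octets, and a 24-bit VNID)
   and prepends it to an inner Ethernet frame. It is illustrative only;
   a real gateway would also handle the outer UDP/IP/Ethernet headers,
   the NVGRE or VLAN re-encapsulation on the other side, and any
   address mapping.

      import struct

      def vxlan_encapsulate(vni, inner_frame):
          """Prepend a VXLAN header (per [VXLAN]) to an inner Ethernet frame."""
          flags = 0x08                  # "I" flag: a valid VNI is present
          # 8 octets: flags, 3 reserved octets, 24-bit VNI, 1 reserved octet
          header = struct.pack("!B3xI", flags, vni << 8)
          return header + inner_frame   # then carried in UDP over the underlay

      def vxlan_decapsulate(packet):
          """Strip the VXLAN header and recover the VNI and the inner frame."""
          flags, word = struct.unpack("!B3xI", packet[:8])
          assert flags & 0x08, "VNI must be flagged as valid"
          return word >> 8, packet[8:]

      vni, frame = vxlan_decapsulate(vxlan_encapsulate(1001, b"inner-frame"))
      print(vni, frame)   # 1001 b'inner-frame'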
4.2. Tenant Network with Multiple Subnets or across Multiple DCs

   A tenant network may contain multiple subnets, and DC operators may
   construct multiple tenant networks. Access policies between subnets
   are often necessary. To simplify policy management, the policies may
   be placed only at some designated gateway devices. Such a design
   requires that the inter-subnet traffic MUST be sent to one of the
   gateways first for policy checking. However, this may cause traffic
   hairpinning at the gateway in a DC. It is desirable that an NVE can
   hold some policies and be able to forward inter-subnet traffic
   directly. To reduce the burden on the NVE, a hybrid design may be
   deployed, i.e. an NVE performs forwarding for selected inter-subnet
   pairs and the designated GW performs it for the rest. For example,
   each NVE performs inter-subnet forwarding within a tenant, and the
   designated GW is used for inter-subnet traffic to/from different
   tenant networks.

   A tenant network may span multiple Data Centers over distance. DC
   operators may want an L2VN within each DC and an L3VN between DCs
   for a tenant network. L2 bridging has simplicity and endpoint
   awareness, while L3 routing has advantages in policy-based routing,
   aggregation, and scalability. For this configuration, the virtual
   L2/L3 gateway can be implemented on the DC GW device. Figure 2
   illustrates this configuration.

   Figure 2 depicts two DC sites. Site A constructs an L2VN with NVE1,
   NVE2, and NVE5. NVE1 and NVE2 reside on the servers where the tenant
   systems are created. NVE5 resides on the DC GW device. Site Z has a
   similar configuration with NVE3 and NVE4 on the servers and NVE6 on
   the DC GW. An L3VN is configured between NVE5 at site A and NVE6 at
   site Z. An internal Virtual Integrated Routing and Bridging (VIRB)
   function is used between the L2VNI and the L3VNI on NVE5 and NVE6.
   The L2VNI maintains the MAC/NVE mapping table and the L3VNI
   maintains the IP prefix/NVE mapping table. A packet arriving at NVE5
   from the L2VN will be decapsulated, converted into an IP packet, and
   then encapsulated and sent to site Z.

   Note that the L2VNs and the L3VN in Figure 2 are encapsulated and
   carried within the DC networks and across the WAN, respectively.

   NVE5/DCGW+------------+                 +-----------+NVE6/DCGW
            |  +-----+   |  ''''''''''''   |  +-----+  |
            |  |L3VNI+----+'    L3VN    '+---+L3VNI|   |
            |  +--+--+   |  ''''''''''''   |  +--+--+  |
            |     |VIRB  |                 |  VIRB|    |
            |  +--+---+  |                 | +---+--+  |
            |  |L2VNIs|  |                 | |L2VNIs|  |
            |  +--+---+  |                 | +---+--+  |
            +----+-------+                 +------+----+
           ''''|''''''''''                ''''''|'''''''
          '      L2VN     '              '     L2VN    '
   NVE1/S ''/'''''''''\'' NVE2/S  NVE3/S '''/'''''''\'' NVE4/S
   +-----+---+    +----+----+     +------+--+   +----+----+
   | +--+--+ |    | +--+--+ |     | +---+-+ |   | +--+--+ |
   | |L2VNI| |    | |L2VNI| |     | |L2VNI| |   | |L2VNI| |
   | ++---++ |    | ++---++ |     | ++---++ |   | ++---++ |
   +--+---+--+    +--+---+--+     +--+---+--+   +--+---+--+
      |...|          |...|           |...|         |...|

       Tenant Systems                 Tenant Systems

        DC Site A                      DC Site Z

      Figure 2 Tenant Virtual Network with Bridging/Routing
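   The L2VNI/L3VNI split on NVE5 and NVE6 can be illustrated with a
   small lookup sketch. The Python fragment below is illustrative only
   and uses hypothetical table contents; it simply shows a MAC/NVE
   lookup for traffic that stays within site A's L2VN and an IP-
   prefix/NVE lookup for traffic that the VIRB hands to the L3VNI
   toward site Z.

      import ipaddress

      # Illustrative only: hypothetical L2VNI and L3VNI tables on NVE5.
      l2vni_mac_to_nve = {                  # MAC -> NVE within site A's L2VN
          "00:00:5e:00:53:01": "NVE1",
          "00:00:5e:00:53:02": "NVE2",
      }
      l3vni_prefix_to_nve = {               # IP prefix -> NVE across the L3VN
          ipaddress.ip_network("192.0.2.0/25"):   "NVE5",  # local site A subnet
          ipaddress.ip_network("192.0.2.128/25"): "NVE6",  # remote site Z subnet
      }

      def next_hop_nve(dst_mac, dst_ip):
          """L2 lookup first; otherwise the VIRB hands the packet to the L3VNI."""
          if dst_mac in l2vni_mac_to_nve:
              return l2vni_mac_to_nve[dst_mac]
          dst = ipaddress.ip_address(dst_ip)
          for prefix, nve in l3vni_prefix_to_nve.items():
              if dst in prefix:
                  return nve
          return None

      print(next_hop_nve("00:00:5e:00:53:02", "192.0.2.10"))   # NVE2 (site A)
      print(next_hop_nve("00:00:5e:00:53:99", "192.0.2.200"))  # NVE6 (site Z)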
4.3. Virtual Data Center (vDC)

   An enterprise DC today may often use several routers, switches, and
   network appliance devices to construct its internal network, DMZ,
   and external network access. A DC provider may offer a virtual DC
   (vDC) service to an enterprise customer and run enterprise
   applications such as website/email services as well. Instead of
   using many dedicated hardware devices, with network virtualization
   overlay technology DC operators may build such vDCs on top of a
   common network infrastructure for many such customers and run the
   network service applications on a per-vDC basis. Network service
   applications such as firewall, DNS, and load balancer can be
   designed per vDC. The network virtualization overlay further enables
   potential vDC mobility when a customer moves to a different
   location, because the tenant system and network appliance
   configuration can be completely decoupled from the infrastructure
   network.

   Figure 3 below illustrates one scenario. For simplicity of
   illustration, it only shows the L3VN or L2VN as virtual and overlay
   routers or switches. In this case, DC operators construct several
   L2VNs (L2VNx, L2VNy, L2VNz in Figure 3) to group the end tenant
   systems on a per-application basis, and create an L3VNa for the
   internal routing. A network device (which may be a VM or server)
   runs firewall/gateway applications and connects to the L3VNa and the
   Internet. A load balancer (LB) is used in L2VNx. A VPWS p2p tunnel
   is also built between the gateway and the enterprise router. The
   design runs the enterprise's Web/Mail/Voice applications at the
   provider DC site; lets the users at the enterprise site access the
   applications via the VPN tunnel, and the Internet via a gateway at
   the enterprise site; and lets Internet users access the applications
   via the gateway in the provider DC.

   The enterprise customer decides which applications are accessible
   via the intranet only and which via both intranet and extranet; DC
   operators then design and configure the proper security policies and
   gateway functions. Furthermore, DC operators may use multiple zones
   in a vDC for security and/or set different QoS levels for the
   different applications.

   This use case requires the NVO3 solution to provide the DC operator
   an easy way to create a VN and NVEs for any design, to quickly
   assign TSs to the VNI on the NVE they attach to, to easily set up
   the virtual topology and place or configure policies on an NVE or on
   VMs that run network services, and to support VM mobility.
   Furthermore, the DC operator needs to view the tenant network
   topology, know the tenant node capabilities, and be able to
   configure a network service on a tenant node. The DC provider may
   further let a tenant manage the vDC itself.

          Internet                           ^ Internet
                                             |
              ^                           +--+---+
              |                           |  GW  |
              |                           +--+---+
              |                              |
     +--------+--------+                  +--+---+
     |FireWall/Gateway +--- VPWS/MPLS ----+Router|
     +--------+--------+                  +-+--+-+
              |                             |  |
           ...+...                          |..|
    +-----: L3VNa :--------+                LANs
  +-+-+    .......         |
  |LB |      |             |            Enterprise Site
  +-+-+      |             |
  ...+...  ...+...      ...+...
 : L2VNx :: L2VNy :    : L2VNz :
  .......  .......      .......
   |..|     |..|          |..|
   |  |     |  |          |  |
  Web Apps Mail Apps    VoIP Apps

            Provider DC Site

  firewall/gateway and Load Balancer (LB) may run on a server or VMs

        Figure 3 Virtual Data Center by Using NVO3
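   The kind of vDC description that an operator would need to turn into
   VN, NVE, and policy configuration might look like the sketch below.
   It is purely illustrative; the field names are assumptions made for
   this example and do not correspond to any NVO3-defined data model or
   API.

      # Illustrative only: a hypothetical description of the vDC in Figure 3.
      vdc = {
          "tenant": "enterprise-a",
          "l2vns": {
              "L2VNx": {"members": ["web-vm-1", "web-vm-2", "lb-vm"]},
              "L2VNy": {"members": ["mail-vm-1"]},
              "L2VNz": {"members": ["voip-vm-1"]},
          },
          "l3vn": {
              "L3VNa": {"attached": ["L2VNx", "L2VNy", "L2VNz", "fw-gw"]},
          },
          "services": {
              "fw-gw": {"role": "firewall/gateway",
                        "uplinks": ["Internet", "VPWS"]},
              "lb-vm": {"role": "load-balancer", "vn": "L2VNx"},
          },
          "policies": [
              {"from": "Internet", "to": "L2VNx", "action": "permit"},  # web only
              {"from": "Internet", "to": "L2VNy", "action": "deny"},
          ],
      }

      # An orchestration system would translate this into per-NVE VNID
      # assignments, TS attachments, and gateway/firewall policies.
      for name, l2vn in vdc["l2vns"].items():
          print(name, "->", l2vn["members"])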
5. OAM Considerations

   NVO3 brings the ability for a DC provider to segregate tenant
   traffic. A DC provider needs to manage and maintain NVO3 instances.
   Similarly, the tenant needs to be informed about underlying network
   failures impacting tenant applications, or the tenant network needs
   to be able to detect both overlay and underlay network failures and
   build in some resiliency mechanisms.

   Various OAM and SOAM tools and procedures are defined in [IEEE
   802.1ag], [ITU-T G.8013/Y.1731], [RFC4378], [RFC5880], and [ITU-T
   Y.1564] for L2 and L3 networks, including continuity check,
   loopback, link trace, testing, alarms such as AIS/RDI, and on-demand
   and periodic measurements. These procedures may apply to tenant
   overlay networks and tenants, not only for proactive maintenance but
   also to ensure support of Service Level Agreements (SLAs).

   As the tunnel traverses different networks, OAM messages need to be
   translated at the edge of each network to ensure end-to-end OAM.

6. Summary

   This document describes some general potential use cases of NVO3 in
   DCs. The combination of these use cases should give operators the
   flexibility and capability to design more sophisticated cases for
   various purposes.

   DC services may vary from Infrastructure as a Service (IaaS) and
   Platform as a Service (PaaS) to Software as a Service (SaaS), in
   which the network virtualization overlay is just a portion of an
   application service. NVO3 decouples the service
   construction/configuration from the DC network infrastructure
   configuration and eases the deployment of higher-level services.

   NVO3's underlying network provides the tunneling between NVEs so
   that two NVEs appear as one hop to each other. Many tunneling
   technologies can serve this function. The tunneling may in turn be
   tunneled over other intermediate tunnels over the Internet or other
   WANs. It is also possible that intra-DC and inter-DC tunnels are
   stitched together to form an end-to-end tunnel between two NVEs.

   A DC virtual network may be accessed via an external network in a
   secure way. Many existing technologies can help achieve this.

   NVO3 implementations may vary. Some DC operators prefer to use a
   centralized controller to manage tenant system reachability in a
   tenant network, while others prefer to use distributed protocols to
   advertise the tenant system locations, i.e. the attached NVEs. For
   migration and special requirements, different solutions may apply to
   one tenant network in a DC. When a tenant network spans multiple DCs
   and WANs, each network administration domain may use different
   methods to distribute the tenant system locations. Both control
   plane and data plane interworking are necessary.

7. Security Considerations

   Security is a concern. DC operators need to provide a tenant with a
   secured virtual network, which means that one tenant's traffic is
   isolated from other tenants' traffic and from non-tenant traffic;
   they also need to protect the DC underlying network from any tenant
   application attacking it through the tenant virtual network, and
   prevent one tenant application from attacking another tenant
   application via the DC networks. For example, a tenant application
   may attempt to generate a large volume of traffic to overload the DC
   underlying network. The NVO3 solution has to address these issues.

8. IANA Considerations

   This document does not request any action from IANA.
9. Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, Randy Bush, and Uma
   Chunduri for their reviews, comments, and suggestions.

10. References

10.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", RFC 4364, February 2006.

   [IEEE 802.1ag]  IEEE, "Virtual Bridged Local Area Networks -
              Amendment 5: Connectivity Fault Management", December
              2007.

   [ITU-T G.8013/Y.1731]  ITU-T, "OAM Functions and Mechanisms for
              Ethernet based Networks", 2011.

   [ITU-T Y.1564]  ITU-T, "Ethernet service activation test
              methodology", 2011.

   [RFC4378]  Allan, D. and Nadeau, T., "A Framework for Multi-Protocol
              Label Switching (MPLS) Operations and Management (OAM)",
              RFC 4378, February 2006.

   [RFC4301]  Kent, S. and Seo, K., "Security Architecture for the
              Internet Protocol", RFC 4301, December 2005.

   [RFC5880]  Katz, D. and Ward, D., "Bidirectional Forwarding
              Detection (BFD)", RFC 5880, June 2010.

10.2. Informative References

   [NVGRE]    Sridharan, M., et al., "NVGRE: Network Virtualization
              using Generic Routing Encapsulation", draft-sridharan-
              virtualization-nvgre-02, work in progress.

   [NVO3PRBM] Narten, T., et al., "Problem Statement: Overlays for
              Network Virtualization", draft-ietf-nvo3-overlay-problem-
              statement-02, work in progress.

   [NVO3FRWK] Lasserre, M., Morin, T., et al., "Framework for DC
              Network Virtualization", draft-ietf-nvo3-framework-02,
              work in progress.

   [NVO3MCAST] Ghanwani, A., "Multicast Issues in Networks Using NVO3",
              draft-ghanwani-nvo3-mcast-issues-00, work in progress.

   [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com

   [VXLAN]    Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework for
              Overlaying Virtualized Layer 2 Networks over Layer 3
              Networks", draft-mahalingam-dutt-dcops-vxlan-03, work in
              progress.

Authors' Addresses

   Lucy Yong
   Huawei Technologies,
   5340 Legacy Dr.
   Plano, TX 75025

   Phone: +1-469-277-5837
   Email: lucy.yong@huawei.com

   Mehmet Toy
   Comcast
   1800 Bishops Gate Blvd.,
   Mount Laurel, NJ 08054

   Phone: +1-856-792-2801
   E-mail: mehmet_toy@cable.comcast.com

   Aldrin Isaac
   Bloomberg
   E-mail: aldrin.isaac@gmail.com

   Vishwas Manral
   Hewlett-Packard Corp.
   3000 Hanover Street, Building 20C
   Palo Alto, CA 95014

   Phone: 650-857-5501
   Email: vishwas.manral@hp.com

   Linda Dunbar
   Huawei Technologies,
   5340 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com