Network working group                                          L. Yong
Internet Draft                                                  Huawei
Category: Informational                                         M. Toy
                                                               Comcast
                                                              A. Isaac
                                                            Bloomberg
                                                            V. Manral
                                                      Hewlett-Packard
                                                            L. Dunbar
                                                                Huawei

Expires: August 2013                                 February 15, 2013

           Use Cases for DC Network Virtualization Overlays

                      draft-ietf-nvo3-use-case-00

Abstract

   This draft describes general NVO3 use cases. The intent is to help
   validate the NVO3 framework and requirements during the development
   of solutions.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire in August 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1. Introduction...................................................3
   2. Terminology....................................................4
   3. Basic Virtual Networks in a Data Center........................4
   4. Interconnecting DC Virtual Network and External Networks.......6
      4.1. DC Virtual Network Access via Internet....................7
      4.2. DC Virtual Network and WAN VPN Interconnection............7
   5. DC Applications Using NVO3.....................................9
      5.1. Supporting Multiple Technologies in a Data Center........10
      5.2. Tenant Virtual Network with Bridging/Routing.............10
      5.3. Virtual Data Center (VDC)................................11
      5.4. Federating NVO3 Domains..................................13
   6. OAM Considerations............................................13
   7. Summary.......................................................13
   8. Security Considerations.......................................14
   9. IANA Considerations...........................................14
   10. Acknowledgements.............................................15
   11. References...................................................15
      11.1. Normative References....................................15
      11.2. Informative References..................................16
   Authors' Addresses...............................................16

1. Introduction

   Compute virtualization has dramatically and quickly changed the IT
   industry in terms of efficiency, cost, and the speed of providing
   new applications and services.
   However, problems in today's data centers hinder the support of
   elastic cloud services and dynamic virtual tenant networks
   [NVO3PRBM]. The goal of DC Network Virtualization Overlays, i.e.
   NVO3, is to decouple tenant system communication networking from DC
   physical networks and to allow one physical network infrastructure
   to provide: 1) traffic isolation among virtual networks over the
   same physical network; 2) independent address space in each virtual
   network, isolated from the infrastructure's; 3) flexible VM
   placement and movement from one server to another without any
   physical network limitation. These characteristics will help
   address the issues in today's data centers [NVO3PRBM].

   Although NVO3 may enable a true virtual environment where VMs and
   network service appliances communicate, an NVO3 solution also has
   to address the communication between a virtual network and a
   physical network. This is because: 1) many traditional DCs exist
   and will not disappear any time soon; 2) many DC applications serve
   Internet and/or corporate users on physical networks; 3) some
   applications, such as CPU-bound Big Data analytics, may not want
   the virtualization capability.

   This document describes general NVO3 use cases that apply to
   various data center networks, to ensure that the NVO3 framework and
   solutions can meet the demands. Three types of use cases are
   described here:

   o  A virtual network connects many tenant systems within a Data
      Center and forms one L2 or L3 communication domain. A virtual
      network segregates its traffic from others and allows the VMs in
      the network to move from one server to another. This case may be
      used for DC internal applications that constitute the DC East-
      West traffic.

   o  A DC provider offers a secure DC service to an enterprise
      customer and/or Internet users.
      In these cases, the enterprise customer may use a traditional
      VPN provided by a carrier or an IPsec tunnel over the Internet
      to connect to an overlay virtual network offered by a Data
      Center provider. This mainly constitutes DC North-South traffic.

   o  A DC provider uses NVO3 to design a variety of cloud
      applications that make use of network service appliances,
      virtual compute, storage, and networking. In this case, NVO3
      provides the virtual networking functions for the applications.

   The document uses the architecture reference model and terminology
   defined in [NVO3FRWK] to describe the use cases.

2. Terminology

   This document uses the terminology defined in [NVO3FRWK] and
   [RFC4364]. Some additional terms used in the document are listed
   here.

   CUG: Closed User Group

   L2 VNI: L2 Virtual Network Instance

   L3 VNI: L3 Virtual Network Instance

   ARP: Address Resolution Protocol

   CPE: Customer Premises Equipment

   DNS: Domain Name Service

   DMZ: DeMilitarized Zone

   NAT: Network Address Translation

   VNIF: Virtual Network Interconnection Interface

3. Basic Virtual Networks in a Data Center

   A virtual network may exist within a DC. The network enables
   communication among tenant systems (TSs) that are in a Closed User
   Group (CUG). A TS may be a physical server or a virtual machine
   (VM) on a server. A virtual network has a unique virtual network
   identifier (which may be locally or globally unique) for
   switches/routers to properly differentiate it from other virtual
   networks. The CUGs are formed so that proper policies can be
   applied when the TSs in one CUG communicate with the TSs in other
   CUGs.

   Figure 1 depicts this case using the framework model [NVO3FRWK].
   NVE1 and NVE2 are two network virtualization edges and each may
   reside on a server or ToR. Each NVE may be a member of one or more
   virtual networks. Each virtual network may be L2 or L3 based.
   In this illustration, three virtual networks with VN contexts Ta,
   Tn, and Tm are shown. The VN 'Ta' terminates on both NVE1 and NVE2;
   the VN 'Tn' terminates on NVE1 and the VN 'Tm' on NVE2 only. If an
   NVE is a member of a VN, one or more virtual network instances
   (VNIs) (i.e. routing and forwarding tables) exist on the NVE. Each
   NVE has one overlay module to perform frame
   encapsulation/decapsulation and tunnel initiation/termination. In
   this scenario, a tunnel between NVE1 and NVE2 is necessary for the
   virtual network Ta.

   A TS attaches to a virtual network (VN) via a virtual access point
   (VAP) on an NVE. One TS may participate in one or more virtual
   networks via VAPs; one NVE may be configured with multiple VAPs for
   a VN. Furthermore, if individual virtual networks use different
   address spaces, a TS participating in all of them will be
   configured with multiple addresses as well. A TS acting as a
   gateway is an example of this. In addition, multiple TSs may use
   one VAP to attach to a VN. For example, if VMs are on a server and
   the NVE is on the ToR, some VMs may attach to the NVE via a VLAN.

   A VNI on an NVE is a routing and forwarding table that caches
   and/or maintains the mapping of a tenant system to its attached
   NVE. The table entries may be updated by the control plane, data
   plane, management plane, or a combination of them.
                        +------- L3 Network ------+
                        |      Tunnel Overlay     |
           +------------+--------+      +--------+-------------+
           | +----------+------+ |      | +------+----------+  |
           | | Overlay Module  | |      | | Overlay Module  |  |
           | +---+---------+---+ |      | +--+----------+---+  |
           |     |Ta       |Tn   |      |    |Ta        |Tm    |
           |  +--+---+  +--+---+ |      |  +-+----+  +--+---+  |
           |  | VNIa |..| VNIn | |      |  | VNIa |..| VNIm |  |
      NVE1 |  ++----++  ++----++ |      |  ++----++  ++----++  | NVE2
           |   |VAPs|    |VAPs|  |      |   |VAPs|    |VAPs|   |
           +---+----+----+----+--+      +---+----+----+----+---+
               |    |    |    |             |    |    |    |
         ------+----+----+----+-----  ------+----+----+----+-----
               | .. |    | .. |             | .. |    | .. |
               |    |    |    |             |    |    |    |
              Tenant systems               Tenant systems

             Figure 1 NVO3 for Tenant System Networking

   One virtual network may have many NVE members and may interconnect
   several thousand TSs (as a matter of policy); the capability of
   supporting many TSs per tenant instance and TS mobility is critical
   for an NVO3 solution, no matter where an NVE resides.

   It is worth mentioning two distinct cases here. The first is when a
   TS and an NVE are co-located on the same physical device, which
   means that the NVE is aware of the TS state at any time via an
   internal API. The second is when a TS and an NVE are remotely
   connected, i.e. connected via a switched network or a point-to-
   point link. In this case, a protocol is necessary for the NVE to
   know the TS state.

   Note that if all NVEs are co-located with the TSs in a CUG, the
   communication in the CUG is in a true virtual environment. If a TS
   connects to an NVE remotely, the communication from this TS to
   other TSs in the CUG is not in a true virtual environment. The
   packets to/from this TS are directly carried over a physical
   network, i.e. on the wire. This may require some configuration on
   the physical network to facilitate the communication.
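The VNI described in this section, a table mapping each tenant system to its attached NVE, can be sketched as follows. This is an illustrative toy model, not a normative definition from this draft: the class and field names are invented, and the encapsulation is abstracted to a dictionary since the actual header format is solution-specific (e.g. VxLAN or NVGRE).

```python
# Illustrative sketch of a VNI as a routing/forwarding table on an NVE:
# it maps a tenant system's address to the NVE where that TS attaches,
# so the overlay module knows which tunnel endpoint to send toward.
# All names are hypothetical, not taken from the draft.

class VNI:
    """One virtual network instance held by an NVE."""
    def __init__(self, vn_context):
        self.vn_context = vn_context  # VN identifier carried in the encapsulation
        self.ts_to_nve = {}           # tenant address -> remote NVE (tunnel endpoint)

    def update(self, ts_addr, nve_addr):
        # An entry may be learned via the control plane, data plane,
        # or management plane, or a combination of them.
        self.ts_to_nve[ts_addr] = nve_addr

    def lookup(self, ts_addr):
        return self.ts_to_nve.get(ts_addr)

def encapsulate(vni, dst_ts_addr, payload):
    """Overlay-module step: find the egress NVE for the destination TS
    and tag the frame with the VN context before tunneling it."""
    remote_nve = vni.lookup(dst_ts_addr)
    if remote_nve is None:
        raise KeyError("no mapping for tenant system %s" % dst_ts_addr)
    return {"outer_dst": remote_nve,
            "vn_context": vni.vn_context,
            "inner": payload}
```

For the tunnel between NVE1 and NVE2 in Figure 1, an entry such as `vni.update("10.0.0.2", "NVE2")` on NVE1 would steer frames for that TS into the Ta tunnel toward NVE2.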
   Individual virtual networks may each use their own address space,
   and that space is isolated from the DC infrastructure's. This
   eliminates route reconfiguration in the DC underlying network when
   VMs move. Note that NVO3 solutions still have to address VM moves
   in the overlay network, i.e. the TS/NVE association changes when a
   VM moves.

   If a virtual network spans multiple DC sites, one design is to
   allow the corresponding NVO3 instance to seamlessly span those
   sites without termination at DC gateway routers. In this case, the
   tunnel between a pair of NVEs may in turn be tunneled over other
   intermediate tunnels over the Internet or other WANs, or the intra-
   DC and inter-DC tunnels may be stitched together to form an end-to-
   end tunnel between the two NVEs in different DCs.

4. Interconnecting DC Virtual Network and External Networks

   Customers (enterprises or individuals) who want to utilize a DC
   provider's compute and storage resources to run their applications
   need to access their systems hosted in the DC through carrier WANs
   or the Internet. A DC provider may use an NVO3 virtual network for
   such customers to access their systems; this, in turn, becomes a
   case of interconnecting a DC virtual network and external networks.
   Two cases are described here.

4.1. DC Virtual Network Access via Internet

   A user or an enterprise customer connects to a DC virtual network
   securely via the Internet. Figure 2 illustrates this case. An L3
   virtual network is configured on NVE1 and NVE2, and the two NVEs
   are connected via an L3 tunnel in the Data Center. A set of tenant
   systems attach to NVE1. NVE2 connects to one (or more) TS that runs
   the VN gateway and NAT applications (known as a network service
   appliance). A user or customer can access their systems via the
   Internet by using an IPsec tunnel [RFC4301].
   The encrypted tunnel is established between the VN GW and the user
   machine or the CPE at the enterprise location. The VN GW provides
   the authentication scheme and encryption. Note that the VN GW
   function may be performed by a network service appliance device or
   on a DC GW.

                     +--------------+        +----------+
                     |  +------+    |        | Firewall |  TS
                +----+(OM)+L3 VNI+--+--------+   NAT    | (VN GW)
                |    |  +------+    |        +----+-----+
          L3 Tunnel  +--------------+             ^
                |          NVE2                   |IPsec Tunnel
       +--------+---------+                  .--.    .--.
       | +------+-------+ |                 (  :'       '.--.
       | |Overlay Module| |              .-.'  :             )
       | +------+-------+ |             (    Internet         )
       |  +-----+------+  |              (     :             /
       |  |   L3 VNI   |  |               '-'  :          '-'
  NVE1 |  +-+--------+-+  |                  \../+\.--/'
       +----+--------+----+                       |
            |   ...  |                            V
           Tenant Systems                    User Access

                        DC Provider Site

       OM: Overlay Module

         Figure 2 DC Virtual Network Access via Internet

4.2. DC Virtual Network and WAN VPN Interconnection

   A DC provider and a carrier may build a VN and a VPN independently
   and interconnect the VN and the VPN at the DC GW and the PE for an
   enterprise customer. Figure 3 depicts this case for an L3 overlay
   (an L2 overlay is the same). The DC provider constructs an L3 VN
   between NVE1 on a server and NVE2 on the DC GW within the DC site;
   the carrier constructs an L3VPN between PE1 and PE2 in its IP/MPLS
   network. An Ethernet interface physically connects the DC GW and
   PE2 devices. A local VLAN over the Ethernet interface [VRF-LITE] is
   configured to connect the L3VNI/NVE2 and the VRF, which realizes
   the interconnection between the L3 VN in the DC and the L3VPN in
   the IP/MPLS network. An Ethernet interface may also be used between
   PE1 and the CE to connect the L3VPN and the enterprise physical
   networks.

   This configuration allows the enterprise networks to communicate
   with the tenant systems attached to the L3 VN without interfering
   with the DC provider's underlying physical networks and other
   overlay networks in the DC.
   The enterprise may use its own address space on the tenant systems
   attached to the L3 VN. The DC provider can manage the VMs and
   storage attached to the L3 VN for the enterprise customer. The
   enterprise customer can determine and run its applications on the
   VMs. From the L3 VN perspective, an end point in the enterprise
   location appears as an end point associated with NVE2. NVE2 on the
   DC GW has to perform both the GRE tunnel termination [RFC4797] and
   the local VLAN termination, and forward the packets in between. The
   DC provider and the carrier negotiate the local VLAN ID used on the
   Ethernet interface.

   This configuration gives the L3VPN over the WAN reachability only
   to the TSs in the L3 VN. It does not have reachability to the DC
   physical networks or other VNs in the DC. However, the L3VPN does
   have reachability to the enterprise networks. Note that both the DC
   provider and the enterprise may have multiple network locations
   connecting to the L3VPN.

   The eBGP protocol can be used between the DC GW and PE2 for route
   population in between. In fact, this is like Option A in [RFC4364].
   This configuration can work with any NVO3 solution. eBGP, OSPF, or
   another routing protocol can be used between PE1 and the CE for
   route population.
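The limited reachability described above falls out of the Option A handoff: the DC GW only announces the tenant prefixes held in the L3 VNI over the per-VLAN eBGP session, never the DC underlay routes. A toy model (illustrative only; the prefixes and function names are invented, and real route filtering would be expressed in router policy, not Python):

```python
# Toy model of the RFC 4364 Option A handoff between the DC GW and PE2:
# only the L3 VNI's tenant prefixes are advertised on the VLAN
# subinterface, so the carrier's VRF learns reachability to the TSs in
# the L3 VN but not to the DC infrastructure. Values are hypothetical.

l3_vni_routes = {"10.1.0.0/24", "10.1.1.0/24"}  # tenant prefixes in the L3 VN
dc_underlay_routes = {"192.168.0.0/16"}         # DC infrastructure, never advertised

def ebgp_advertise(vni_routes):
    """Routes the DC GW announces to PE2 over the negotiated local VLAN."""
    return set(vni_routes)  # underlay routes are simply not in this table

# The VRF on PE2 ends up with tenant reachability only.
vrf_on_pe2 = ebgp_advertise(l3_vni_routes)
assert vrf_on_pe2 == l3_vni_routes
assert not (vrf_on_pe2 & dc_underlay_routes)  # no path to the DC underlay
```

The same separation is what lets the DC operator move VMs or re-home the L3 VN without touching the carrier's routing: the advertised tenant prefixes stay constant while the underlay mapping changes beneath them.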
           +-----------------+                 +-------------+
           |  +----------+   |                 |  +-------+  |
      NVE2 |  |  L3 VNI  +---+=================+-+  VRF  |   |
           |  +----+-----+   |      VLAN       |  +---+--+   | PE2
           |       |         |                 |      |      |
           | +-----+-------+ |                /+------+------+--\
           | |Overlay Module| |              (        :          '
           | +-------------+ |              {         :           }
           +--------+--------+              {         :  LSP Tunnel}
                    |                        ;        :           ;
                    |IP Tunnel              {   IP/MPLS Network  }
                    |                        \        :          /
           +--------+---------+               +-------+-----+   -
           | +------+-------+ |               |  +----+--+  |  '
           | |Overlay Module| |               |  |  VRF  |  |
           | +------+-------+ |               |  +----+--+  | PE1
           |        |Ta       |               |       |     |
           |  +-----+------+  |               +-------+-----+
           |  |   L3 VNI   |  |                       |
      NVE1 |  +-+--------+-+  |                       |
           |    |  VAPs  |    |                    CE Site
           +----+--------+----+
                |   ...  |                    Enterprise Site
                Tenant systems

                  DC Provider Site

    Figure 3 L3 VNI and L3VPN interconnection across multiple networks

   If an enterprise has only one location, it may use a P2P VPWS
   [RFC4664] or L2TP [RFC5641] to connect to the DC provider site. In
   this case, one edge connects to a physical network and the other
   edge connects to an overlay network.

   Various alternatives can be configured between the DC GW and the SP
   PE to achieve the same capability. Option B, C, or D in [RFC4364]
   can be used, and the characteristics of each option are described
   there.

   The interesting feature in this use case is that the L3 VN and the
   compute resources are managed by the DC provider. The DC operator
   can place them at any location without notifying the enterprise and
   the carrier, because the DC physical network is completely isolated
   from the carrier and enterprise networks. Furthermore, the DC
   operator may move the VMs assigned to the enterprise from one
   server to another in the DC without the enterprise customer's
   awareness, i.e. with no impact on the enterprise's 'live'
   applications running on these resources. Such advanced features
   bring some requirements for NVO3 [NVO3PRBM].

5. DC Applications Using NVO3

   NVO3 brings DC operators the flexibility to design different
   applications in a true virtual environment (or nearly so) without
   worrying about the physical network configuration in the Data
   Center. DC operators may build several virtual networks and
   interconnect them directly to form a tenant virtual network, and
   implement communication rules, i.e. policies, between different
   virtual networks; or they may allocate some VMs to run tenant
   applications and some to run network service applications such as
   Firewall and DNS for the tenant. Several use cases are given in
   this section.

5.1. Supporting Multiple Technologies in a Data Center

   Servers deployed in a large data center are most likely rolled in
   at different times and may have different capabilities/features.
   Some servers may be virtualized, some may not; some may be equipped
   with virtual switches, some may not. Among the ones equipped with
   hypervisor-based virtual switches, some may support VxLAN [VXLAN]
   encapsulation, some may support NVGRE encapsulation [NVGRE], and
   some may not support any type of encapsulation. To construct a
   tenant virtual network among these servers and the ToRs, one
   approach is to use two virtual networks and a gateway to allow the
   different implementations to work together. For example, one
   virtual network uses VxLAN encapsulation and the other virtual
   network uses traditional VLAN.

   The gateway entity, either on VMs or a standalone one, participates
   in both virtual networks, maps the services and identifiers, and
   changes the packet encapsulations.

5.2. Tenant Virtual Network with Bridging/Routing

   A tenant virtual network may span multiple Data Centers. The DC
   operator may want to use an L2VN within a DC and an L3VN outside
   DCs for a tenant network. This is very similar to today's DC
   physical network configuration.
   L2 bridging has simplicity and endpoint awareness, while L3 routing
   has advantages in policy-based routing, aggregation, and
   scalability. For this configuration, a virtual L2/L3 gateway
   function is necessary to interconnect the L2VN and L3VN in each DC.
   Figure 4 illustrates this configuration.

   Figure 4 depicts two DC sites. Site A constructs an L2VN that
   terminates on NVE1, NVE2, and GW1. An L3VN is configured between
   GW1 at site A and GW2 at site Z. An internal Virtual Network
   Interconnection Interface (VNIF) connects the L2VNI and L3VNI on
   GW1. Thus GW1 is a member of both the L2VN and the L3VN. The L2VNI
   is the MAC/NVE mapping table and the L3VNI is the IP prefix/NVE
   mapping table. Note that a VNI also has the mapping of TS and VAP
   at the local NVE. Site Z has a similar configuration. A packet
   coming to GW1 from the L2VN will be decapsulated and converted into
   an IP packet, and then encapsulated and sent to site Z. The gateway
   uses the ARP protocol to obtain the MAC/IP address mapping.

   Note that both the L2VN and the L3VN in the figure are carried by
   tunnels supported by the underlying networks, which are not shown
   in the figure.
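The gateway step just described, decapsulating a frame from the L2VN and forwarding its inner IP packet over the L3VN by an IP prefix/NVE lookup, can be sketched as below. This is an assumed illustration, not the draft's specification: the table contents and frame representation are invented, and a real L3VNI would do longest-prefix matching rather than a first-match scan.

```python
# Illustrative sketch of the L2VN-to-L3VN step on GW1 (Figure 4):
# strip the L2VN encapsulation, then use the L3VNI (IP prefix -> remote
# NVE/GW table) to pick the egress gateway for re-encapsulation.
# In the reverse direction, GW1 would use an ARP-learned IP -> MAC
# table to rebuild the L2 frame. All values here are hypothetical.
import ipaddress

l3vni = {ipaddress.ip_network("10.2.0.0/16"): "GW2"}  # prefixes behind site Z

def l2_to_l3_forward(l2_frame):
    """Decapsulate a frame from the L2VN and forward the inner IP
    packet toward the gateway owning the destination prefix."""
    ip_packet = l2_frame["inner_ip"]              # decapsulation step
    dst = ipaddress.ip_address(ip_packet["dst"])
    for prefix, gw in l3vni.items():              # longest-prefix match in practice
        if dst in prefix:
            return {"outer_dst": gw, "inner": ip_packet}
    raise KeyError("no L3VNI route for %s" % dst)
```

A frame from a site A tenant system destined to 10.2.0.5 would thus be decapsulated at GW1 and re-encapsulated toward GW2 over the L3VN tunnel.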
           +------------+                            +------------+
        GW1|  +-----+   |   ''''''''''''''''    |   +-----+  |GW2
           |  |L3VNI+---+--'      L3VN      '---+---+L3VNI|  |
           |  +--+--+   |   ''''''''''''''''    |   +--+--+  |
           |     |VNIF  |                       |  VNIF|     |
           |  +--+--+   |                       |   +--+--+  |
           |  |L2VNI|   |                       |   |L2VNI|  |
           |  +--+--+   |                       |   +--+--+  |
           +----+-------+                       +------+-----+
           ''''|''''''''''                      ''''''|'''''''
          '    L2VN      '                     '    L2VN     '
     NVE1  ''/'''''''''\''  NVE2          NVE3  '''/'''''''\''  NVE4
    +-----+---+     +----+----+          +------+--+     +----+----+
    | +--+--+ |     | +--+--+ |          | +---+-+ |     | +--+--+ |
    | |L2VNI| |     | |L2VNI| |          | |L2VNI| |     | |L2VNI| |
    | ++---++ |     | ++---++ |          | ++---++ |     | ++---++ |
    +--+---+--+     +--+---+--+          +--+---+--+     +--+---+--+
      |...|           |...|                |...|           |...|

          Tenant Systems                       Tenant Systems

            DC Site A                            DC Site Z

         Figure 4 Tenant Virtual Network with Bridging/Routing

5.3. Virtual Data Center (VDC)

   Enterprise DCs today often use several routers, switches, and
   service devices to construct their internal networks, DMZ, and
   external network access. A DC provider may offer a virtual DC to an
   enterprise customer to run enterprise applications such as
   websites/email. Instead of using many hardware devices, with the
   overlay and virtualization technology of NVO3, DC operators can
   build them on top of a common network infrastructure for many
   customers and run the service applications on a per-customer basis.
   The service applications may include firewall, gateway, DNS, load
   balancer, NAT, etc.

   Figure 5 below illustrates this scenario. For simplicity of
   illustration, it only shows the L3VN and L2VNs as virtual and
   overlay routers or switches. In this case, DC operators construct
   several L2 VNs (L2VNx, L2VNy, L2VNz in Figure 5) to group the end
   tenant systems together on a per-application basis, and create an
   L3VNa for the internal routing. A server or VM runs
   firewall/gateway applications and connects to the L3VNa and the
   Internet.
   A VPN tunnel is also built between the gateway and the enterprise
   router. The design runs the enterprise Web/Mail/Voice applications
   at the provider DC site; lets the users at the enterprise site
   access the applications via the VPN tunnel, and access the Internet
   via a gateway at the enterprise site; and lets Internet users
   access the applications via the gateway in the provider DC. The
   enterprise operators can also use the VPN tunnel or IPsec over the
   Internet to access the vDC for management purposes. The
   firewall/gateway provides application-level and packet-level
   gateway functions and/or a NAT function.

   The enterprise customer decides which applications are accessed by
   the intranet only and which by both the intranet and extranet; DC
   operators then design and configure the proper security policies
   and gateway functions. DC operators may further set different QoS
   levels for the different applications of a customer.

   This application requires the NVO3 solution to provide the DC
   operator an easy way to create NVEs and VNIs for any design, to
   quickly assign TSs to a VNI, to easily place and configure policies
   on an NVE, and to support VM mobility.

          Internet                          ^ Internet
                                            |
             ^                           +--+---+
             |                           |  GW  |
             |                           +--+---+
             |                              |
    +--------+--------+                  +--+---+
    |FireWall/Gateway +---VPN Tunnel-----+Router|
    +--------+--------+                  +-+--+-+
             |                             |  |
          ...+...                          |..|
   +-----: L3VNa :--------+                LANs
   |      .......         |
   |         |            |           Enterprise Site
...+...   ...+...      ...+...
: L2VNx : : L2VNy :  : L2VNz :
 .......   .......    .......
  |..|      |..|       |..|
  |  |      |  |       |  |
 Web Apps  Mail Apps  VoIP Apps

        Provider DC Site

   * firewall/gateway may run on a server or VMs

       Figure 5 Virtual Data Center by Using NVO3

5.4. Federating NVO3 Domains

   Two general cases are: 1) federating ASes managed by a single
   operator; 2) federating ASes managed by different operators. The
   details will be described in the next version of this document.

6. OAM Considerations

   NVO3 brings the ability for a DC provider to segregate tenant
   traffic. A DC provider needs to manage and maintain NVO3 instances.
   Similarly, a tenant needs to be informed about tunnel failures
   impacting its applications.

   Various OAM and SOAM tools and procedures are defined in [IEEE
   802.1ag], [ITU-T G.8013/Y.1731], [RFC4378], [RFC5880], and [ITU-T
   Y.1564] for L2 and L3 networks, and for users, including continuity
   check, loopback, link trace, testing, alarms such as AIS/RDI, and
   on-demand and periodic measurements. These procedures may apply to
   tenant overlay networks and tenants not only for proactive
   maintenance but also to ensure support of Service Level Agreements
   (SLAs).

   As a tunnel traverses different networks, OAM messages need to be
   translated at the edge of each network to ensure end-to-end OAM.

   It is important that failures at lower layers which do not affect
   an NVO3 instance be suppressed.

7. Summary

   This document describes some basic potential use cases of NVO3. The
   combination of these cases should give operators the flexibility
   and capability to design more sophisticated cases for various
   purposes.

   The key requirements for NVO3 are: 1) traffic segregation; 2)
   supporting a large number of virtual networks on a common
   infrastructure; 3) supporting highly distributed virtual networks
   with sparse membership; 4) VM mobility; 5) automatic or easy
   construction of an NVE and its associated TSs; 6) security; and 7)
   NVO3 management [NVO3PRBM].

   The difference between other overlay network technologies and NVO3
   is that the client edges of an NVO3 network are individual
   virtualized hosts, not network sites or LANs. NVO3 enables these
   virtual hosts to communicate in a true virtual environment without
   constraints in physical networks.
   NVO3 allows individual tenant virtual networks to use their own
   address spaces and isolates those spaces from the network
   infrastructure. This approach not only segregates the traffic of
   multiple tenants on a common infrastructure but also makes VM
   placement and movement easier.

   DC services may vary from infrastructure as a service (IaaS) and
   platform as a service (PaaS) to software as a service (SaaS), in
   which the network virtual overlay is just a portion of an
   application service. NVO3 decouples the services from the DC
   network infrastructure configuration.

   NVO3's underlying network provides the tunneling between NVEs so
   that two NVEs appear as one hop to each other. Many tunneling
   technologies can serve this function. The tunnel may in turn be
   tunneled over other intermediate tunnels over the Internet or other
   WANs. It is also possible that intra-DC and inter-DC tunnels are
   stitched together to form an end-to-end tunnel between two NVEs.

   A DC virtual network may be accessed via an external network in a
   secure way. Many existing technologies can help achieve this.

8. Security Considerations

   Security is a concern. DC operators need to provide each tenant
   with a secure virtual network, which means that one tenant's
   traffic is isolated from other tenants' traffic and from non-tenant
   traffic. They also need to prevent any tenant application from
   attacking the DC underlying network through the tenant virtual
   network, and to prevent one tenant application from attacking
   another tenant application via DC networks. For example, a tenant
   application may attempt to generate a large volume of traffic to
   overload the DC underlying network. The NVO3 solution has to
   address these issues.

9. IANA Considerations

   This document does not request any action from IANA.

10. Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, Randy Bush, and Uma
   Chunduri for their review, comments, and suggestions.

11. References

11.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", RFC 4364, February 2006.

   [IEEE 802.1ag]  "Virtual Bridged Local Area Networks - Amendment 5:
              Connectivity Fault Management", December 2007.

   [ITU-T G.8013/Y.1731]  "OAM Functions and Mechanisms for Ethernet
              based Networks", 2011.

   [ITU-T Y.1564]  "Ethernet service activation test methodology",
              2011.

   [RFC4378]  Allan, D. and T. Nadeau, "A Framework for Multi-Protocol
              Label Switching (MPLS) Operations and Management (OAM)",
              RFC 4378, February 2006.

   [RFC4301]  Kent, S., "Security Architecture for the Internet
              Protocol", RFC 4301, December 2005.

   [RFC4664]  Andersson, L., "Framework for Layer 2 Virtual Private
              Networks (L2VPNs)", RFC 4664, September 2006.

   [RFC4797]  Rekhter, Y., et al., "Use of Provider Edge to Provider
              Edge (PE-PE) Generic Routing Encapsulation (GRE) or IP
              in BGP/MPLS IP Virtual Private Networks", RFC 4797,
              January 2007.

   [RFC5641]  McGill, N., "Layer 2 Tunneling Protocol Version 3
              (L2TPv3) Extended Circuit Status Values", RFC 5641,
              April 2009.

   [RFC5880]  Katz, D. and D. Ward, "Bidirectional Forwarding
              Detection (BFD)", RFC 5880, June 2010.

11.2. Informative References

   [NVGRE]    Sridharan, M., et al., "NVGRE: Network Virtualization
              using Generic Routing Encapsulation", draft-sridharan-
              virtualization-nvgre-01, work in progress.

   [NVO3PRBM] Narten, T., et al., "Problem Statement: Overlays for
              Network Virtualization", draft-ietf-nvo3-overlay-
              problem-statement-02, work in progress.
   [NVO3FRWK] Lasserre, M., Morin, T., et al., "Framework for DC
              Network Virtualization", draft-ietf-nvo3-framework-02,
              work in progress.

   [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com

   [VXLAN]    Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework
              for Overlaying Virtualized Layer 2 Networks over Layer 3
              Networks", draft-mahalingam-dutt-dcops-vxlan-02, work in
              progress.

Authors' Addresses

   Lucy Yong
   Huawei Technologies
   4320 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5837
   Email: lucy.yong@huawei.com

   Mehmet Toy
   Comcast
   1800 Bishops Gate Blvd.
   Mount Laurel, NJ 08054

   Phone: +1-856-792-2801
   E-mail: mehmet_toy@cable.comcast.com

   Aldrin Isaac
   Bloomberg
   E-mail: aldrin.isaac@gmail.com

   Vishwas Manral
   Hewlett-Packard Corp.
   191111 Pruneridge Ave.
   Cupertino, CA 95014

   Phone: 408-447-1497
   Email: vishwas.manral@hp.com

   Linda Dunbar
   Huawei Technologies
   4320 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com