Network Working Group                                          L. Yong
Internet Draft                                                  Huawei
Category: Informational                                         M. Toy
                                                                Comcast
                                                               A. Isaac
                                                              Bloomberg
                                                              V. Manral
                                                        Hewlett-Packard
                                                              L. Dunbar
                                                                 Huawei

Expires: April 2013                                    October 22, 2012

             Use Cases for DC Network Virtualization Overlays

                        draft-mity-nvo3-use-case-04

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire in April 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.

Abstract

   This draft describes general NVO3 use cases. The intention is to
   help validate the NVO3 framework and requirements along with the
   development of the solutions.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1. Introduction
   2. Terminology
   3. Basic Virtual Networks in a Data Center
   4. Interconnecting DC Virtual Network and External Networks
      4.1. DC Virtual Network Access via the Internet
      4.2. DC Virtual Network and WAN VPN Interconnection
   5. DC Applications Using NVO3
      5.1. Supporting Multiple Technologies in a Data Center
      5.2. Tenant Virtual Network with Bridging/Routing
      5.3. Virtual Data Center (VDC)
      5.4. Federating NVO3 Domains
   6. OAM Considerations
   7. Summary
   8. Security Considerations
   9. IANA Considerations
   10. Acknowledgements
   11. References
      11.1. Normative References
      11.2. Informative References
   Authors' Addresses

1. Introduction

   Compute virtualization has dramatically and quickly changed the IT
   industry in terms of efficiency, cost, and the speed of providing
   new applications and/or services. However, problems in today's data
   centers hinder the support of elastic cloud services and dynamic
   virtual tenant networks [NVO3PRBM]. The goal of DC Network
   Virtualization Overlays, i.e. NVO3, is to decouple the communication
   among tenant end systems (VMs) from the DC physical networks and to
   allow the network infrastructure to provide: 1) traffic isolation
   between one virtual network and another; 2) an independent address
   space in each virtual network, isolated from the infrastructure's;
   3) flexible VM placement and movement from one server to another
   without any physical network limitation. These characteristics will
   help address the issues in today's data centers.

   Although NVO3 may enable a true virtual environment where VMs and
   network service appliances communicate, the NVO3 solution has to
   address how a virtual network communicates with a physical network.
   This is because 1) many traditional DCs exist and will not disappear
   any time soon; 2) many DC applications serve Internet and/or
   corporate users; 3) some applications, such as CPU-bound Big Data
   analytics, may not want the virtualization capability.

   This document describes general NVO3 use cases that apply to
   various data center networks, to ensure that the NVO3 framework and
   solutions can meet their demands.
   The three types of use cases are:

   o  A virtual network connects many tenant end systems within a Data
      Center and forms one L2 or L3 communication domain. A virtual
      network segregates its traffic from others and allows the VMs in
      the network to move from one server to another. This case may be
      used for DC internal applications that constitute the DC East-
      West traffic.

   o  A DC provider offers a secure DC service to an enterprise
      customer and/or Internet users. In these cases, the enterprise
      customer may use a traditional VPN provided by a carrier, or an
      IPsec tunnel over the Internet, to connect to an overlay virtual
      network offered by a Data Center provider. This mainly
      constitutes DC North-South traffic.

   o  A DC provider uses NVO3 to design a variety of DC applications
      that make use of network service appliances, virtual compute,
      storage, and networking. In this case, NVO3 provides the virtual
      networking functions for the applications.

   The document uses the architecture reference model and terminology
   defined in [NVO3FRWK] to describe the use cases.

2. Terminology

   This document uses the terminology defined in [NVO3FRWK] and
   [RFC4364]. Some additional terms used in the document are listed
   here.

   CUG:    Closed User Group

   L2 VNI: L2 Virtual Network Instance

   L3 VNI: L3 Virtual Network Instance

   ARP:    Address Resolution Protocol

   CPE:    Customer Premise Equipment

   DNS:    Domain Name Service

   DMZ:    DeMilitarized Zone

   NAT:    Network Address Translation

   VNIF:   Internal Virtual Network Interconnection Interface

3. Basic Virtual Networks in a Data Center

   A virtual network may exist within a DC. The network enables
   communication among tenant end systems (TESs) that are in a Closed
   User Group (CUG). A TES may be a physical server or a virtual
   machine (VM) on a server. A virtual network has a unique virtual
   network identifier (which may be locally or globally unique) so
   that switches/routers can properly differentiate it from other
   virtual networks. The CUGs are formed so that proper policies can
   be applied when the TESs in one CUG communicate with the TESs in
   other CUGs.

   Figure 1 depicts this case using the framework model of [NVO3FRWK].
   NVE1 and NVE2 are two network virtual edges, each of which may
   reside on a server or a ToR switch. Each NVE may be a member of one
   or more virtual networks. Each virtual network may be on an L2 or
   L3 basis. In this illustration, three virtual networks with VN
   contexts Ta, Tn, and Tm are shown. The VN 'Ta' terminates on both
   NVE1 and NVE2; the VN 'Tn' terminates on NVE1 only and the VN 'Tm'
   on NVE2 only. If an NVE is a member of a VN, one or more virtual
   network instances (VNIs) (i.e. routing and forwarding tables) exist
   on the NVE. Each NVE has one overlay module to perform frame
   encapsulation/decapsulation and tunnel initiation/termination. In
   this scenario, a tunnel between NVE1 and NVE2 is necessary for the
   virtual network Ta.

   A TES attaches to a virtual network (VN) via a virtual access point
   (VAP) on an NVE. One TES may participate in one or more virtual
   networks via VAPs; one NVE may be configured with multiple VAPs for
   a VN. Furthermore, if individual virtual networks use different
   address spaces, a TES participating in all of them will be
   configured with multiple addresses as well; a TES acting as a
   gateway is an example of this. In addition, multiple TESs may use
   one VAP to attach to a VN. For example, when VMs are on a server
   and the NVE is on the ToR switch, several VMs may attach to the NVE
   via one VLAN.
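
   The following non-normative sketch (in Python, with hypothetical
   names) illustrates the attachment model described above: an NVE
   residing on a ToR switch classifies a frame received on a server-
   facing port to a VAP by its (port, VLAN ID) pair, and each VAP
   points to the VNI of the virtual network its TESs belong to. It is
   only an illustration of the data model, not part of any NVO3
   solution.

      # Hypothetical illustration of VAP-based attachment on an NVE.
      # A VAP is identified here by the (server-facing port, VLAN ID)
      # on which TES traffic arrives; several TESs (VMs) behind the
      # same port/VLAN share one VAP and therefore one VNI.

      from dataclasses import dataclass
      from typing import Dict, Optional

      @dataclass(frozen=True)
      class Vap:
          port: str      # server-facing port on the ToR-based NVE
          vlan_id: int   # local VLAN carrying the TES traffic

      class Nve:
          def __init__(self) -> None:
              # VAP -> virtual network identifier (VN context)
              self.vap_to_vni: Dict[Vap, int] = {}

          def add_vap(self, port: str, vlan_id: int, vni: int) -> None:
              self.vap_to_vni[Vap(port, vlan_id)] = vni

          def classify(self, port: str, vlan_id: int) -> Optional[int]:
              # Returns the VNI a received frame belongs to, or None
              # if the port/VLAN is not attached to any virtual network.
              return self.vap_to_vni.get(Vap(port, vlan_id))

      # Example: several VMs behind port "eth1", VLAN 100, attach to
      # one virtual network (VN context 10) through a single VAP.
      nve1 = Nve()
      nve1.add_vap("eth1", 100, vni=10)
      assert nve1.classify("eth1", 100) == 10
      assert nve1.classify("eth1", 200) is None
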
   A VNI on an NVE is a routing and forwarding table that caches
   and/or maintains the mapping between a tenant end system and its
   attached NVE. The table entries may be updated by the control
   plane, data plane, or management plane. It is possible for an NVE
   to have more than one VNI associated with a VN.

                  +------- L3 Network ------+
                  |     Tunnel Overlay      |
     +------------+--------+       +--------+-------------+
     | +----------+------+ |       | +------+----------+  |
     | | Overlay Module  | |       | | Overlay Module  |  |
     | +---+---------+---+ |       | +--+----------+---+  |
     |     |Ta       |Tn   |       |    |Ta        |Tm    |
     |  +--+---+  +--+---+ |       |  +-+----+  +--+---+  |
     |  | VNIa |..| VNIn | |       |  | VNIa |..| VNIm |  |
NVE1 |  ++----++  ++----++ |       |  ++----++  ++----++  | NVE2
     |   |VAPs|    |VAPs|  |       |   |VAPs|    |VAPs|   |
     +---+----+----+----+--+       +---+----+----+----+---+
         |    |    |    |              |    |    |    |
   ------+----+----+----+------   -----+----+----+----+-----
         | .. |    | .. |              | .. |    | .. |
         |    |    |    |              |    |    |    |
         Tenant End Systems            Tenant End Systems

        Figure 1 NVO3 for Tenant End-System Interconnection

   One virtual network may have many NVE members and interconnect
   several thousand TESs (as a matter of policy), so the capability of
   supporting many TESs per tenant instance, as well as TES mobility,
   is critical for an NVO3 solution no matter where an NVE resides.

   It is worth mentioning two distinct cases here. The first is when a
   TES and its NVE are co-located on the same physical device, which
   means that the NVE is aware of the TES state at any time via an
   internal API. The second is when the TES and the NVE are remotely
   connected, i.e. connected via a switched network or a point-to-
   point link. In this case, a protocol is necessary for the NVE to
   know the TES state.

   Note that if all NVEs are co-located with the TESs in a CUG, the
   communication in the CUG is in a true virtual environment. If a TES
   connects to an NVE remotely, the communication from this TES to
   other TESs in the CUG is not in a true virtual environment: the
   packets to/from this TES are exposed to a physical network
   directly, i.e. on a wire.

   Each virtual network may use its own address space, and that space
   is isolated from the DC infrastructure. This eliminates route
   changes in the DC underlying network when VMs move. Note that NVO3
   solutions still have to address VM moves in the overlay network,
   i.e. the TES/NVE association change when a VM moves.

   If a virtual network spans multiple DC sites, one design is to
   allow the corresponding NVO3 instance to seamlessly span those
   sites without termination at the DC gateway routers. In this case,
   the tunnel between a pair of NVEs may in turn be tunneled over
   other intermediate tunnels across the Internet or other WANs, or
   the intra-DC and inter-DC tunnels may be stitched together to form
   an end-to-end tunnel between the two NVEs.
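
   As a non-normative illustration of the VNI described above, the
   following Python sketch (with hypothetical names and documentation
   addresses) shows a per-VN table that maps each tenant end system to
   its attached NVE, and the forwarding decision the overlay module
   would make with it. The encapsulation details are intentionally
   left abstract.

      # Hypothetical sketch of the per-VN forwarding state on an NVE:
      # the VNI maps each tenant end system (keyed here by its MAC
      # address) to the NVE it is attached to. The overlay module
      # consults this table to decide whether a frame is delivered to
      # a local VAP or tunneled to a remote NVE.

      from typing import Dict, Optional

      LOCAL = "local"          # marker for TESs attached to this NVE

      class Vni:
          def __init__(self, vn_context: int) -> None:
              self.vn_context = vn_context
              # tenant MAC address -> attached NVE ("local" or the
              # underlay IP address of the remote NVE)
              self.tes_to_nve: Dict[str, str] = {}

          def learn(self, tes_mac: str, nve: str) -> None:
              # Entries may be installed by the control, data, or
              # management plane; a VM move simply updates the entry.
              self.tes_to_nve[tes_mac] = nve

          def lookup(self, tes_mac: str) -> Optional[str]:
              return self.tes_to_nve.get(tes_mac)

      def forward(vni: Vni, dst_mac: str, frame: bytes) -> str:
          nve = vni.lookup(dst_mac)
          if nve is None:
              return "flood or query the control plane"
          if nve == LOCAL:
              return "deliver via the local VAP"
          # Tunnel to the remote NVE; the encapsulation carries the
          # VN context so the egress NVE can pick the right VNI.
          return ("encapsulate with VN context "
                  f"{vni.vn_context} and tunnel to {nve}")

      # Example: VN 'Ta' spans NVE1 and NVE2 as in Figure 1; a VM
      # behind NVE2 is reached through the tunnel between the NVEs.
      vni_a = Vni(vn_context=10)
      vni_a.learn("00:00:5e:00:53:01", LOCAL)
      vni_a.learn("00:00:5e:00:53:02", "192.0.2.2")   # NVE2
      print(forward(vni_a, "00:00:5e:00:53:02", b""))
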
4. Interconnecting DC Virtual Network and External Networks

   Customers (enterprises or individuals) who want to utilize the DC
   provider's compute and storage resources to run their applications
   need to access those end systems hosted in a DC through carrier
   WANs or the Internet. A DC provider may want to use an NVO3 virtual
   network to connect these end systems; this, in turn, becomes a case
   of interconnecting a DC virtual network and external networks. Two
   cases are described here.

4.1. DC Virtual Network Access via the Internet

   A user or an enterprise customer may want to connect to a DC
   virtual network via the Internet in a secure way. Figure 2
   illustrates this case.

   An L3 virtual network is configured on NVE1 and NVE2, and the two
   NVEs are connected via an L3 tunnel in the Data Center. A set of
   tenant end systems attach to NVE1. NVE2 connects to one (or more)
   TES that runs the VN gateway and NAT applications (known as network
   service appliances). A user or customer can access the VN via the
   Internet by using an IPsec tunnel [RFC4301]. The encrypted tunnel
   is established between the VN GW and the user machine or the CPE at
   the enterprise location. The VN GW provides authentication and
   encryption. Note that the VN GW function may be performed by a
   network service appliance or on a DC GW.

                        +--------------+
                        |    +------+  |   +----------+
                   +----+(OM)+L3 VNI+--+---+ Firewall |  TES
                   |    |    +------+  |   +----+-----+ (VN GW)
         L3 Tunnel      +--------------+        ^
                   |         NVE2               |IPsec Tunnel
          +--------+---------+               .--.   .--.
          | +------+-------+ |             (    :'    '.--.
          | |Overlay Module| |          .-.'    :          )
          | +------+-------+ |         (      Internet      )
          |  +-----+------+  |          (       :         /
          |  |   L3 VNI   |  |           '-'    :      '-'
    NVE1  |  +-+--------+-+  |              \../+\.--/'
          +----+--------+----+                  |
               |  ...   |                       V
           Tenant End Systems              User Access

             DC Provider Site

   OM: Overlay Module

        Figure 2 DC Virtual Network Access via the Internet

4.2. DC Virtual Network and WAN VPN Interconnection

   A DC provider and a carrier may build a VN and a VPN independently
   and interconnect the two at the DC GW and the PE for an enterprise
   customer. Figure 3 depicts this case for an L3 overlay (an L2
   overlay is the same). The DC provider constructs an L3 VN between
   NVE1 on a server and NVE2 on the DC GW in the DC site; the carrier
   constructs an L3VPN between PE1 and PE2 in its IP/MPLS network. An
   Ethernet interface physically connects the DC GW and PE2 devices. A
   local VLAN over the Ethernet interface [VRF-LITE] is configured to
   connect the L3VNI/NVE2 and the VRF, which interconnects the L3 VN
   in the DC and the L3VPN in the IP/MPLS network. An Ethernet
   interface may also be used between PE1 and the CE to connect the
   L3VPN and the enterprise physical networks.

   This configuration allows the enterprise networks to communicate
   with the L3 VN as if it were one of their own networks, while
   communicating neither with the DC provider's underlying physical
   networks nor with other overlay networks in the DC. The enterprise
   may use its own address space on the L3 VN. The DC provider can
   manage the VM and storage assignment to the L3 VN for the
   enterprise customer. The enterprise customer can determine and run
   its own applications on the VMs. From the L3 VN perspective, an end
   point at the enterprise location appears as an end point associated
   with NVE2. NVE2 on the DC GW has to perform both the GRE tunnel
   termination [RFC4797] and the local VLAN termination, and forward
   the packets between them. The DC provider and the carrier negotiate
   the local VLAN ID used on the Ethernet interface.

   This configuration gives the L3VPN over the WAN reachability only
   to the TESs in the L3 VN. It does not have reachability to the DC
   physical networks or to other VNs in the DC. However, the L3VPN
   does have reachability to the enterprise networks. Note that both
   the DC provider and the enterprise may have multiple network
   locations connecting to the L3VPN.
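
   The hand-off that NVE2 performs on the DC GW between the GRE tunnel
   and the local VLAN can be illustrated with the following
   non-normative Python sketch. The MAC addresses and the VLAN ID are
   examples only, and the GRE encapsulation/decapsulation itself is
   assumed to be handled elsewhere.

      # Hypothetical sketch of the hand-off performed by NVE2 on the
      # DC GW in Figure 3: a tenant IP packet decapsulated from the
      # GRE tunnel [RFC4797] is forwarded to the PE as an Ethernet
      # frame tagged with the locally negotiated VLAN ID, and vice
      # versa.

      import struct

      TPID_8021Q = 0x8100
      ETHERTYPE_IPV4 = 0x0800

      def to_vpn(inner_ip_packet: bytes, vlan_id: int,
                 gw_mac: bytes, pe_mac: bytes) -> bytes:
          """Wrap a decapsulated tenant IP packet in an 802.1Q-tagged
          Ethernet frame on the DC GW - PE2 interface."""
          tci = vlan_id & 0x0FFF          # PCP/DEI left at zero
          header = pe_mac + gw_mac + struct.pack(
              "!HHH", TPID_8021Q, tci, ETHERTYPE_IPV4)
          return header + inner_ip_packet

      def from_vpn(frame: bytes, expected_vlan: int) -> bytes:
          """Strip the 802.1Q tag from a frame received from the PE;
          the returned IP packet would then be GRE-encapsulated and
          tunneled to the NVE serving the destination TES."""
          tpid, tci, ethertype = struct.unpack("!HHH", frame[12:18])
          if tpid != TPID_8021Q or (tci & 0x0FFF) != expected_vlan:
              raise ValueError("frame does not belong to this L3 VN")
          if ethertype != ETHERTYPE_IPV4:
              raise ValueError("not an IPv4 packet")
          return frame[18:]

      # Example round trip with a dummy payload and VLAN 100.
      gw = bytes.fromhex("020000000001")
      pe = bytes.fromhex("020000000002")
      pkt = b"\x45" + b"\x00" * 19          # placeholder IPv4 header
      assert from_vpn(to_vpn(pkt, 100, gw, pe), 100) == pkt
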
   eBGP can be used between the DC GW and PE2 to populate routes in
   both directions. In fact, this is similar to the inter-AS Option A
   described in [RFC4364]. This configuration can work with any NVO3
   solution. eBGP, OSPF, or another routing protocol can be used
   between PE1 and the CE for route population.

          +-------------------+           +-------------+
          |   +----------+    |           |  +-------+  |
   NVE2   |   | L3 VNI   +----+===========+--+  VRF  |  |
          |   +----+-----+    |   VLAN    |  +---+---+  |  PE2
          |        |          |           |      |      |
          |  +-----+--------+ |          /+------+------+--\
          |  |Overlay Module| |         (        :           '
          |  +--------------+ |        {         :            }
          +--------+----------+       {          :  LSP Tunnel  }
                   |                  ;          :              ;
                   |GRE Tunnel        {    IP/MPLS Network     }
                   |                    \        :            /
          +--------+---------+            +----+------+      -
          | +------+-------+ |            | +--+---+   |    '
          | |Overlay Module| |            | | VRF  |   |
          | +------+-------+ |            | +--+---+   |   PE1
          |        |Ta       |            |    |       |
          |  +-----+------+  |            +----+------+
          |  |   L3 VNI   |  |                 |
   NVE1   |  +-+--------+-+  |                 |
          |    |  VAPs  |    |              CE Site
          +----+--------+----+
               |  ...   |                Enterprise Site
           Tenant End Systems

             DC Provider Site

   Figure 3 L3 VNI and L3VPN Interconnection across Multiple Networks

   If an enterprise has only one location, it may use a P2P VPWS
   [RFC4664] or an L2TP tunnel [RFC5641] to connect to one DC provider
   site. In this case, one edge connects to a physical network and the
   other edge connects to an overlay network.

   The interesting feature of this use case is that the L3 VN and the
   compute resources are managed by the DC provider. The DC operator
   can place them at any location without notifying the enterprise and
   the carrier, because the DC physical network is completely isolated
   from the carrier and enterprise networks. Furthermore, the DC
   operator may move the compute resources assigned to the enterprise
   from one server to another in the DC without the enterprise
   customer's awareness, i.e. with no impact on the enterprise's
   'live' applications running on these resources. Such an advanced
   feature brings some requirements for NVO3 [NVO3PRBM].

5. DC Applications Using NVO3

   NVO3 gives DC operators the flexibility to design different
   applications in a true virtual environment without worrying about
   the physical network configuration in the Data Center. DC operators
   may build several virtual networks and interconnect them directly
   to form a tenant virtual network, implementing the communication
   rules through policy; or they may allocate some VMs to run tenant
   applications and some to run network service applications such as a
   Firewall or DNS for the tenant. Several use cases are given in this
   section.

5.1. Supporting Multiple Technologies in a Data Center

   Servers deployed in a large data center are most likely rolled in
   at different times and may have different capacities/features. Some
   servers may be virtualized, some may not; some may be equipped with
   virtual switches, some may not. Of the ones equipped with
   hypervisor-based virtual switches, some may support VxLAN [VXLAN]
   encapsulation, some may support NVGRE encapsulation [NVGRE], and
   some may not support any type of encapsulation. To construct a
   tenant virtual network among these servers and the ToR switches,
   the operator may use two virtual networks and a gateway to allow
   the different implementations to work together. For example, one
   virtual network may use VxLAN encapsulation and the other may use a
   traditional VLAN.

   The gateway entity, whether it runs on VMs or on a standalone
   device, participates in both virtual networks. It maps the services
   and identifiers and changes the packet encapsulations.
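
   The following non-normative Python sketch illustrates the kind of
   translation such a gateway performs between a VXLAN-based virtual
   network and a traditional VLAN. It only handles the 8-octet VXLAN
   header [VXLAN] and the 802.1Q tag; the outer UDP/IP/Ethernet
   headers and any real switching logic are omitted, and the VNI/VLAN
   values are examples.

      # Hypothetical sketch of a VXLAN <-> VLAN gateway: frames from
      # the VXLAN-based virtual network are re-emitted on the
      # traditional VLAN and vice versa, using a configured mapping
      # between the 24-bit VNI and the 12-bit VLAN ID.

      import struct
      from typing import Tuple

      VNI_TO_VLAN = {10010: 100}                     # example mapping
      VLAN_TO_VNI = {v: k for k, v in VNI_TO_VLAN.items()}

      def vxlan_decap(payload: bytes) -> Tuple[int, bytes]:
          """Return (VNI, inner Ethernet frame) from a VXLAN payload."""
          flags, vni_res = struct.unpack("!II", payload[:8])
          if not flags & 0x08000000:                 # I flag must be set
              raise ValueError("VNI not present")
          return vni_res >> 8, payload[8:]

      def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
          return struct.pack("!II", 0x08000000, vni << 8) + inner_frame

      def vxlan_to_vlan(payload: bytes) -> bytes:
          """Translate a VXLAN payload into an 802.1Q-tagged frame."""
          vni, frame = vxlan_decap(payload)
          vlan = VNI_TO_VLAN[vni]
          dst, src, rest = frame[:6], frame[6:12], frame[12:]
          return dst + src + struct.pack("!HH", 0x8100, vlan) + rest

      def vlan_to_vxlan(frame: bytes) -> bytes:
          """Translate an 802.1Q-tagged frame into a VXLAN payload."""
          tpid, tci = struct.unpack("!HH", frame[12:16])
          assert tpid == 0x8100
          vni = VLAN_TO_VNI[tci & 0x0FFF]
          untagged = frame[:12] + frame[16:]
          return vxlan_encap(vni, untagged)
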
5.2. Tenant Virtual Network with Bridging/Routing

   A tenant virtual network may span multiple Data Centers. A DC
   operator may want to use an L2VN within each DC and an L3VN between
   DCs for a tenant. This is very similar to today's physical DC
   network configuration. L2 bridging has simplicity and endpoint
   awareness, while L3 routing has advantages in aggregation and
   scalability. For this configuration, a virtual gateway function is
   necessary to interconnect the L2VN and the L3VN in each DC. Figure
   4 illustrates this configuration.

   Figure 4 depicts two DC sites. Site A constructs an L2VN that
   terminates on NVE1, NVE2, and GW1. An L3VN is configured between
   GW1 at site A and GW2 at site Z. An internal Virtual Network
   Interconnection Interface (VNIF) connects the L2VNI and the L3VNI
   on GW1. Thus GW1 is a member of both the L2VN and the L3VN. The
   L2VNI is a MAC/NVE mapping table and the L3VNI is an IP prefix/NVE
   mapping table. Note that a VNI also holds the mapping of TESs and
   VAPs at the local NVE. Site Z has a similar configuration. A packet
   arriving at GW1 from the L2VN is decapsulated, converted into an IP
   packet, and then encapsulated and sent to site Z. The gateway uses
   the ARP protocol to obtain the MAC/IP mapping. Note that both the
   L2VNs and the L3VN in the figure are carried by tunnels supported
   by the underlying networks, which are not shown in the figure.

         +------------+                      +------------+
      GW1| +-----+    |  ''''''''''''''''''  |    +-----+ |GW2
         | |L3VNI+----+--'      L3VN      '--+----+L3VNI| |
         | +--+--+    |  ''''''''''''''''''  |    +--+--+ |
         |    |VNIF   |                      |   VNIF|    |
         | +--+--+    |                      |    +--+--+ |
         | |L2VNI|    |                      |    |L2VNI| |
         | +--+--+    |                      |    +--+--+ |
         +----+-------+                      +-------+----+
     '''''''''|'''''''''                '''''''''|'''''''''
    '        L2VN       '              '        L2VN       '
NVE1 '/''''''''''''\''  NVE2     NVE3   ''/''''''''''''\''  NVE4
 +----+----+  +----+----+               +----+----+  +----+----+
 | +--+--+ |  | +--+--+ |               | +--+--+ |  | +--+--+ |
 | |L2VNI| |  | |L2VNI| |               | |L2VNI| |  | |L2VNI| |
 | ++---++ |  | ++---++ |               | ++---++ |  | ++---++ |
 +--+---+--+  +--+---+--+               +--+---+--+  +--+---+--+
    |...|        |...|                     |...|        |...|
    TESs         TESs                      TESs         TESs

        DC Site A                              DC Site Z

       Figure 4 Tenant Virtual Network with Bridging/Routing
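
   The GW1 lookup path described above can be illustrated with the
   following non-normative Python sketch: the L2VNI behaves as a
   MAC/NVE table for the local L2VN, while the L3VNI is consulted by
   longest prefix match for destinations reachable across the L3VN.
   The addresses and names below are illustrative only (documentation
   ranges).

      # Hypothetical sketch of the GW1 lookup path in Figure 4. The
      # internal VNIF hands packets from the L2VNI to the L3VNI when
      # the destination is not in the local L2VN.

      import ipaddress
      from typing import Dict, Optional

      l2vni: Dict[str, str] = {                  # MAC -> local NVE
          "00:00:5e:00:53:01": "NVE1",
          "00:00:5e:00:53:02": "NVE2",
      }

      l3vni = {                                  # prefix -> NVE/GW
          ipaddress.ip_network("192.0.2.0/24"): "GW1",     # site A
          ipaddress.ip_network("198.51.100.0/24"): "GW2",  # site Z
      }

      def l3_lookup(dst_ip: str) -> Optional[str]:
          """Longest-prefix match in the L3VNI."""
          addr = ipaddress.ip_address(dst_ip)
          matches = [n for n in l3vni if addr in n]
          if not matches:
              return None
          return l3vni[max(matches, key=lambda n: n.prefixlen)]

      def gw1_forward(dst_mac: str, dst_ip: str) -> str:
          if dst_mac in l2vni:
              # Destination is within the local L2VN: bridge it.
              return f"bridge in L2VN toward {l2vni[dst_mac]}"
          # Frame is addressed to the gateway: route via the L3VNI.
          next_hop = l3_lookup(dst_ip)
          if next_hop is None:
              return "drop: no route in L3VNI"
          return f"route via VNIF, tunnel to {next_hop} in the L3VN"

      print(gw1_forward("00:00:5e:00:53:ff", "198.51.100.7"))
      # -> route via VNIF, tunnel to GW2 in the L3VN
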
5.3. Virtual Data Center (VDC)

   An enterprise DC today may use several routers, switches, and
   service devices to construct its internal network, DMZ, and
   external network access. A DC provider may offer a virtual DC to an
   enterprise customer to run enterprise applications such as web and
   e-mail services. Instead of using many hardware devices, with the
   overlay and virtualization technology of NVO3, DC operators can
   build these on top of a common network infrastructure for many
   customers and run the service applications on a per-customer basis.
   The service applications may include a firewall, gateway, DNS, load
   balancer, NAT, etc.

   Figure 5 below illustrates this scenario. For simplicity of
   illustration, it only shows the L3VN and L2VNs as virtual and
   overlay routers or switches. In this case, the DC operator
   constructs several L2 VNs (L2VNx, L2VNy, L2VNz in Figure 5) to
   group the tenant end systems on a per-application basis, and
   creates an L3VNa for the internal routing. A server or VM runs the
   firewall/gateway applications and connects to the L3VNa and the
   Internet. A VPN tunnel is also built between the gateway and the
   enterprise router. The design runs the enterprise Web/Mail/VoIP
   applications at the provider DC site; it lets the users at the
   enterprise site access the applications via the VPN tunnel, and via
   the Internet through a gateway at the enterprise site; and it lets
   Internet users access the applications via the gateway in the
   provider DC. The enterprise operators can also use the VPN tunnel
   or IPsec over the Internet to access the VDC for management
   purposes. The firewall/gateway provides application-level and
   packet-level gateway functions and/or a NAT function.

   The enterprise customer decides which applications are accessed by
   the intranet only and which by both the intranet and the extranet;
   the DC operators then design and configure the proper security
   policies and gateway functions. DC operators may further set
   different QoS levels for the different applications of a customer.

   This application requires the NVO3 solution to provide the DC
   operator with an easy way to create NVEs and VNIs for any design,
   to quickly assign TESs to a VNI, and to easily configure policies
   on an NVE.

                 Internet                        ^ Internet
                     ^                           |
                     |                        +--+---+
                     |                        |  GW  |
                     |                        +--+---+
                     |                           |
             +-------+--------+               +-+----+
             |FireWall/Gateway+---VPN Tunnel---+Router|
             +-------+--------+               +-+--+-+
                     |                           |  |
                  ...+...                        |..|
          +------: L3VNa :-------+               LANs
          |       .......       |
          |          |          |          Enterprise Site
       ...+...    ...+...     ...+...
      : L2VNx :  : L2VNy :   : L2VNz :
       .......    .......     .......
        |..|       |..|        |..|
        |  |       |  |        |  |
      Web Apps   Mail Apps   VoIP Apps

              Provider DC Site

   * The firewall/gateway may run on a server or on VMs

              Figure 5 Virtual Data Center Using NVO3
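
   The provisioning requirement stated above can be illustrated with
   the following non-normative Python sketch of a hypothetical
   controller interface; none of these calls correspond to an existing
   API. It only shows how a VDC design such as the one in Figure 5
   might be expressed as create/attach/policy operations.

      # Hypothetical sketch of a northbound provisioning model for a
      # VDC: create virtual networks, attach TESs, and attach per-VN
      # policies. Names and policy strings are illustrative only.

      from dataclasses import dataclass, field
      from typing import Dict, List

      @dataclass
      class VirtualNetwork:
          name: str
          layer: str                    # "L2" or "L3"
          members: List[str] = field(default_factory=list)   # TES ids
          policies: List[str] = field(default_factory=list)

      class VdcController:
          def __init__(self) -> None:
              self.vns: Dict[str, VirtualNetwork] = {}

          def create_vn(self, name: str, layer: str) -> VirtualNetwork:
              vn = VirtualNetwork(name, layer)
              self.vns[name] = vn
              return vn

          def attach_tes(self, vn_name: str, tes_id: str) -> None:
              self.vns[vn_name].members.append(tes_id)

          def add_policy(self, vn_name: str, rule: str) -> None:
              self.vns[vn_name].policies.append(rule)

      # Tenant design from Section 5.3: per-application L2 VNs plus an
      # L3 VN for internal routing, with the firewall/gateway attached
      # to the L3 VN.
      ctl = VdcController()
      for name in ("L2VNx", "L2VNy", "L2VNz"):
          ctl.create_vn(name, "L2")
      ctl.create_vn("L3VNa", "L3")
      ctl.attach_tes("L2VNx", "web-vm-1")
      ctl.attach_tes("L3VNa", "fw-gw-vm")
      ctl.add_policy("L3VNa",
                     "permit intranet; NAT to Internet via fw-gw-vm")
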
5.4. Federating NVO3 Domains

   Two general cases are: 1) federating NVO3 domains (ASes) managed by
   a single operator; 2) federating NVO3 domains managed by different
   operators. The details will be described in a future version of
   this document.

6. OAM Considerations

   NVO3 brings the ability for a DC provider to segregate tenant
   traffic. A DC provider needs to manage and maintain NVO3 instances.
   Similarly, a tenant needs to be informed about tunnel failures
   impacting its applications.

   Various OAM and SOAM tools and procedures are defined in [IEEE
   802.1ag], [ITU-T Y.1731], [RFC4378], [RFC5880], and [ITU-T Y.1564]
   for L2 and L3 networks and for users, including continuity check,
   loopback, link trace, testing, alarms such as AIS/RDI, and
   on-demand and periodic measurements. These procedures may apply to
   tenant overlay networks and tenants, not only for proactive
   maintenance but also to ensure support of Service Level Agreements
   (SLAs).

   As a tunnel traverses different networks, OAM messages need to be
   translated at the edge of each network to ensure end-to-end OAM.

   It is important that failures at lower layers which do not affect
   an NVO3 instance be suppressed.

7. Summary

   This document describes some basic, potential use cases of NVO3.
   The combination of these cases should give operators the
   flexibility and power to design more sophisticated cases for
   various purposes.

   The main difference between NVO3 and other overlay network
   technologies is that the client edges of an NVO3 network are
   individual, virtualized hosts, not network sites or LANs. NVO3
   enables these virtual hosts to communicate in a true virtual
   environment without regard to the physical network configuration.

   NVO3 allows individual tenant virtual networks to use their own
   address spaces and isolates those spaces from the network
   infrastructure. This approach not only segregates the traffic of
   multiple tenants on a common infrastructure but also makes VM
   placement and movement easier.

   DC applications are about providing virtual processing/storage,
   applications, and networking in a secure and virtualized manner, in
   which NVO3 is just one portion of an application. NVO3 decouples
   the applications from the DC network infrastructure configuration.

   NVO3's underlying network provides the tunneling between NVEs so
   that two NVEs appear as one hop to each other. Many tunneling
   technologies can serve this function. A tunnel may in turn be
   tunneled over other intermediate tunnels over the Internet or other
   WANs. It is also possible that intra-DC and inter-DC tunnels are
   stitched together to form an end-to-end tunnel between two NVEs.

   A DC virtual network may be accessed via an external network in a
   secure way. Many existing technologies can achieve this.

   The key requirements for NVO3 are: 1) traffic segregation; 2)
   support for a large number of virtual networks in a common
   infrastructure; 3) support for highly distributed virtual networks
   with sparse membership; 4) VM mobility; 5) automated or easy
   construction of an NVE and its associated TESs; 6) security; and
   7) NVO3 management [NVO3PRBM].

8. Security Considerations

   Security is a concern. DC operators need to provide a tenant with a
   secure virtual network, which means that the tenant's traffic is
   isolated from other tenants' traffic and that VMs not belonging to
   the tenant cannot be placed into the tenant's virtual network. They
   also need to prevent any tenant application from attacking the DC
   underlying network through the tenant virtual network, and to
   prevent one tenant application from attacking another tenant
   application via the DC networks. For example, a tenant application
   may attempt to generate a large volume of traffic to overload the
   DC underlying network. An NVO3 solution has to address these
   issues.

9. IANA Considerations

   This document does not request any action from IANA.

10. Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, and Randy Bush for
   their review, comments, and suggestions.

11. References

11.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", RFC 4364, February 2006.

   [IEEE 802.1ag]  "Virtual Bridged Local Area Networks - Amendment 5:
              Connectivity Fault Management", December 2007.

   [ITU-T Y.1731]  ITU-T G.8013/Y.1731, "OAM Functions and Mechanisms
              for Ethernet based Networks", 2011.

   [ITU-T Y.1564]  "Ethernet service activation test methodology",
              2011.

   [RFC4378]  Allan, D. and T. Nadeau, "A Framework for Multi-Protocol
              Label Switching (MPLS) Operations and Management (OAM)",
              RFC 4378, February 2006.

   [RFC4301]  Kent, S. and K. Seo, "Security Architecture for the
              Internet Protocol", RFC 4301, December 2005.

   [RFC4664]  Andersson, L. and E. Rosen, "Framework for Layer 2
              Virtual Private Networks (L2VPNs)", RFC 4664, September
              2006.

   [RFC4797]  Rekhter, Y., Bonica, R., and E. Rosen, "Use of Provider
              Edge to Provider Edge (PE-PE) Generic Routing
              Encapsulation (GRE) or IP in BGP/MPLS IP Virtual Private
              Networks", RFC 4797, January 2007.

   [RFC5641]  McGill, N. and C. Pignataro, "Layer 2 Tunneling Protocol
              Version 3 (L2TPv3) Extended Circuit Status Values",
              RFC 5641, April 2009.

   [RFC5880]  Katz, D. and D. Ward, "Bidirectional Forwarding
              Detection (BFD)", RFC 5880, June 2010.

11.2. Informative References

   [NVGRE]    Sridharan, M., et al., "NVGRE: Network Virtualization
              using Generic Routing Encapsulation",
              draft-sridharan-virtualization-nvgre-01, July 2012.

   [NVO3PRBM] Narten, T., et al., "Problem Statement: Overlays for
              Network Virtualization",
              draft-ietf-nvo3-overlay-problem-statement-00, September
              2012.

   [NVO3FRWK] Lasserre, M., et al., "Framework for DC Network
              Virtualization", draft-ietf-nvo3-framework-01, October
              2012.

   [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com

   [VXLAN]    Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework
              for Overlaying Virtualized Layer 2 Networks over Layer 3
              Networks", draft-mahalingam-dutt-dcops-vxlan-02, August
              2012.

Authors' Addresses

   Lucy Yong
   Huawei Technologies
   4320 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5837
   Email: lucy.yong@huawei.com

   Mehmet Toy
   Comcast
   1800 Bishops Gate Blvd.
   Mount Laurel, NJ 08054

   Phone: +1-856-792-2801
   E-mail: mehmet_toy@cable.comcast.com

   Aldrin Isaac
   Bloomberg
   E-mail: aldrin.isaac@gmail.com

   Vishwas Manral
   Hewlett-Packard Corp.
   191111 Pruneridge Ave.
   Cupertino, CA 95014

   Phone: 408-447-1497
   Email: vishwas.manral@hp.com

   Linda Dunbar
   Huawei Technologies
   4320 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com