idnits 2.17.1

draft-fw-nvo3-server2vcenter-01.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------

     No issues found here.

  Miscellaneous warnings:
  ----------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line
     does not match the current year

  -- The document date (February 18, 2013) is 4084 days in the past.
     Is this intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------

     No issues found here.

     Summary: 0 errors (**), 0 flaws (~~), 1 warning (==), 1 comment (--).

     Run idnits with the --verbose option for more detailed information
     about the items above.

----------------------------------------------------------------------

Network Virtualization Overlays Working                        R. Schott
Group                                                   Deutsche Telekom
Internet-Draft                                                     Q. Wu
Intended status: Informational                                    Huawei
Expires: August 22, 2013                               February 18, 2013

              Network Virtualization Overlay Architecture
                   draft-fw-nvo3-server2vcenter-01.txt

Abstract

   Multiple virtual machines (VMs) created on a single physical
   platform, or vServer, greatly improve the efficiency of data
   centers by enabling more work from less hardware.  Multiple
   vServers and their associated virtual machines, working together as
   one cluster, make good use of the resources of each vServer, even
   when those vServers are scattered across different data centers.
   VMs have a lifecycle from VM creation and VM power-on to VM
   power-off and VM deletion.  VMs may also move across the
   participating virtualization hosts (e.g., the virtualization server
   or hypervisor).  This document discusses how VMs, vServers, and the
   overlay network are managed by leveraging control plane and
   management plane functions, and describes the signaling
   functionality desired for Network Virtualization Overlays.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 22, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
     2.1.  Standards Language
   3.  Discussions
     3.1.  VM awareness and VM movement awareness
     3.2.  Why VM migration
     3.3.  Who manages VM
     3.4.  VM Grouping
     3.5.  What VM information should be managed
     3.6.  Who Triggers or Controls VM Movements
     3.7.  VM Monitoring
   4.  Use Cases
     4.1.  On Demand Network Provision Automation within the data
           center
     4.2.  Large inter-data centers Layer 2 interconnection and data
           forwarding
     4.3.  Enable multiple data centers present as one
     4.4.  VM migration and mobility across data centers
   5.  General Network Virtualization Architecture
     5.1.  NVE (Network Virtualization Edge Function)
     5.2.  vServer (virtualization Server)
     5.3.  vCenter (management plane function)
     5.4.  nDirector (Control plane function)
   6.  vServer to vCenter management interface
     6.1.  VM Creation
     6.2.  VM Termination
     6.3.  VM Registration
     6.4.  VM Unregistration
     6.5.  VM Bulk Registration
     6.6.  VM Bulk Unregistration
     6.7.  VM Configuration Modification
     6.8.  VM Profile Lookup/Discovery
     6.9.  VM Relocation
     6.10. VM Replication
     6.11. VM Report
   7.  nDirector to NVE Edge control interface
   8.  vServer to NVE Edge control interface
   9.  Security Considerations
   10. IANA Considerations
   11. Contributors
   12. References
     12.1. Normative References
     12.2. Informative References
   Authors' Addresses

1.  Introduction

   Multiple virtual machines (VMs) created on a single physical
   platform greatly improve the efficiency of data centers by enabling
   more work from less hardware.
   Multiple vServers and their associated virtual machines, working
   together as one cluster, make good use of the resources of each
   vServer, even when those vServers are scattered across different
   data centers.  VMs have a lifecycle from VM creation and VM startup
   to VM power-off and VM deletion.  VMs may also move across the
   participating virtualization hosts (e.g., the virtualization server
   or hypervisor).  One example: as the workload on one physical
   server increases, or a physical server needs an upgrade, VMs can be
   moved to other available lightly loaded servers to ensure that
   service level agreement and response time requirements are met.  We
   call this VM movement or relocation VM migration.  When the
   workload decreases, the VMs can be moved back, allowing the unused
   server to be powered off to save cost and energy.  Another example:
   as a tenant moves, the VMs associated with this tenant may also
   move to a place that is closer to the tenant and provides a better
   user experience (e.g., larger bandwidth with lower latency).  We
   call such movements VM mobility.  VM migration refers to the
   transfer of a VM image, including memory, storage, and network
   connectivity, while VM mobility refers to sending data to the
   moving tenant associated with the VM and emphasizes service
   non-disruption during a tenant's movement.  This document advocates
   the distinction between VM mobility and VM migration, both
   important notions in VM management.  The implication is that
   different signaling or protocols for VM mobility and VM migration
   might be chosen to automate network management for VM movement,
   possibly reusing existing protocols or schemes to manage VM
   migration and VM mobility separately.  Unfortunately, the two are
   sometimes mixed up: VM migration management is not distinguished
   from VM mobility management, and one common protocol is expected to
   support both.  This seems to simplify the overall protocol design,
   but it is difficult or impossible to run one such protocol across
   both the VM mobility management system that manages VM mobility and
   the VM management platform that manages VM attributes.

   This document discusses how VMs, vServers, and the overlay network
   to which VMs connect are managed, describes signaling for VM and
   overlay network management, and argues that VMs need management or
   control functionality support but can be managed without VM
   mobility functionality support.

2.  Terminology

2.1.  Standards Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

3.  Discussions

3.1.  VM awareness and VM movement awareness

   Virtual machines usually operate under the control of server
   virtualization software residing on the physical compute server.
   This server virtualization software is commonly referred to as the
   'hypervisor'.  The hypervisor is the container of the VM and
   provides shared compute/memory/storage and network connectivity to
   each VM that it hosts.  Therefore, the hypervisor or the
   virtualization server MUST be aware of the VMs that it hosts.
   However, it should not be aware of VMs that it does not host.  When
   VMs hosted on different virtualization servers need to communicate
   with each other, packets from one VM are forwarded by a virtual
   switch within the virtualization server or hypervisor to the VMs on
   the other virtualization server.  Since the virtual switch resides
   within the hypervisor or virtualization server, the rule on VM
   awareness that applies to the hypervisor should apply to the
   virtual switch as well.

   Unlike VM awareness, VM movement awareness is the capability of
   knowing the location update of a VM.  For example, when a VM moves
   out of its hypervisor and onto another host, the original
   hypervisor that hosted the VM is aware of the movement or location
   change but may not be able to keep track of the new location after
   the VM moves.  Therefore, an external party that maintains the
   mapping between a VM's identity and its current location is needed
   to keep track of VM movements.
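   As a purely illustrative sketch (the names LocationRegistry,
   record_move, and locate are hypothetical and not part of any
   specified protocol), such an external mapping party could be
   modeled as a simple in-memory registry; a real system would
   persist and replicate this state:

      # Minimal sketch of the external mapping party described above
      # (hypothetical names; illustration only).
      class LocationRegistry:
          """Tracks the current host (hypervisor/vServer) of each VM."""

          def __init__(self):
              self._location = {}  # VM identity (e.g., MAC) -> host

          def record_move(self, vm_id, new_host):
              old_host = self._location.get(vm_id)
              self._location[vm_id] = new_host
              return old_host  # lets callers notify the previous host

          def locate(self, vm_id):
              return self._location.get(vm_id)

      registry = LocationRegistry()
      registry.record_move("00:11:22:33:44:55", "vServer1")
      registry.record_move("00:11:22:33:44:55", "vServer3")
      assert registry.locate("00:11:22:33:44:55") == "vServer3"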
3.2.  Why VM migration

   VM migration refers to VM movement within a virtual environment in
   response to events or conditions, or based on requirements.  The
   events or conditions could be, for example, very high workloads
   experienced by the VMs, upgrades of the physical server or
   virtualization server, or load balancing between virtualization
   servers.  The requirements could be, for example, low power and low
   cost requirements or a service continuity requirement.  When a VM
   is moved without service disruption, this VM movement is usually
   called VM mobility.  However, it is very difficult to provide
   transparent VM mobility support, since it not only needs to keep
   connections uninterrupted but also needs to move the whole VM image
   from one place to another, which may take a long downtime (e.g.,
   more than 400 ms) that can be noticed by the end user.

   Fortunately, VMs may be migrated without VM mobility support.  For
   example, a server manager or administrator can move a running
   virtual machine or application between different physical machines
   without disconnecting the client or application, provided that the
   client or application supports VM suspend and resume operations, or
   is stopped at the source before the movement and restarted at the
   destination after the movement.

   In cases where VM mobility is really needed, it is recommended that
   a copy of the VM SHOULD be replicated from the source to the
   destination, so that the VM running on the source is not affected
   during the replication.  When the replication to the destination
   completes and the VM on the destination restarts, the VM on the
   source can be stopped.  However, how the VM on the destination
   coordinates with the VM on the source to learn whom the latter is
   communicating with is a challenging issue.

3.3.  Who manages VM

   To ensure the quality of applications (e.g., real-time
   applications) or provide the same service level agreement, the VM's
   state (i.e., the network attributes and policies associated with
   the VM) should be moved along with the VM when the VM moves across
   participating virtualization hosts (e.g., virtualization servers or
   hypervisors).  These network attributes associated with the VM
   should be enforced on the physical switch and the virtual switch
   corresponding to the VM to avoid security and access threats.  The
   hypervisor or the virtualization server may maintain the network
   attributes for each VM.  However, when VMs are moved from a
   previous server to a new server, the old server and the new server
   may have no means to find each other.  Therefore, an external party
   called the VM network management system (e.g., a Cloud Broker) is
   needed and should get involved to coordinate between the old server
   and the new server to establish the association between the network
   attributes/policies and the VM's identity.  If the VM management
   system does not span data centers and the VM is moved between data
   centers, the VM network management system in one data center may
   also need to coordinate with the VM network management system in
   the other data center.
3.4.  VM Grouping

   VM grouping significantly simplifies administration tasks when
   managing large numbers of virtual machines, as new VMs are simply
   added to existing groups.  With grouping, similar VMs can be
   grouped together and assigned the same networking policies for all
   members of the group, ensuring consistent allocation of resources
   and security measures to meet service level goals.  Members of the
   group retain the group attributes wherever they are located or move
   within the virtual environment, providing a basis for dynamic
   policy assignments.  VM groups can be maintained or distributed on
   the virtualization servers, or can be maintained in a centralized
   place, e.g., the VM management platform that manages all the
   virtualization servers in the data center.  VM groups maintained on
   each virtualization server may change at any time due to various VM
   operations (e.g., VM adding, VM removing, VM moving).  Therefore,
   VM groups need to be synchronized with the central VM management
   platform.  Profiles containing network configurations such as VLAN,
   traffic shaping, and ACLs for VM groups can be automatically
   synchronized to the central VM management platform as well.  This
   way, consistent network policies can be enforced regardless of the
   VM's location.

3.5.  What VM information should be managed

   The ability to identify VMs within the physical hosts is very
   important.  With the ability to identify each VM uniquely, the
   administrator can apply the same philosophy to VMs as is used with
   physical servers.  VLAN and QoS settings can be provisioned, and
   ACL attributes can be set, at the VM level, with permit and deny
   actions based on layer 2 to layer 4 information.  In the VM
   environment, a VM is usually identified by a MAC or IP address and
   belongs to one tenant.  Typically, one tenant may possess one VM, a
   group of VMs in one virtual network, or several groups of VMs
   distributed across multiple virtual networks.  At the request of
   the tenant, a VM can be added, removed, or moved by the
   virtualization server or the hypervisor.  When the VM moves, the
   network attributes or configuration attributes associated with the
   VM should be moved with the VM as well, to ensure that service
   level agreements and response times are met.  These network
   attributes include access and tunnel policies and (L2 and/or L3)
   forwarding functions and should be moved with the VM information.
   We use the Virtual Network Instance ID to represent those network
   attributes.  One tenant has at least one Virtual Network ID.
   Therefore, each tenant should at least include the following
   information:

   o  vCenter Name or Identifier

   o  vServer Name or Identifier

   o  Virtual Network ID (VNID)

   o  VLAN tag value

   o  VM Group ID

   o  VM MAC/IP address

   Note that a tenant may have a tenant ID, which could be a
   combination of this information.
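   As an illustration only, the per-tenant information listed above
   could be carried in a record such as the following (the field
   names are hypothetical; no encoding is implied):

      # Illustrative record for the per-tenant information above
      # (hypothetical field names).
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class TenantVMRecord:
          vcenter_id: str     # vCenter Name or Identifier
          vserver_id: str     # vServer Name or Identifier
          vnid: int           # Virtual Network ID (VNID)
          vlan_tag: int       # VLAN tag value
          vm_group_id: str    # VM Group ID
          vm_addresses: List[str] = field(default_factory=list)  # MAC/IP

          @property
          def tenant_id(self):
              # A tenant ID could be a combination of this
              # information, as noted above.
              return f"{self.vcenter_id}/{self.vnid}/{self.vm_group_id}"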
3.6.  Who Triggers or Controls VM Movements

   A VM can be moved within the virtual environment in response to
   events or conditions.  An issue here is: who triggers and controls
   VM movement?  In a small or large scale data center, the server
   administrator may respond quickly to a system fault or server
   overload and move a virtual machine or a group of VMs to different
   physical machines.  However, it is hard for the server
   administrator to respond to dynamic VM movement and creation, since
   he does not keep track of VM movements.

   In large scale data centers, the server administrator may be more
   hesitant to make use of VM movements because of the time demands of
   managing the related networking requirements.  Therefore, automated
   solutions that safely create and move virtual machines and free
   network or server administrators from these responsibilities are
   highly desirable.

   An external party (i.e., the control or management plane function)
   is needed to play the role of the server administrator; it should
   keep track of VM movements and respond quickly to dynamic VM
   creation and movement.

   When a tenant moves from one place to another, the VM movement
   associated with that tenant should be reported to the control or
   management plane function.  When a tenant requests to improve the
   quality of an application and shorten the response time, the
   control or management function can trigger the VM to be moved to a
   server that is closer to the user.

3.7.  VM Monitoring

   In order to identify misbehaving VMs, VM monitoring is very
   important.  The VM monitoring mechanism keeps track of the
   availability of VMs and their resource entitlements and
   utilization, e.g., CPU utilization, disk and memory utilization,
   network utilization, and network storage utilization.  It ensures
   that there is no overloading of resources whereby many service
   requests cannot be simultaneously fulfilled due to the limited
   resources available.  VM monitoring is also useful for server
   administration; it reports the status information of the VMs or VM
   groups on each server to the VM management and provisioning system.

4.  Use Cases

4.1.  On Demand Network Provision Automation within the data center

   The tenant Alice logs into the user portal via her laptop and
   requests to play a cloud game using a VM she has already rented.
   The request is redirected to the provisioning system, the vCenter.
   The vCenter retrieves the service configuration information,
   locates the vServer to which the VM belongs, and then provisions
   resources for the VM running on that vServer.  The vServer signals
   the VM operation parameter update to the NVE to which the VM is
   connected.  In turn, the NVE device interacts with the DC nDirector
   to configure policy and populate the forwarding table on each
   network element (e.g., ToR, DC GW) in the path from the Tenant End
   System to the NVE device.  In addition, the DC nDirector may also
   populate the mapping table that maps the destination address
   (either L2 or L3) of a packet received from a VM to the
   corresponding destination address of the NVE device.

   +--------+        +--------+
   |        |        |        |
   | Tenant |        |  User  |
   |(Client)|------->| Portal |
   |        |        |        |              -----------
   +---|----+        +---+----+         /---           ---\
       |                 |            //                    \\
       |                 |           /                        \
       |             +---+---+     |       +---------+         |
       |             |       | Signaling  |         |          |
       |             |vCenter+---Path---->| vServer |          |
       |             |       | 1.Resource |         |          |
       |             +-------+  Provision +---------+          |
       |                                  |VM VM VM |          |
       |                                  +----+----+          |
       |                                       |               |
       |   +---------+                         |               |
       +---| DC      |                    +----+----+          |
           |nDirector|     Data Path      |         |          |
           | /Oracle |<------------------>|NVEDevice|          |
           |         |                    |         |          |
           +---------+                    +---------+         /
                2.Network Policy control                     /
                     \\                                    //
                      \\\                               ///
                         \-----                   -----/
                               -------------------
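   The provisioning flow in this use case could be sketched roughly
   as follows (purely illustrative; all function names are
   hypothetical, and each step corresponds to a message in the figure
   above):

      # Illustrative sequence for the on-demand provisioning flow
      # above (hypothetical names; each print stands for a
      # signaling message shown in the figure).
      def provision_on_demand(tenant, vm_id):
          vserver = locate_vserver(vm_id)  # vCenter consults config
          print(f"vCenter -> {vserver}: 1. provision resources"
                f" for {vm_id}")
          print(f"{vserver} -> NVE: VM operation parameter update")
          print("NVE -> DC nDirector: 2. network policy control,"
                " populate forwarding and mapping tables")

      def locate_vserver(vm_id):
          # Stub: a real vCenter would look up which vServer
          # hosts vm_id in its service configuration.
          return "vServer1"

      provision_on_demand("Alice", "vm-gaming-1")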
4.2.  Large inter-data centers Layer 2 interconnection and data
      forwarding

   When the tenant Alice, using VM1 in data center 1, communicates
   with the tenant Bob, using VM2 in data center 2, VM1 should already
   know the layer 2 identity of VM2; however, VM1 may not know which
   NVE Edge VM2 is placed behind.  In order to learn the location of
   the remote NVE device associated with VM2, a mapping table is
   needed on the local NVE device associated with VM1, which is used
   to map the final destination (i.e., the identity of VM2) to the
   destination address of the remote NVE device associated with VM2.
   In order to realize this, the nDirector should tell the local NVE
   device associated with VM1 the layer 3 location identifier of the
   remote NVE device associated with VM2 and establish the mapping
   between the layer 2 identity of VM2 and the layer 3 identity of the
   remote NVE Edge associated with VM2.  In addition, the nDirector
   may tell all the remote NVE devices associated with the VMs with
   which VM1 is communicating to establish the mapping between the
   layer 2 identity of VM1 and the layer 3 identity of the local NVE
   device associated with VM1.  When this is done, a data packet from
   VM1 can be sent to the NVE device associated with VM1; that NVE
   device identifies layer 2 frames targeted at remote destinations
   based on the established mapping table, encapsulates them into IP
   packets, and transmits them across the layer 3 network.  After a
   packet arrives at the remote NVE Edge, the remote NVE Edge device
   decapsulates the layer 3 packet, takes out the layer 2 frame, and
   forwards it to the ultimate destination, VM2.

                             +---------+
                             | DC      |
                             |nDirector|
                 +-----------| /Oracle |-----------+
                 |           |         |           |
                 |           +---------+           |
                 |                              ---+----
              ---+---                       ----   |    ----
         /----   |   ----\              ///        |        \\\
        //       |        \\          //           |           \\
       //  +-----+-----+    \\       /      +------+----+        \\
      /    |           |      \     /       |           |          \
     /     | NVE Edge1 |       \   /        | NVE Edge2 |           \
    /      |           |        \  |        |           |           |
    |      +-----------+        |  |        +-----------+           |
    |                           |  |                                |
    |  +---------+ +---------+  |  |  +---------+ +---------+       |
    |  |         | |         |  |  |  |         | |         |       |
    |  | vServer | | vServer |  |  |  | vServer | | vServer |       |
    |  |         | |         |  |  |  |         | |         |       |
    |  +---------+ +---------+  |  |  +---------+ +---------+       |
    |  |VM VM VM | |VM VM VM |  |  \  |VM VM VM | |VM VM VM |       /
    |  +---------+ +---------+  |   \ +---------+ +---------+      /
     \                         /     \                            /
      \                       /       \\                        //
       \\                   //         \\\                    ///
         \\               //              ----          ----
           \----     ----/                    ----------
                -----
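   The lookup-and-encapsulation step described above can be
   illustrated with a minimal sketch (the names are hypothetical, and
   the actual encapsulation format is out of scope for this
   document):

      # Minimal sketch of the NVE lookup-and-encapsulation step
      # (hypothetical names; the encapsulation is abstracted).
      # Mapping table pushed by the nDirector: inner destination
      # MAC of the remote VM -> layer 3 address of its NVE Edge.
      mapping_table = {
          "00:00:00:00:00:02": "192.0.2.2",  # VM2 behind NVE Edge2
      }

      def forward(dst_mac, l2_frame):
          remote_nve = mapping_table.get(dst_mac)
          if remote_nve is None:
              return None  # unknown destination; query the nDirector
          # Encapsulate the layer 2 frame in an IP packet addressed
          # to the remote NVE Edge, which decapsulates and delivers
          # it to the ultimate destination.
          return {"outer_dst_ip": remote_nve, "payload": l2_frame}

      pkt = forward("00:00:00:00:00:02", b"frame")
      assert pkt["outer_dst_ip"] == "192.0.2.2"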
4.3.  Enable multiple data centers present as one

   In order to support the interconnection of more data centers and
   enable more efficient use of the resources in each data center,
   multiple data centers may closely coordinate with each other to
   provide better load balancing capability and work as one large DC,
   with the involvement of the nDirectors that manage the DCs; e.g.,
   the DC nDirectors in each data center may coordinate with each
   other and form one common control plane.

                  -----------------------
          ////////                       \\\\\\\\
         |///                                 \\\|
         |                                       |
         |           Internet Backbone           |
         |                                       |
         |\\\                                 ///|
          \\\\\\\\                       ////////
                  -----------------------
   +--------------------------------------------------------------+
   |One Unified Director                                           |
   |      +---------+       +---------+       +---------+          |
   |      | egress  |       | egress  |       | egress  |          |
   |      |  GW1/   |       |  GW2/   |       |  GW3/   |          |
   |      |nDirector|       |nDirector|       |nDirector|          |
   |      +----+----+       +----+----+       +----+----+          |
   +-----------+-----------------+-----------------+---------------+
               |                 |                 |
           /---+--\          /---+--\          /---+--\
         //+--------+\\    //+--------+\\    //+--------+\\
         | |  DC1   | |    | |  DC2   | |    | |  DC3   | |
         | |        | |    | |        | |    | |        | |
         \\+--------+//    \\+--------+//    \\+--------+//
           \------/          \------/          \------/

4.4.  VM migration and mobility across data centers

   The tenant Alice is using VM1 in data center 1 to communicate with
   the tenant Bob, who is using VM9 in data center 2.  For business
   reasons, the tenant Alice travels to Bob's city, where data center
   2 is situated, but still uses VM1 in data center 1 to communicate
   with the tenant Bob.  In order to provide a better user experience,
   VM1 may be moved from vServer1 to the new vServer3 in data center
   2, which is closer to where the tenant Alice is located.  The
   vCenter can get involved to interact with data center 1 and data
   center 2 and help replicate and relocate VM1 to the new location.
   When the VM movement is done, the NVE device connecting to VM1 and
   associated with vServer3 should interact with the nDirector to
   update the mapping table maintained in the nDirector with the new
   NVE device location associated with VM1.  In turn, the nDirector
   should update the mapping tables in all the NVE devices associated
   with the VMs with which VM1 is communicating.

                             +---------+
                             |         |
                             |nDirector|
                 +-----------| /Oracle |-----------+
                 |           |         |           |
                 |           +---------+           |
                 |                              ---+----    +--------+
              ---+---                       ----   |    ----|vCenter2|
         /----   |   ----\              ///  DC2   |        +--------+
        //  DC1  |        \\          //     +-----+-----+       \\
       //  +-----+-----+ +--------+  /       |           |         \\
      /    |           | |vCenter1| /   +---------+ +---------+      \
     /     | NVE Edge1 | +--------+/    |NVE Edge3| |NVE Edge4|       \
    /      |           |        \  |    +----+----+ +----+----+       |
    |      +-----+-----+        |  |         |           |            |
    |     +------+------+       |  |         |           |            |
    | +---+-----+ +-----+---+   |  |  +---------+ +----+----+         |
    | |         | |         |   |  |  |         | |         |         |
    | | vServer1| | vServer2|   |  |  | vServer3| | vServer4|         |
    | |         | |         |   |  |  |         | |         |         |
    | +---------+ +---------+   |  |  +---------+ +---------+         |
    | |VM1VM2VM3| |VM4VM5VM6|   |  \  |VM1VM7VM8| |   VM9   |         /
    | +---------+ +---------+   |   \ +---------+ +---------+        /
     \                         /     \                              /
      -------------------------       ------------------------------
    +--------+                        +--------+
    |        |                        |        |
    | Tenant |  +-------------------->| Tenant |
    | Alice  |                        | Alice  |
    +--------+                        +--------+

5.  General Network Virtualization Architecture

   When multiple virtual machines (VMs) are created in one vServer,
   the VMs can be managed by this vServer.
   However, a vServer cannot be an isolated node, since a VM can be
   moved from one vServer to another in the same or a different data
   center, which is beyond the control of the vServer that created
   that VM.  We envision the network virtualization architecture to
   consist of vServers (virtualization servers), nDirectors, vCenters
   (the aforementioned VM and vServer management platform), and NVE
   Edges.  The vCenter is placed on the management plane within each
   data center and can be used to manage a large number of vServers in
   each data center.  The vServer connects to an NVE Edge in its own
   data center either directly or via a switched network (typically
   Ethernet).  The nDirector is placed on the control plane and
   manages one or multiple data centers.  When the nDirector manages
   multiple data centers, the nDirector should interact with all the
   NVE Edges in each data center to facilitate large inter-data center
   layer 2 interconnection, VM migration and mobility across data
   centers, and enabling multiple data centers to work and present as
   one.

   ... .... .................... ...................
   . DC1             +--------+      +---------+   .
   .                 |        |      |         |   .
   .             +---|vServer1+------+NVE Edge1+-------+
   .             |   |        |      |         |   .   |
   .             |   +--------+      +---------+   .   |
   .             |   |VM VM VM|                    .   |
   .             |   +--------+                    .   |
   . +--------+  |                                 .   |
   . |        |  |                                 .   |
   . |vCenter1+--+                                 .   |
   . |        |  |   +--------+      +---------+   .   |
   . +--------+  |   |        |      |         |   .   |
   .             +---|vServer2+------|NVE Edge2+-------+
   .                 |        |      |         |   .   |
   .                 +--------+      +---------+   .   |
   .                 |VM VM VM|                    .   |   +---------+
   .                 +--------+                    .   |   |nDirector|
   ... .... .................... .................     +---+ /Oracle |
   ... .... .................... .................     |   |         |
   . DC2             +--------+                    .   |   +---------+
   .                 |        |                    .   |
   .             +---|vServer3+---+                .   |
   .             |   |        |   |                .   |
   .             |   +--------+   |  +---------+   .   |
   . +--------+  |   |VM VM VM|   |  |         |   .   |
   . |        |  |   +--------+   +--|NVE Edgex+-------+
   . |vCenter2+--+                   |         |   .
   . |        |  |   +--------+   +--+---------+   .
   . +--------+  |   |        |   |                .
   .             +---|vServerx+---+                .
   .                 |        |                    .
   .                 +--------+                    .
   .                 |VM VM VM|                    .
   .                 +--------+                    .
   . .... .................... .....................

                Network Virtualization Architecture

5.1.  NVE (Network Virtualization Edge Function)

   As defined in Section 1.2 of [I.D-ietf-nvo3-framework], the NVE is
   a network entity that sits on the edge of the NVO3 network and
   could be implemented as part of a virtual switch within a
   hypervisor, a physical switch or router, or a Network Service
   Appliance (e.g., NAT/FW).  When VM1, connecting to one NVE Edge,
   wants to communicate with other VMs that are connecting to other
   NVE Edges, the NVE Edge associated with VM1 should distribute, via
   the nDirector, the mapping between the layer 2 identity of VM1 and
   the NVE Edge associated with VM1 to all the NVE Edges associated
   with the VMs with which VM1 is communicating.  In addition, the NVE
   Edge associated with VM1 either interacts with the nDirector or
   learns from the other NVE Edges that distribute their mapping
   tables through the nDirector, so as to build a mapping table
   between the layer 2 identities of the VMs with which VM1 is
   communicating and the NVE Edges associated with those VMs, and it
   forwards data packets based on that mapping table.
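   As a rough sketch only (the names are hypothetical; candidate
   signaling protocols are discussed in Section 7), the nDirector's
   distribution role amounts to pushing each (VM identity, NVE
   address) binding to the other NVE Edges in the same virtual
   network:

      # Rough sketch of the nDirector distributing VM-to-NVE
      # mappings (hypothetical names; illustration only).
      from collections import defaultdict

      class NDirector:
          def __init__(self):
              self.members = defaultdict(set)  # VNID -> member NVEs
              self.mappings = {}               # (VNID, VM id) -> NVE

          def register(self, vnid, vm_id, nve_addr):
              self.members[vnid].add(nve_addr)
              self.mappings[(vnid, vm_id)] = nve_addr
              # Push the new binding to every other NVE Edge in the
              # same virtual network.
              for peer in self.members[vnid] - {nve_addr}:
                  self.push(peer, vm_id, nve_addr)

          def push(self, peer, vm_id, nve_addr):
              print(f"push to {peer}: {vm_id} -> {nve_addr}")

      d = NDirector()
      d.register(vnid=1001, vm_id="VM1", nve_addr="NVE-Edge1")
      d.register(vnid=1001, vm_id="VM2", nve_addr="NVE-Edge2")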
5.2.  vServer (virtualization Server)

   The vServer serves as a platform for running virtual machines; it
   is installed on the physical hardware in a virtualized environment
   and provides physical hardware resources dynamically to the virtual
   machines as needed.  It is also referred to as "the virtualization
   server" or the hypervisor.  It may get instructions from the
   provisioning systems (i.e., vCenters) to create, modify, and
   terminate VMs for each tenant.  It may also interact with the NVE
   Edge to inform the NVE about the mapping or association between the
   vServer, the virtual machine, and the network connection.  This
   interaction can also be used to release the association between the
   vServer and the NVE Edge.

5.3.  vCenter (management plane function)

   The vCenter serves as a platform for managing, in one data center,
   not only the assignment of virtual machines to vServers but also
   the assignment of resources to the virtual machines, and it
   provides a single control point for the data center.  It unifies
   the resources of individual vServers so that they can be shared
   among virtual machines in the entire data center.  It may interact
   with the vServers to allocate virtual machines to a vServer and
   monitor the performance of each vServer and each VM in the data
   center.  The vCenter may maintain the mapping from vServers to
   network connections, which contains not only vServer configurations
   such as the vServer name, vServer IP address, and port number, but
   also the VM configurations for each tenant end system associated
   with that vServer.  When a vCenter hierarchy is used, the root
   vCenter, which has a global view, may interact with the child
   vCenters to decide which child vCenter is responsible for assigning
   the virtual machine to which vServer, based on topological
   information, resource utilization information in each data center,
   and local policy information.

5.4.  nDirector (Control plane function)

   The nDirector is implemented as part of the DC Gateway, sits on top
   of the vCenter in each data center, and serves as an orchestration
   layer to allow layer 2 interconnection and forwarding between data
   centers and to enable multiple data centers to present as one.  The
   nDirector may interact with the NVE Edge to populate the forwarding
   table in the path from the NVE Edge device to the Tenant End
   System, react to NVE requests to assign network attributes such as
   VLAN, ACL, and QoS parameters on all the network elements in the
   path from the NVE device to the Tenant End System, and manipulate
   the QoS control information in the path between the NVE Edges
   associated with the communicating VMs.  In addition, the nDirector
   may distribute the mapping between the layer 2 identity of a VM and
   the NVE Edge associated with that VM to all the other NVE Edges and
   maintain this mapping table in the nDirector.

6.  vServer to vCenter management interface

6.1.  VM Creation

   The vCenter requests the vServer to create a new virtual machine
   and allocate the resources for its execution.

6.2.  VM Termination

   The vCenter requests the vServer to delete a virtual machine and
   clean up the underlying resources for that virtual machine.

6.3.  VM Registration

   When a VM is created for a tenant on the vServer, the vServer may
   create a VM profile for this tenant containing the VM identity,
   VNID, port, and VID, and register the VM configuration associated
   with this tenant with the vCenter.  Upon receiving such a
   registration request, the vCenter should check whether it has
   already established a VM profile for the corresponding tenant: if
   yes, the vCenter should update the existing VM profile for that
   tenant; otherwise, the vCenter should create a new VM profile for
   that tenant.
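   A minimal sketch of this create-or-update behavior on the vCenter
   side follows (hypothetical names and structures; the on-the-wire
   encoding is not specified here):

      # Minimal sketch of the vCenter's create-or-update handling
      # of a VM registration (hypothetical names and structures).
      vm_profiles = {}  # tenant id -> profile (identity, VNID, ...)

      def handle_registration(tenant_id, profile):
          if tenant_id in vm_profiles:
              vm_profiles[tenant_id].update(profile)  # update existing
          else:
              vm_profiles[tenant_id] = dict(profile)  # create new

      handle_registration("tenant-a",
                          {"vm_id": "00:00:00:00:00:01",
                           "vnid": 1001, "port": 7, "vid": 100})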
6.4.  VM Unregistration

   When a VM is removed for a tenant from the vServer, the vServer may
   remove the VM profile for this tenant containing the VM identity,
   VNID, port, and VID, and deregister the VM configuration associated
   with that tenant from the vCenter.  Upon receiving such a
   deregistration request, the vCenter should check whether it has
   already established a VM profile for that tenant: if yes, the
   vCenter should remove the existing VM profile for that tenant;
   otherwise, the vCenter should report an alert to the vServer.

6.5.  VM Bulk Registration

   When a large number of VMs are created on one vServer and share the
   same template, the vServer may create a profile for this group of
   VMs and send a bulk registration request containing the group
   identifier and the associated VM profile to the vCenter.  Upon
   receiving such a bulk registration request, the vCenter should
   create or update the profile for this group of VMs.

6.6.  VM Bulk Unregistration

   When a large number of VMs that share the same template are removed
   from one vServer, the vServer may remove the profile for this group
   of VMs and send a bulk unregistration request containing the group
   identifier and the associated VM profile to the vCenter.  Upon
   receiving such a bulk unregistration request, the vCenter should
   remove the profile for this group of VMs.

6.7.  VM Configuration Modification

   The vCenter requests the vServer to update a virtual machine and
   reallocate the resources for its execution.

6.8.  VM Profile Lookup/Discovery

   When VM1 on one vServer wants to communicate with VM2 on another
   vServer, the client associated with VM1 should check with the
   vCenter, based on the identity of VM2, whether a profile for VM2
   already exists and which vServer maintains that VM configuration.
   If yes, the vCenter should reply to the client with the address or
   name of the vServer on which VM2 is situated.

6.9.  VM Relocation

   When the vCenter is triggered to move one VM or a group of VMs from
   a source vServer to a destination vServer, the vCenter should send
   a VM relocation request to both vServers and update its profile to
   indicate the new vServer that maintains the VM configuration for
   that VM or group of VMs.  The relocation request will trigger the
   VM image to be moved from the source vServer to the destination
   vServer.

6.10.  VM Replication

   A tenant moves between vServers or between data centers and may, as
   an Internet user, want to access applications via the VM without
   service disruption.  To achieve this, he can choose to access the
   applications via the same VM, without moving the VM, when he moves.
   However, the VM he is using may then be far away from where he
   stays.  In order to provide a better user experience, the tenant
   may request the vCenter, through the nDirector, to move the VM to a
   vServer that is closer to where he stays while keeping the service
   uninterrupted.  In such a case, the vCenter may interact with the
   vServer that hosts the original VM to choose a vServer that is
   closer to the tenant and move a copy of the VM image to the
   destination vServer, as sketched below.
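   The vCenter-side behavior for this replication step could look
   roughly like the following (hypothetical names; distance is an
   abstract score; illustration only):

      # Rough sketch of the replication step in Section 6.10
      # (hypothetical names; illustration only).
      def choose_destination(vservers, tenant_location):
          """Pick the vServer closest to where the tenant stays."""
          return min(vservers,
                     key=lambda v: abs(v["location"] - tenant_location))

      def replicate(vm_id, source, destination, profiles):
          # Copy the VM image; the source VM keeps running until the
          # copy on the destination restarts (see Section 3.2).
          print(f"copy image of {vm_id}: {source} -> {destination}")
          profiles[vm_id]["vserver"] = destination

      vservers = [{"name": "vServer1", "location": 100},
                  {"name": "vServer3", "location": 5}]
      dest = choose_destination(vservers, tenant_location=0)
      profiles = {"vm-1": {"vserver": "vServer1"}}
      replicate("vm-1", "vServer1", dest["name"], profiles)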
6.11.  VM Report

   When a VM is created on, moved to, added to, or removed from the
   vServer, the VM monitor should be enabled to report the status
   information and resource availability of that VM to the vCenter.
   In this way, the vCenter can know which server is overloaded and
   which server is unused or least used.

7.  nDirector to NVE Edge control interface

   Signaling between the nDirector and the NVE device can be used to
   do three things:

      Enforce the network policy for each VM in the path from the NVE
      Edge associated with the VM to the Tenant End System.

      Populate the forwarding table in the path from the NVE Edge
      associated with the VM to the Tenant End System in the data
      center.

      Populate the mapping table in each NVE Edge that is in the
      virtual network across the data centers under the control of
      the nDirector.

   One could reuse existing protocols, among them NETCONF, SNMP, RSVP,
   RADIUS, and Diameter, to signal the messages between the nDirector
   and the NVE Edges.  The nDirector needs to know which NVE Edges
   belong to the same virtual network and then distribute the routes
   between these NVE Edges to each NVE Edge belonging to that virtual
   network.  In addition, the nDirector may interact with the NVE Edge
   and the associated overlay network in the data center in response
   to a provisioning request from the NVE Edge, populate the
   forwarding table on the associated overlay network elements in the
   data path from the Tenant End System to the NVE Edge, and install
   network policy on the network elements in the data path between the
   Tenant End System and the NVE Edge.  For details of signaling
   control/forwarding plane information between network virtualization
   edges (NVEs), please see [I.D-wu-nvo3-nve2nve].

8.  vServer to NVE Edge control interface

   Signaling between the vServer and the NVE Edge is used to establish
   the mapping between the vServer that hosts a VM and the network
   connection on which the VM relies.  For more details on the
   signaling and operation, please see the relevant NVO3 drafts.

9.  Security Considerations

   Threats may arise when VMs move into a hostile VM environment,
   e.g., when the VM identity is exploited by adversaries to launch
   denial of service or phishing attacks [Phishing].  Further details
   are to be explored in a future version of this document.

10.  IANA Considerations

   This document has no actions for IANA.

11.  Contributors

   Thanks to Xiaoming Fu for helping provide input to the initial
   draft of this document.

12.  References

12.1.  Normative References

   [I.D-ietf-nvo3-framework]
              Lasserre, M., "Framework for DC Network Virtualization",
              draft-ietf-nvo3-framework-00 (work in progress),
              September 2012.

   [I.D-wu-nvo3-nve2nve]
              Wu, Q., "Signaling control/forward plane information
              between network virtualization edges (NVEs)",
              draft-wu-nvo3-nve2nve-00 (work in progress), 2013.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

12.2.  Informative References

   [I.D-kompella-nvo3-server2nve]
              Kompella, K., "Using Signaling to Simplify Network
              Virtualization Provisioning",
              draft-kompella-nvo3-server2nve (work in progress),
              July 2012.

   [Phishing] "What is Phishing?",
              http://kea.hubpages.com/hub/What-is-Phishing.
Authors' Addresses

   Roland Schott
   Deutsche Telekom Laboratories
   Deutsche-Telekom-Allee 7
   Darmstadt  64295
   Germany

   Email: Roland.Schott@telekom.de

   Qin Wu
   Huawei
   101 Software Avenue, Yuhua District
   Nanjing, Jiangsu  210012
   China

   Email: sunseawq@huawei.com