idnits 2.17.1

draft-wu-nvo3-nve2nve-06.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

     No issues found here.

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line does
     not match the current year

  -- The document date (June 28, 2013) is 3953 days in the past.  Is this
     intentional?

  Checking references for intended status: Proposed Standard
  ----------------------------------------------------------------------------

     (See RFCs 3967 and 4897 for information about using normative
     references to lower-maturity documents in RFCs)

     No issues found here.

     Summary: 0 errors (**), 0 flaws (~~), 1 warning (==), 1 comment (--).

     Run idnits with the --verbose option for more detailed information
     about the items above.

--------------------------------------------------------------------------------

Network Virtualization Overlays Working Group                    D. Wang
Internet-Draft                                                     Q. Wu
Intended status: Standards Track                                  Huawei
Expires: December 30, 2013                                 June 28, 2013

 Proposed Control Plane requirements for Network Virtualization Overlays
                       draft-wu-nvo3-nve2nve-06

Abstract

   This document looks at control plane aspects of both the tenant
   system to NVE control interface and the NVE to Network
   Virtualization Authority (NVA) control interface that an NVE uses to
   enable communication between tenant systems, with a focus on the NVE
   to NVA control interface.  It is complementary to
   [draft-kreeger-nvo3-hypervisor-nve-cp], which describes the
   high-level control plane requirements for the interaction between a
   tenant system and an NVE when the two entities are not co-located on
   the same physical device.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 30, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Conventions Used in this Document
   3.  NVO3 Control Plane Overview
   4.  Tenant system information entry at the NVE and Network
       Virtualization Authority
     4.1.  Tenant system information entry fields relationship
     4.2.  Forwarding functionality at the tenant system
   5.  NVE to NVA Control Plane Protocol Functionality
     5.1.  NVE to VN connect/disconnect notification
     5.2.  Advertisement of Inner-outer address mapping associated
           with tenant system
     5.3.  Tenant system information distribution
     5.4.  VN context moving
   6.  Hypervisor-to-NVE Control Plane Protocol Functionality
     6.1.  Associate the NVE with VN context
     6.2.  Localized forwarding at the same local NVE
   7.  IANA Considerations
   8.  Security Considerations
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Appendix A.  Change Log
     A.1.  draft-wu-nvo3-nve2nve-06
     A.2.  draft-wu-nvo3-nve2nve-05
     A.3.  draft-wu-nvo3-nve2nve-04
   Authors' Addresses

1.  Introduction

   In [I.D-ietf-nvo3-overlay-problem-statement], two control planes are
   identified to realize an overlay solution:

   o  NVE to Network Virtualization Authority (NVA) control plane.

   o  Tenant system to NVE control plane.

   The NVE to NVA control plane deals with address mapping
   dissemination, while the tenant system to NVE control plane deals
   with VM attachment and detachment.

   In [I.D-ietf-nvo3-framework], three control plane components are
   defined to build these two control planes and provide the following
   capabilities:

   o  Auto-provisioning/service discovery

   o  Address advertisement

   o  Tunnel management

   In [I.D-fw-nvo3-server2vcenter], the control interface between the
   NVE and the Oracle backend system or Network Virtualization
   Authority (NVA) is defined to provide the following capabilities:

   o  Enforce the network policy for each VM in the path from the NVE
      edge associated with the VM to the Tenant End System.

   o  Populate the forwarding table in the path from the NVE edge
      associated with the VM to the Tenant End System in the data
      center.

   o  Populate the mapping table in each NVE edge that is in the
      virtual network across data centers under the control of the
      Director.
   This document focuses on control plane aspects of both the tenant
   system to NVE control interface and the NVE to NVA control interface
   that an NVE uses to enable communication between tenant systems.  It
   is complementary to [draft-kreeger-nvo3-hypervisor-nve-cp], which
   describes the high-level control plane requirements for the
   interaction between a tenant system and an NVE when the two entities
   are not co-located on the same physical device.

2.  Conventions Used in this Document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

   Tenant System:

      A physical or virtual system that can play the role of a host, or
      a forwarding element such as a router, switch, firewall, etc.  It
      belongs to a single tenant and connects to one or more VNs of
      that tenant.

   vNIC:

      A vNIC is similar to a physical NIC.  Each virtual machine has
      one or more vNIC adapters that it uses to communicate with both
      the virtual and physical networks.  Each vNIC has its own MAC
      address and can be assigned one or more IP addresses, just like a
      NIC found in a non-virtualized machine.

3.  NVO3 Control Plane Overview

                              ,---------.
                             ,' Backend  `.
                            (     NVA      )
                             `.           ,'
                              `-+-------+-'
                                |       |
                           .---------------.
                          (   L3 Overlay    )
                           '---------------'
                            //           \\
                      +-------+         +-------+
                   .--| NVE X |--.   .--| NVE Y |--.
                  (   +-------+   ) (   +-------+   )
                  (   DC Site X   ) (   DC Site Y   )
                   '--------------'  '--------------'
                     /         \       /         \
              '--------' '--------' '--------' '--------'
              : Tenant : : Tenant : : Tenant : : Tenant :
              : SystemA: : SystemB: : SystemC: : SystemD:
              '--------' '--------' '--------' '--------'

              Figure 1: Example NVO3 control plane overview

4.  Tenant system information entry at the NVE and Network
    Virtualization Authority

   Every NVE pair (local NVE and remote NVE) associated with a tenant
   system MUST maintain at least one mapping table entry for each
   currently attached tenant system (when a tenant system has multiple
   tenant system interfaces, there may be multiple mapping table
   entries corresponding to that tenant system).  In addition, the
   Network Virtualization Authority may also maintain a mapping table
   entry for each currently attached tenant system or each newly joined
   NVE.  Each mapping table entry corresponds to a tenant system
   connection to a VN or to one tenant system interface, and
   conceptually may contain all or a subset of the following fields:

   o  The tunnel interface identifier (tunnel-if-id) of the tunnel
      between the remote NVE and the local NVE where the tenant system
      is currently attached.  The tunnel interface identifier is
      acquired during tunnel creation.

   o  The MAC address of the attached tenant system.  This MAC address
      is obtained from the auto-discovery protocol between the tenant
      system and its local NVE.

   o  The IP address of the attached tenant system.  This IP address is
      obtained from the auto-discovery protocol between the tenant
      system and its local NVE.

   o  The logical interface identifier (e.g., VLAN ID, internal vSwitch
      interface ID connected to a tenant system) of the access link
      between the tenant system and the local NVE.  This field is
      required to associate the tenant system with the local NVE when
      the local NVE is external to the tenant system.  It is internal
      to the local NVE and is also used to associate the tunnel with
      the access link where the tenant system is attached.

   o  The MAC address of the local/remote NVE associated with the
      tenant system.

   o  The IP address of the local/remote NVE associated with the tenant
      system.

   o  The identifier of the VN context (VNID).  This identifier is
      obtained from the auto-discovery protocol between the tenant
      system and its local NVE.

   o  The lifetime for which NVEs keep table entries that are pushed to
      or pulled from NVAs.  When the table entries are pushed to the
      NVA, they should be given a relatively long lifetime; otherwise,
      they should be given a relatively short lifetime.

   o  The operation code of the tenant system.  The operations include
      shutdown, migration, startup, etc., and can be detected by the
      NVE on the access link between the tenant system and the NVE.
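   The following is a minimal, non-normative sketch (in Python) of how
   an implementation might represent such a mapping table entry at an
   NVE or NVA.  The field names, types, and default values are
   illustrative assumptions of this example, not elements defined by
   this document.

      from dataclasses import dataclass, field
      from typing import Optional
      import time

      @dataclass
      class TenantSystemEntry:
          # Identity of the attached tenant system (one entry per
          # TSI / VN connection).
          ts_mac: str                   # MAC learned via auto-discovery
          ts_ip: Optional[str]          # IP learned via auto-discovery
          vnid: int                     # VN context identifier
          logical_if_id: Optional[str]  # e.g., VLAN ID or vSwitch i/f ID
          tunnel_if_id: Optional[str]   # acquired during tunnel creation
          local_nve_mac: Optional[str] = None
          local_nve_ip: Optional[str] = None
          remote_nve_mac: Optional[str] = None
          remote_nve_ip: Optional[str] = None
          # Entry maintenance.
          lifetime: float = 300.0       # seconds; longer when pushed,
                                        # shorter when pulled
          op_code: str = "startup"      # "startup", "shutdown", "migration"
          created_at: float = field(default_factory=time.time)

          def expired(self) -> bool:
              # True once the entry has outlived its lifetime and should
              # be refreshed or removed.
              return time.time() - self.created_at > self.lifetime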
4.1.  Tenant system information entry fields relationship

   One tenant system corresponds to one VM.  Each tenant system that is
   a virtual system may have multiple vNIC adapters that it uses to
   communicate with both the virtual and physical networks.  The vNICs
   of a tenant system should belong to a single tenant.  Each vNIC must
   be assigned one unique MAC address.  A vNIC MAC address may be
   modified or replaced with a new MAC address at some point.  When
   multiple vNICs hosted in the same VM connect to multiple VNs, some
   of these vNICs may connect to different VNs through the same NVE.

   Each tenant system uses a TSI to interface with a VNI at the NVE via
   a VAP.  Each TSI can be identified by an identifier that the tenant
   system assigns to the TSI.  Each VAP can be identified by the
   logical interface identifier (e.g., VLAN ID, internal vSwitch
   interface ID connected to a VM) that the NVE assigns to the VAP.  In
   order to establish the network connection between the tenant system
   and the NVE and to associate the tenant system and the NVE with the
   same VN, the VNID should be used to correlate one TSI to one VAP
   belonging to the same VNI.

              +-------------------------+
              |                         |
              |   VM (Tenant System)    |
              |                         |
              +---+--------+--------+---+
                  |        |        |
                vNIC1    vNIC2    vNIC3
                  |        |        |
                 VN1    +--+---+   VN4
                        |      |
                       VN2    VN3

   Tenant System Interfaces (TSIs):

      TSIa [VNID1, MAC addr1] corresponding to vNIC1

      TSIb [VNID2, MAC addr2] corresponding to vNIC2

      TSIc [VNID3, MAC addr3] corresponding to vNIC2

      TSId [VNID4, MAC addr4] corresponding to vNIC3

              Figure 2: Tenant System information hierarchy

              +-------------------------+
              |                         |
              |           NVE           |
              |                         |
              | VNI1   VNI2   VNI3 VNIx |
              +---+------+------+------++
                  |      |      |
                VAP1   VAP2   VAP3
                  |      |      |
                  |      |      +---------+
                  |      |                |
                TSIa   TSIb             TSIc
              +---+------+---+      +----+------+
              |              |      |           |
              |    Tenant    |      |  Tenant   |
              |   System1    |      |  System2  |
              +--------------+      +-----------+

            Figure 3: Interfaces between tenant system and NVE
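   As a non-normative illustration of the TSI-to-VAP correlation
   described above, the following Python sketch pairs the TSIs reported
   by a tenant system with the VAPs configured on an NVE by matching
   their VNIDs.  The structures, names, and example values are
   assumptions made for this example only.

      from dataclasses import dataclass
      from typing import Dict, List

      @dataclass
      class TSI:
          tsi_id: str   # identifier assigned by the tenant system
          vnid: int     # VN context the TSI belongs to
          mac: str      # vNIC MAC address backing this TSI

      @dataclass
      class VAP:
          logical_if_id: str   # e.g., VLAN ID or vSwitch interface ID
          vnid: int            # VNI on the NVE this VAP attaches to

      def correlate(tsis: List[TSI], vaps: List[VAP]) -> Dict[str, str]:
          """Map each TSI to a VAP of the same VNI (matched by VNID)."""
          by_vnid: Dict[int, List[VAP]] = {}
          for vap in vaps:
              by_vnid.setdefault(vap.vnid, []).append(vap)
          binding: Dict[str, str] = {}
          for tsi in tsis:
              candidates = by_vnid.get(tsi.vnid, [])
              if candidates:
                  # Bind the TSI to the first unused VAP of that VNI.
                  binding[tsi.tsi_id] = candidates.pop(0).logical_if_id
          return binding

      # Example: TSIa/TSIb of Tenant System1 bound to VAP1/VAP2 as in
      # Figure 3.
      print(correlate(
          [TSI("TSIa", 1, "00:00:00:00:00:0a"),
           TSI("TSIb", 2, "00:00:00:00:00:0b")],
          [VAP("VAP1", 1), VAP("VAP2", 2), VAP("VAP3", 3)]))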
4.2.  Forwarding functionality at the tenant system

   When tenant system A plays the role of a forwarding element
   connecting two VNs, the following three cases should be considered:

   (a)  Both VNs support Layer 3 forwarding;

   (b)  Both VNs support Layer 2 forwarding;

   (c)  One VN supports Layer 3 forwarding and the other VN supports
        Layer 2 forwarding.

   For (a), tenant system A, or an external system that is close to
   tenant system A, should support Layer 3 forwarding functionality.
   When a source tenant system in one VN communicates with a
   destination tenant system in another VN through tenant system A, and
   tenant system A supports Layer 3 forwarding, tenant system A should
   forward IP packets on behalf of the source and destination tenant
   systems irrespective of the data plane encapsulation format (e.g.,
   VXLAN, NVGRE, MPLS over GRE).  If the two VNs use different data
   plane encapsulation formats, tenant system A should also support
   converting one data plane encapsulation format into the other.  If
   tenant system A does not support Layer 3 forwarding, the external
   system that is close to tenant system A should associate the TSI
   with the local NVE using information such as the VNID, the tenant
   system MAC address, and the VLAN tag, and should forward IP packets
   on behalf of the source and destination tenant systems.

   For (b), a vNIC of tenant system A, or an external system that is
   close to tenant system A, should support Layer 2 forwarding
   functionality.  When a source tenant system in one VN communicates
   with a destination tenant system in another VN through tenant system
   A, and tenant system A supports Layer 2 forwarding, tenant system A
   should know which tenant systems connected to it are allowed to use
   Layer 2 forwarding and should then forward Layer 2 frames on behalf
   of the source and destination tenant systems based on that
   forwarding allow list.  If the two Layer 2 VNs support different
   data plane encapsulation formats, tenant system A should also
   support converting one data plane encapsulation format to the other.
   If tenant system A does not support Layer 2 forwarding, the external
   system that is close to tenant system A should associate the TSI
   with the local NVE using information such as the VNID, the tenant
   system MAC address, and the VLAN tag, and should forward Layer 2
   frames on behalf of the source and destination tenant systems.

   For (c), tenant system A, or an external system that is close to
   tenant system A, should support both Layer 2 and Layer 3 forwarding.
   When a source tenant system in the Layer 2 VN communicates with a
   destination tenant system in the Layer 3 VN through tenant system A,
   and tenant system A supports both Layer 2 and Layer 3 forwarding,
   tenant system A should support translating a Layer 2 frame into a
   Layer 3 packet and should forward traffic between the Layer 2 VN and
   the Layer 3 VN.  If the two VNs support different data plane
   encapsulation formats, tenant system A should also support
   converting one data plane encapsulation format to the other.  If
   tenant system A does not support Layer 2 or Layer 3 forwarding, the
   external system that is close to tenant system A should associate
   the TSI with the local NVE using information such as the VNID, the
   tenant system MAC address, and the VLAN tag, and should forward
   traffic on behalf of the source and destination tenant systems.
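   The three cases above can be viewed, non-normatively, as a dispatch
   on the forwarding type of each VN and on the capabilities of tenant
   system A.  The Python sketch below is purely illustrative; the type
   names and capability flags are assumptions of this example.

      from enum import Enum

      class FwdType(Enum):
          L2 = 2
          L3 = 3

      def select_forwarder(vn1: FwdType, vn2: FwdType,
                           ts_supports_l2: bool,
                           ts_supports_l3: bool) -> str:
          """Return which element forwards between the two VNs."""
          if vn1 == FwdType.L3 and vn2 == FwdType.L3:    # case (a)
              capable = ts_supports_l3
          elif vn1 == FwdType.L2 and vn2 == FwdType.L2:  # case (b)
              capable = ts_supports_l2
          else:                                          # case (c): L2<->L3
              capable = ts_supports_l2 and ts_supports_l3
          if capable:
              return ("tenant system A forwards, converting the "
                      "encapsulation if the formats differ")
          # Otherwise the nearby external system associates the TSI with
          # the local NVE (using VNID, TS MAC address, and VLAN tag) and
          # forwards on the tenant systems' behalf.
          return "external system close to tenant system A forwards"

      print(select_forwarder(FwdType.L2, FwdType.L3,
                             ts_supports_l2=True, ts_supports_l3=False))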
   When tenant system A plays the role of an interconnection function
   between a VN and a non-VN environment, suppose a source tenant
   system in the VN communicates with a destination end device in the
   non-VN environment through tenant system A.  In this case tenant
   system A acts as an NVO3 gateway between the VN and the non-VN
   environment, peering with other gateways.  It should be explicitly
   configured with a list of destination MAC addresses that are allowed
   to pass to the non-VN environment and should perform translation
   between the VNID and the non-VN label when forwarding traffic
   between the VN interface and the non-VN interface.  For outgoing
   frames on the VN-connected interface, tenant system A decapsulates
   the NVO3 outer header and forwards the inner frame to the non-VN
   environment based on the configured allow list.  For incoming frames
   on the non-VN-connected interface (e.g., a WAN interface), tenant
   system A should map the incoming frames from the end device to the
   specific VN based on inner Ethernet frame information (e.g., VLAN
   ID).  A mapping table is set up at tenant system A to perform VNID
   lookup on the VN side and label lookup on the non-VN side.

5.  NVE to NVA Control Plane Protocol Functionality

   The core functional entities for the NVE to NVA control plane
   infrastructure are the NVE and the Network Virtualization Authority.
   The Network Virtualization Authority is responsible for maintaining
   tenant system reachability state and is the topological anchor point
   for tenant system information (e.g., tenant system MAC address, IP
   address, VN name, VNID, local NVE addresses, remote NVE addresses).
   There can be multiple NVAs in a VN, each serving a different group
   of tenant systems.  The NVA is the entity that performs inner-outer
   address mapping and VN name to VNID mapping management at the
   request of the NVE, and it resides on the NVE or on an external
   network device separate from the NVE.  The NVE is responsible for
   detecting tenant system operations (e.g., shutdown, migration,
   startup) on the access link between the tenant system and the NVE
   and for advertising the VN context information associated with the
   tenant system to the NVA.

5.1.  NVE to VN connect/disconnect notification

   When a tenant system connects to a VN by attaching to a local NVE,
   the tenant system should inform the attached local NVE which VN
   context the tenant system belongs to.  The local NVE should then be
   added into the VN context together with the tenant system
   information, and the VN membership should be reported to the Network
   Virtualization Authority (NVA).  This helps the Network
   Virtualization Authority know to which NVE a group of tenant systems
   is attached, i.e., the current location of these tenant systems.
   When the last tenant system is disconnected from a VN through a
   local NVE, this local NVE should be removed from the VN context.
   This change should also be reported to the Network Virtualization
   Authority so that it knows there are no longer any tenant systems
   associated with that NVE.
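   The following non-normative Python sketch shows one way an NVA could
   track VN membership from such connect/disconnect notifications.  The
   message fields, identifiers, and function names are assumptions of
   this example, not protocol elements defined here.

      from collections import defaultdict
      from typing import Dict, Set

      class NvaMembership:
          """Tracks which NVEs belong to which VN, driven by NVE
          notifications."""

          def __init__(self) -> None:
              # vnid -> nve_id -> set of attached tenant system MACs
              self.members: Dict[int, Dict[str, Set[str]]] = defaultdict(dict)

          def ts_connect(self, vnid: int, nve_id: str, ts_mac: str) -> None:
              # The first TS behind this NVE for this VN implicitly adds
              # the NVE to the VN context.
              self.members[vnid].setdefault(nve_id, set()).add(ts_mac)

          def ts_disconnect(self, vnid: int, nve_id: str, ts_mac: str) -> None:
              tss = self.members[vnid].get(nve_id, set())
              tss.discard(ts_mac)
              if not tss:
                  # Last TS gone: remove the NVE from the VN context.
                  self.members[vnid].pop(nve_id, None)

          def nves_in_vn(self, vnid: int) -> Set[str]:
              return set(self.members[vnid].keys())

      nva = NvaMembership()
      nva.ts_connect(100, "nve-x", "00:00:00:00:00:0a")
      nva.ts_connect(100, "nve-y", "00:00:00:00:00:0c")
      nva.ts_disconnect(100, "nve-y", "00:00:00:00:00:0c")
      print(nva.nves_in_vn(100))   # {'nve-x'}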
5.2.  Advertisement of Inner-outer address mapping associated with
      tenant system

   In order to enable tenant system A to communicate with any tenant
   system that is not under the same local NVE, the inner-outer address
   mapping (destination tenant system L2/L3 address to remote NVE,
   i.e., egress tunnel endpoint) that maps a final destination address
   to the proper tunnel should be distributed to all the remote NVEs
   that belong to the same VN, even if there is no tenant system behind
   a given remote NVE that communicates with tenant system A.
   Alternatively, the inner-outer address mapping can be distributed to
   the NVA, and the NVA then advertises the inner-outer address mapping
   to the corresponding remote NVEs according to the VN membership
   established using the NVE connect/disconnect to VN notification.
   When the NVA is embedded within the NVE, there is no need for a
   standardized protocol between the NVE and NVA, as the interaction is
   implemented in software on a single device.  For inner-outer address
   mapping distribution to NVEs, one approach is to use the BGP
   protocol to advertise such mapping information; alternatively, each
   NVE may push its inner-outer mappings to the Network Virtualization
   Authority using the NVE connect/disconnect to VN notification or
   another NVE-to-NVA protocol.

5.3.  Tenant system information distribution

   Data plane learning can be used to build the mapping table without
   the need for a control plane protocol.  However, it requires each
   data packet to be flooded to the whole VN.  In order to eliminate
   the flooding introduced by data plane learning, a control protocol
   is needed to provide the inner-outer address mapping and other
   information associated with the tenant system from the NVA to the
   corresponding NVE.  One approach to tenant system information
   distribution is for the NVA to push the tenant system information to
   the NVE.  If the destination of a packet arriving at the NVE cannot
   be found in the inner-outer mapping table pushed down from the NVA,
   the NVE could be configured to simply drop the data frame, or to
   flood it to all other remote NVEs that belong to the same VN if the
   NVE knows the VN membership.  If an NVE loses its connectivity to
   its NVA, it MUST ignore any data previously pushed from that NVA,
   because the pushed data may be outdated or no longer valid.  When
   multiple NVAs hold the mapping information for the tenant systems in
   the VN and push the same tenant system information to the same NVE
   (i.e., a conflict occurs), the pushed data can be tagged with
   different priorities, and the higher-priority data takes precedence.
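   The push model above can be illustrated with a small, non-normative
   sketch of the NVE-side mapping table: lookups that miss are either
   dropped or flooded, and conflicting pushes from multiple NVAs are
   resolved by priority.  Names and behavioral details are assumptions
   of this example.

      from typing import Dict, Optional, Set, Tuple

      class NvePushTable:
          """Inner-outer mapping table populated by NVA pushes."""

          def __init__(self, flood_on_miss: bool = False) -> None:
              self.flood_on_miss = flood_on_miss
              # (vnid, inner dest MAC/IP) -> (remote NVE address, priority)
              self.mappings: Dict[Tuple[int, str], Tuple[str, int]] = {}
              # vnid -> remote NVEs in the VN, if membership is known
              self.vn_members: Dict[int, Set[str]] = {}

          def push(self, vnid: int, inner_dst: str, remote_nve: str,
                   priority: int = 0) -> None:
              current = self.mappings.get((vnid, inner_dst))
              # Higher-priority data takes precedence when two NVAs push
              # conflicting entries for the same destination.
              if current is None or priority >= current[1]:
                  self.mappings[(vnid, inner_dst)] = (remote_nve, priority)

          def forward(self, vnid: int, inner_dst: str) -> Optional[Set[str]]:
              entry = self.mappings.get((vnid, inner_dst))
              if entry:
                  return {entry[0]}                 # unicast to egress NVE
              if self.flood_on_miss and vnid in self.vn_members:
                  return self.vn_members[vnid]      # flood to the whole VN
              return None                           # drop

          def nva_connectivity_lost(self) -> None:
              # Pushed data may be outdated once the NVA is unreachable;
              # ignore it.
              self.mappings.clear()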
   Alternatively, the NVE can send a pull request to the NVA for the
   tenant system information.  Each pull request can carry multiple
   queries for different tenant systems.  The pull request can be
   triggered when an edge node (NVE) receives an ingress data frame
   with a destination whose attached edge (NVE) is unknown, or when the
   edge node (NVE) receives an ingress ARP/ND request for a target
   whose link address (MAC) or attached edge (NVE) is unknown.  The NVA
   may be configured to prohibit some NVEs from getting tenant system
   information from the NVA, or the NVA may not have an entry matching
   the tenant system information requested by the NVE.  In those cases
   the NVA can indicate in the pull response that the target being
   queried is not available or that the NVE is not allowed to access
   the information; otherwise, the NVA should return the valid
   inner-outer address mapping together with a validity timer
   indicating how long the entry can be cached by the edge (NVE).
   While waiting for the query response from the NVA, the NVE has to
   buffer the subsequent data packets destined to the same target; the
   buffer could overflow before the NVE gets the response from the NVA.
   If no response is received from the NVA within a configurable
   timeout, the request should be retransmitted up to a configurable
   number of times.  When a cached entry pulled from the NVA expires,
   both the NVE and the NVA should remove the invalid cached entry.
   When a tenant system is detached from one NVE and moves to another,
   the inner-outer address mapping established at the previous NVE is
   no longer valid.  In such cases, the inner-outer address mapping
   should be removed from the previous NVE, and a new inner-outer
   mapping should be created at the new NVE to which the tenant system
   is currently attached.  Such inner-outer mappings should also be
   updated at the NVA.  If an NVE loses its connectivity to its NVA,
   the cached entries should be removed from the NVE to which the
   tenant system is currently attached.
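   As a non-normative illustration of the pull model, the sketch below
   shows an NVE-side resolver that retransmits a query a configurable
   number of times, caches a successful answer for the returned
   validity time, and buffers packets while the query is outstanding.
   The transport, message format, and names are assumptions of this
   example.

      import time
      from typing import Callable, Dict, Optional, Tuple

      class NvePullResolver:
          def __init__(self,
                       query_nva: Callable[[int, str, float],
                                           Optional[Tuple[str, float]]],
                       timeout: float = 1.0, max_retries: int = 3,
                       buffer_limit: int = 64) -> None:
              # query_nva(vnid, inner_dst, timeout) returns
              # (remote_nve, valid_time) or None on timeout/denial.
              self.query_nva = query_nva
              self.timeout = timeout
              self.max_retries = max_retries
              self.buffer_limit = buffer_limit
              self.cache: Dict[Tuple[int, str], Tuple[str, float]] = {}
              self.pending: Dict[Tuple[int, str], list] = {}

          def resolve(self, vnid: int, inner_dst: str) -> Optional[str]:
              key = (vnid, inner_dst)
              cached = self.cache.get(key)
              if cached and cached[1] > time.time():
                  return cached[0]
              self.cache.pop(key, None)        # expired entries are removed
              # One initial request plus a configurable number of retries.
              for _ in range(self.max_retries + 1):
                  answer = self.query_nva(vnid, inner_dst, self.timeout)
                  if answer is not None:
                      remote_nve, valid_time = answer
                      self.cache[key] = (remote_nve, time.time() + valid_time)
                      return remote_nve
              return None            # target unavailable or access denied

          def buffer_while_pending(self, vnid: int, inner_dst: str,
                                   packet: bytes) -> bool:
              # Packets to the same target are buffered while the query is
              # outstanding; the buffer may overflow before the response
              # arrives.
              q = self.pending.setdefault((vnid, inner_dst), [])
              if len(q) >= self.buffer_limit:
                  return False
              q.append(packet)
              return True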
5.4.  VN context moving

   In some cases, a tenant system may be detached from one NVE and
   moved to another NVE.  In such cases, the VN context should be moved
   from the NVE to which the tenant system was previously attached to
   the new NVE to which the tenant system is currently attached.  In
   order to achieve this, the per-tenant-system VN context, including
   the VN profile, can be maintained at the Network Virtualization
   Authority and be retrieved at the new location based on the VN
   Identifier (VNID).

6.  Hypervisor-to-NVE Control Plane Protocol Functionality

6.1.  Associate the NVE with VN context

   The VN context includes a set of configuration attributes defining
   access and tunnel policies and (L2 and/or L3) forwarding functions.
   When a tenant system is attached to a local NVE, a VN network
   instance should be allocated on the local NVE.  The tenant system
   should be associated with the specific VN context using a Virtual
   Network Instance (VNI).  The tenant system should also inform the
   attached local NVE which VN context the tenant system belongs to.
   The VN context can therefore be bound to the data path from the
   tenant system to the local NVE and to the tunnels from the local NVE
   associated with the tenant system to all the remote NVEs that belong
   to the same VN as the local NVE.  For the data path between the
   tenant system and the local NVE, the network policy can be installed
   on the underlying switched network, and forwarding tables can be
   populated on each network element in the underlying network based on
   the specific VNI associated with the tenant system.  For the tunnels
   from the local NVE to the remote NVEs, traffic engineering
   information can be applied to each tunnel based on the VNI
   associated with the tenant system.

6.2.  Localized forwarding at the same local NVE

   In some cases, two tenant systems may be attached to the same local
   NVE.  In order to allow the NVE to locally forward traffic between
   two tenant systems that are attached to the same NVE, the
   inner-outer address mapping that maps a final destination address to
   the proper tunnel should be populated at the local NVE.

   In some cases, two tenant systems may connect to different VNs
   through the same interconnection function (a Data Center Gateway).
   In order to allow the two tenant systems to communicate between the
   two VNs, the mapping table that maps a final destination address to
   the proper tunnel should be populated both in the NVEs associated
   with the two communicating tenant systems and in the NVE associated
   with the interconnection function.  In this case, the
   interconnection function may trigger the two NVEs associated with
   the two tenant systems to establish a tunnel directly, allowing
   traffic between these two tenant systems to bypass the
   interconnection function itself.
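   To illustrate the localized forwarding described above, the
   non-normative sketch below shows an NVE forwarding decision: if the
   destination tenant system is attached to the same NVE, the frame is
   delivered locally over the corresponding VAP; otherwise it is
   encapsulated toward the remote NVE found in the inner-outer mapping
   table.  All names and addresses are assumptions of this example.

      from typing import Dict, Optional, Tuple

      class LocalNve:
          def __init__(self, nve_ip: str) -> None:
              self.nve_ip = nve_ip
              # (vnid, dst MAC) -> local VAP for locally attached TSs
              self.local_ts: Dict[Tuple[int, str], str] = {}
              # (vnid, dst MAC) -> remote NVE IP from the mapping table
              self.remote_map: Dict[Tuple[int, str], str] = {}

          def forward(self, vnid: int, dst_mac: str) -> Optional[str]:
              key = (vnid, dst_mac)
              if key in self.local_ts:
                  # Both tenant systems are attached to this NVE: forward
                  # locally, no tunnel needed.
                  return f"deliver locally via {self.local_ts[key]}"
              if key in self.remote_map:
                  # Encapsulate and send over the tunnel to the egress NVE.
                  return (f"tunnel {self.nve_ip} -> "
                          f"{self.remote_map[key]} (VNID {vnid})")
              return None   # unknown: drop, flood, or pull from the NVA

      nve = LocalNve("192.0.2.1")
      nve.local_ts[(100, "00:00:00:00:00:0b")] = "VAP2"
      nve.remote_map[(100, "00:00:00:00:00:0c")] = "192.0.2.2"
      print(nve.forward(100, "00:00:00:00:00:0b"))
      print(nve.forward(100, "00:00:00:00:00:0c"))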
7.  IANA Considerations

   This document has no actions for IANA.

8.  Security Considerations

   TBC.

9.  References

9.1.  Normative References

   [I.D-ietf-nvo3-framework]
              Lasserre, M., "Framework for DC Network Virtualization",
              draft-ietf-nvo3-framework-00, September 2012.

   [I.D-ietf-nvo3-overlay-problem-statement]
              Narten, T., "Problem Statement: Overlays for Network
              Virtualization",
              draft-ietf-nvo3-overlay-problem-statement-02,
              February 2013.

   [I.D-kreeger-nvo3-hypervisor-nve-cp]
              Kreeger, L., "Network Virtualization Hypervisor-to-NVE
              Overlay Control Protocol Requirements",
              draft-kreeger-nvo3-hypervisor-nve-cp-01, February 2013.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", March 1997.

9.2.  Informative References

   [I.D-fw-nvo3-server2vcenter]
              Wu, Q. and R. Scott, "Network Virtualization
              Architecture", draft-fw-nvo3-server2vcenter-01,
              January 2013.

Appendix A.  Change Log

   Note to the RFC-Editor: please remove this section prior to
   publication as an RFC.

A.1.  draft-wu-nvo3-nve2nve-06

   The following are the major changes relative to the previous
   version:

   o  Remove Section 7.

   o  Add new subsection 5.1 to discuss NVE to VN connect/disconnect
      notification.

   o  Add new subsection 5.2 to discuss advertisement of the
      inner-outer address mapping associated with a tenant system.

   o  Add new subsection 5.3 to discuss tenant system information
      distribution.

   o  Add new subsection 6.1 to discuss associating the NVE with the VN
      context.

   o  Add new subsection 6.2 to discuss localized forwarding at the
      same local NVE.

A.2.  draft-wu-nvo3-nve2nve-05

   The following are the major changes relative to the previous
   version:

   o  Remove the distinction between pNIC and vNIC and restrict the
      tenant system to one that is a virtual system and has vNICs.

   o  Add one new figure and use the VAP and TSI to establish the
      association between a tenant system and an NVE that belong to the
      same VN.

   o  Delete the Oracle Backend System term.

   o  Replace interconnection functionality with forwarding
      functionality.

A.3.  draft-wu-nvo3-nve2nve-04

   The following are the major changes relative to the previous
   version:

   o  Reword some occurrences of vNIC in the document as TSI.

   o  Clarify the relation between VM, Tenant System, and TSI, and
      distinguish networks and network elements from the identifiers
      for networks or network elements.

   o  Distinguish pNIC from vNIC.

   o  Use a TSI identifier to identify each TSI.

   o  Support multiple TSIs for multiple simultaneous connections and
      use a BID to distinguish different TSIs belonging to the same
      vNIC.

Authors' Addresses

   Danhua Wang
   Huawei
   101 Software Avenue, Yuhua District
   Nanjing, Jiangsu  210012
   China

   Email: wangdanhua@huawei.com


   Qin Wu
   Huawei
   101 Software Avenue, Yuhua District
   Nanjing, Jiangsu  210012
   China

   Email: bill.wu@huawei.com