Network Working Group                                       B. Sarikaya
Internet-Draft                                      Denpel Informatique
Intended status: Best Current Practice                        L. Dunbar
Expires: February 10, 2019                                   Huawei USA
                                                          B. Khasnabish
                                                          ZTE (TX) Inc.
                                                              T. Herbert
                                                              Quantonium
                                                              S. Dikshit
                                                           Cisco Systems
                                                          August 9, 2018

    Virtual Machine Mobility Protocol for L2 and L3 Overlay Networks
                        draft-ietf-nvo3-vmm-04.txt

Abstract

   This document describes a virtual machine mobility protocol commonly
   used in data centers built with an overlay-based network
   virtualization approach.  For Layer 2, it is based on a Network
   Virtualization Authority (NVA)-to-Network Virtualization Edge (NVE)
   protocol that updates Address Resolution Protocol (ARP) tables or
   neighbor cache entries at the NVA, with the source NVE tunneling
   in-flight packets to the destination NVE after the virtual machine
   moves from the source NVE to the destination NVE.  For Layer 3, it
   is based on address and connection migration after the move.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on February 10, 2019.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
   2.  Conventions and Terminology . . . . . . . . . . . . . . . . .   3
   3.  Requirements  . . . . . . . . . . . . . . . . . . . . . . . .   4
   4.  Overview of the protocol  . . . . . . . . . . . . . . . . . .   4
     4.1.  VM Migration  . . . . . . . . . . . . . . . . . . . . . .   5
     4.2.  Task Migration  . . . . . . . . . . . . . . . . . . . . .   6
       4.2.1.  Address and Connection Migration in Task Migration  .   7
   5.  Handling Packets in Flight  . . . . . . . . . . . . . . . . .   8
   6.  Moving Local State of VM  . . . . . . . . . . . . . . . . . .   9
   7.  Handling of Hot, Warm and Cold Virtual Machine Mobility . . .   9
   8.
  Virtual Machine Operation  . . . . . . . . . . . . . . . . .  10
     8.1.  Virtual Machine Lifecycle Management  . . . . . . . . . .  10
   9.  Security Considerations . . . . . . . . . . . . . . . . . . .  10
   10. IANA Considerations . . . . . . . . . . . . . . . . . . . . .  11
   11. Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  11
   12. Change Log  . . . . . . . . . . . . . . . . . . . . . . . . .  11
   13. References  . . . . . . . . . . . . . . . . . . . . . . . . .  11
     13.1.  Normative References . . . . . . . . . . . . . . . . . .  11
     13.2.  Informative references . . . . . . . . . . . . . . . . .  12
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  12

1.  Introduction

   Data center networks are increasingly being used by telecom
   operators as well as by enterprises.  In this document we are
   interested in overlay-based data center networks supporting
   multitenancy.  These networks are organized as one large Layer 2
   network geographically distributed in several buildings.  In some
   cases the geographical distribution spans across Layer 2 boundaries.
   In that case, connectivity across the Layer 2 boundaries is needed;
   it can be achieved by the network virtualization edge (NVE)
   functioning as a Layer 3 gateway routing across bridging domains,
   such as in Warehouse Scale Computers (WSC).

   Virtualization, which is used in almost all of today's data centers,
   enables many virtual machines to run on a single physical computer
   or compute server.  Virtual machines (VMs) need a hypervisor running
   on the physical compute server to provide them with shared
   processor/memory/storage.  Network connectivity is provided by the
   network virtualization edge (NVE) [RFC8014].  Being able to move VMs
   dynamically (live migration) from one server to another allows for
   dynamic load balancing and work distribution, and is therefore a
   highly desirable feature [RFC7364].
   There are many challenges and requirements related to the migration,
   mobility, and interconnection of Virtual Machines (VMs) and Virtual
   Network Elements (VNEs).  Retaining IP addresses after a move is a
   key requirement [RFC7364]; it is needed in order to maintain
   existing transport connections.

   In L3-based data networks, retaining IP addresses after a move is
   simply not possible.  This introduces complexity in IP address
   management, and as a result transport connections need to be
   reestablished.

   In view of the many virtual machine mobility schemes that exist
   today, there is a desire to define a standard control plane protocol
   for virtual machine mobility.  The protocol should be based on IPv4
   or IPv6.  In this document we specify such a protocol for Layer 2
   and Layer 3 data networks.

2.  Conventions and Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119]
   and [RFC8014].

   This document uses the terminology defined in [RFC7364].  In
   addition, we make the following definitions:

   Tasks.  Tasks are the generalization of virtual machines.  Tasks in
   containers that can be migrated correspond to the virtual machines
   that can be migrated.  We use task and virtual machine
   interchangeably in this document.

   Hot VM Mobility.  A given VM could be moved from one server to
   another in running state.

   Warm VM Mobility.  In case of warm VM mobility, the VM states are
   mirrored to the secondary server (or domain) at predefined
   (configurable) regular intervals.  This reduces the overhead and
   complexity, but it may also lead to a situation where the two
   servers do not contain the exact same data (state information).

   Cold VM Mobility.
  A given VM could be moved from one server to
   another in stopped or suspended state.

   Source NVE refers to the old NVE, where packets were forwarded
   before the migration.

   Destination NVE refers to the new NVE after the migration.

   Packets in flight refers to packets received by the source NVE, sent
   by correspondents that still have an old ARP or neighbor cache entry
   from before the VM or task migration.

   Users of VMs in diskless systems or systems not using configuration
   files are called end-user clients.

3.  Requirements

   This section states the requirements on data center network virtual
   machine mobility.

   The data center network SHOULD support virtual machine mobility in
   IPv6.

   IPv4 SHOULD also be supported in virtual machine mobility.

   The virtual machine mobility protocol MAY support host routes to
   accomplish virtualization.

   The virtual machine mobility protocol SHOULD NOT support triangular
   routing except for handling packets in flight.

   The virtual machine mobility protocol SHOULD NOT need to use
   tunneling except for handling packets in flight.

4.  Overview of the protocol

   The Layer 2 and Layer 3 protocols are described next.  In the
   following sections, we examine more advanced features.

4.1.  VM Migration

   Being able to move virtual machines dynamically from one server to
   another allows for dynamic load balancing and work distribution, and
   is therefore a highly desirable feature.  In a Layer 2 based data
   center approach, a virtual machine moving to another server does not
   change its IP address.  Because of this, an IP-based virtual machine
   mobility protocol is not needed.  However, when a virtual machine
   moves, NVEs need to update the caches that associate the VM's
   Layer 2 or Medium Access Control (MAC) address with the NVE's IP
   address.  Such a change enables the NVE to send outgoing MAC frames
   addressed to the virtual machine.
  VM movement across Layer 3 boundaries is not
   typical, but the same solution applies if the VM moves within the
   same link, such as in WSCs.

   A virtual machine moves from its source NVE to a new, destination
   NVE.  After the move, the virtual machine's IP address(es) do not
   change, but the virtual machine is now under a new NVE; previously
   communicating NVEs will continue to send their packets to the source
   NVE.  The Address Resolution Protocol (ARP) cache in IPv4 [RFC0826]
   or the neighbor cache in IPv6 [RFC4861] in those NVEs needs to be
   updated.

   It may take some time to refresh the ARP/ND cache when a VM is moved
   to a new destination NVE.  During this period, a tunnel is needed so
   that the source NVE forwards packets to the destination NVE.

   In IPv4, immediately after the move the virtual machine should send
   a gratuitous ARP request message containing its IPv4 address and its
   Layer 2 (MAC) address at its new, destination NVE.  This message's
   destination address is the broadcast address.  The source NVE
   receives this message and should update the VM's ARP entry in the
   central directory at the NVA: it asks the NVA to update its mappings
   to record the IPv4 address of the moving VM along with the VM's MAC
   address and the NVE's IPv4 address.  An NVE-to-NVA protocol is used
   for this purpose [RFC8014].

   Reverse ARP (RARP), which enables a host to discover its IPv4
   address when it boots from a local server [RFC0903], is not used by
   VMs because a VM already knows its IPv4 address: an IPv4/IPv6
   address is assigned to a newly created VM, possibly using the
   Dynamic Host Configuration Protocol (DHCP).  Next, we describe a
   case where RARP is used.

   There are some vendor deployments (diskless systems or systems
   without configuration files) wherein VM users, i.e., end-user
   clients, ask for the same MAC address upon migration.
  This can be
   achieved by the client sending a RARP request that carries the old
   MAC address, looking for an IP address allocation.  The server, in
   this case the new (destination) NVE, needs to communicate with the
   NVA, just as in the gratuitous ARP case, to ensure that the same
   IPv4 address is assigned to the VM.  The NVA uses the MAC address as
   the key to search its ARP cache for the IP address and reports it to
   the new NVE, which in turn sends the RARP reply.  This completes IP
   address assignment to the migrating VM.

   All NVEs communicating with this virtual machine still use the old
   ARP entry.  If any VM behind those NVEs needs to talk to the VM now
   at the destination NVE, it uses the old ARP entry, and the packets
   are therefore delivered to the source NVE.  The source NVE MUST
   tunnel these in-flight packets to the destination NVE.

   When an ARP entry in those VMs times out, their corresponding NVEs
   should access the NVA for an update.

   IPv6 operation is slightly different:

   In IPv6, immediately after the move the virtual machine sends an
   unsolicited Neighbor Advertisement message containing its IPv6
   address and its Layer 2 (MAC) address at its new, destination NVE.
   This message is sent to the IPv6 Solicited-Node Multicast Address
   corresponding to the target address, which is the VM's IPv6 address.
   The NVE receives this message and should update the VM's neighbor
   cache entry in the central directory of the NVA: the VM's IPv6
   address, the VM's MAC address, and the NVE's IPv6 address are
   recorded in the entry.  An NVE-to-NVA protocol is used for this
   purpose [RFC8014].

   All NVEs communicating with this virtual machine still use the old
   neighbor cache entry.  If any VM behind those NVEs needs to talk to
   the VM now at the destination NVE, it uses the old neighbor cache
   entry, and the packets are therefore delivered to the source NVE.
   The source NVE MUST tunnel these in-flight packets to the
   destination NVE.
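   The NVA mapping update and in-flight tunneling behavior described
   above can be sketched as follows.  This is an illustrative sketch
   only, not part of any NVO3 protocol specification: the class and
   method names (NVA, update_mapping, SourceNVE.forward) and the
   dictionary packet representation are hypothetical.

```python
class NVA:
    """Central directory mapping a VM's IP address to its current NVE."""
    def __init__(self):
        self.mappings = {}  # vm_ip -> (vm_mac, nve_ip)

    def update_mapping(self, vm_ip, vm_mac, nve_ip):
        # Called when an NVE reports a gratuitous ARP (IPv4) or an
        # unsolicited Neighbor Advertisement (IPv6) from a moved VM.
        self.mappings[vm_ip] = (vm_mac, nve_ip)

    def lookup(self, vm_ip):
        return self.mappings.get(vm_ip)


class SourceNVE:
    """Old NVE: tunnels in-flight packets to the destination NVE."""
    def __init__(self, nva, own_ip):
        self.nva = nva
        self.own_ip = own_ip

    def forward(self, packet):
        vm_mac, current_nve = self.nva.lookup(packet["dst_ip"])
        if current_nve != self.own_ip:
            # VM has moved: encapsulate and tunnel to the destination NVE
            return {"outer_dst": current_nve, "inner": packet}
        return packet  # VM is still local: deliver directly


# The destination NVE records the VM's new location with the NVA.
nva = NVA()
nva.update_mapping("2001:db8::10", "00:11:22:33:44:55", "192.0.2.1")  # before move
nva.update_mapping("2001:db8::10", "00:11:22:33:44:55", "192.0.2.2")  # after move

# An in-flight packet arriving at the old (source) NVE gets tunneled.
old_nve = SourceNVE(nva, own_ip="192.0.2.1")
tunneled = old_nve.forward({"dst_ip": "2001:db8::10", "payload": b"in-flight"})
```

   In this sketch the NVA acts purely as a lookup directory; a real NVE
   would also install the tunnel state learned from the NVA and tear it
   down once in-flight traffic ceases, as described in Section 5.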
   When a neighbor cache entry in those VMs times out, their
   corresponding NVEs should access the NVA for an update.

4.2.  Task Migration

   Virtualization in L2-based data center networks quickly becomes
   prohibitive because ARP/neighbor caches don't scale.  Scaling can be
   accomplished seamlessly in L3 data center networks by simply giving
   each virtual network an IP subnet and a default route that points to
   the NVE.  This avoids the explosion of ARP/neighbor caches in VMs
   and NVEs (just one ARP/neighbor cache entry for the default route),
   and there is no need to carry an Ethernet header in the
   encapsulation [RFC7348], which saves at least 16 bytes.

   In L3-based data center networks, since the IP address of a task has
   to change after a move, an IP-based task migration protocol is
   needed.  The protocol most commonly used is Identifier-Locator
   Addressing (ILA) [I-D.herbert-nvo3-ila].  Address migration and
   connection migration introduce complications in a task migration
   protocol, as we discuss below; in particular, informing the
   communicating hosts of the migration becomes a major issue.  Also,
   in L3-based networks, because broadcast is not available, the
   multicast of neighbor solicitations in IPv6 would need to be
   emulated.

   Task migration involves the following steps:

   Stop running the task.

   Package the runtime state of the task.

   Send the runtime state of the task to the destination NVE where the
   task is to run.

   Instantiate the task's state on the new machine.

   Start the task, continuing from the point at which it was stopped.

   Address migration and connection migration in moving tasks are
   addressed next.

4.2.1.  Address and Connection Migration in Task Migration

   Address migration is achieved as follows:

   Configure the IPv4/IPv6 address on the target host.

   Suspend use of the address on the old host.  This includes handling
   established connections.
  A state may be established to drop
   packets, or to send ICMPv4 or ICMPv6 Destination Unreachable
   messages, when packets to the migrated address are received.

   Push the new mapping to hosts.  Communicating hosts will learn of
   the new mapping via a control plane, either by participating in a
   protocol for mapping propagation or by getting the new mapping from
   a central database such as the Domain Name System (DNS).

   Connection migration involves reestablishing the task's existing TCP
   connections at the new location.

   The simplest course of action is to drop TCP connections across a
   migration.  Since migrations should be relatively rare events, it is
   conceivable that TCP connections could be automatically closed in
   the network stack during a migration event.  If the applications
   running are known to handle this gracefully (i.e., to reopen dropped
   connections), this may be viable.

   A more involved approach to connection migration entails pausing the
   connection, packaging its state and sending it to the target,
   instantiating the connection state in the peer stack, and restarting
   the connection.  From the time the connection is paused to the time
   it is running again in the new stack, packets received for the
   connection should be silently dropped.  For some period of time, the
   old stack will need to keep a record of the migrated connection; if
   it receives a packet, it should either silently drop it or forward
   it to the new location, as in Section 5.

5.  Handling Packets in Flight

   The source hypervisor may receive packets from the virtual machine's
   ongoing communications.  These packets should not be lost; they
   should be sent to the destination hypervisor to be delivered to the
   virtual machine.
  The steps involved in handling packets in flight
   are as follows:

   Preparation Step  It takes some time, possibly a few seconds, for a
      VM to move from its source hypervisor to a new destination one.
      During this period, a tunnel needs to be established so that the
      source NVE forwards packets to the destination NVE.

   Tunnel Establishment - IPv6  In-flight packets are tunneled to the
      destination NVE using an encapsulation protocol such as VXLAN in
      IPv6.  The source NVE gets the destination NVE's address from the
      NVA in the request to move the virtual machine.

   Tunnel Establishment - IPv4  In-flight packets are tunneled to the
      destination NVE using an encapsulation protocol such as VXLAN in
      IPv4.  The source NVE gets the destination NVE's address from the
      NVA when the NVA requests the NVE to move the virtual machine.

   Tunneling Packets - IPv6  IPv6 packets received for the migrating
      virtual machine are encapsulated in an IPv6 header at the source
      NVE.  The destination NVE decapsulates the packet and sends the
      IPv6 packet to the migrating VM.

   Tunneling Packets - IPv4  IPv4 packets received for the migrating
      virtual machine are encapsulated in an IPv4 header at the source
      NVE.  The destination NVE decapsulates the packet and sends the
      IPv4 packet to the migrating VM.

   Stop Tunneling Packets  Tunneling stops when the source NVE no
      longer receives packets destined to the virtual machine that has
      just moved to the destination NVE.

6.  Moving Local State of VM

   After the VM mobility related signaling (VM Mobility Registration
   Request/Reply), the virtual machine state needs to be transferred to
   the destination hypervisor.  The state includes its memory and file
   system.  The source NVE opens a TCP connection with the destination
   NVE over which the VM's memory state is transferred.

   The file system or local storage is more complicated to transfer.
   The transfer should ensure consistency, i.e.
 the VM at the
   destination should find the same file system it had at the source.
   Precopying is a commonly used technique for transferring the file
   system.  First, the whole disk image is transferred while the VM
   continues to run.  After the VM is moved, any changes in the file
   system are packaged together and sent to the destination hypervisor,
   which applies these changes to the file system locally at the
   destination.

7.  Handling of Hot, Warm and Cold Virtual Machine Mobility

   Cold virtual machine mobility is facilitated by the VM initially
   sending an ARP or Neighbor Discovery message at the destination NVE,
   with the source NVE not receiving any packets in flight.  Cold VM
   mobility also allows the ARP/neighbor cache entries for the VM at
   all previous source NVEs and all communicating NVEs to time out,
   after which the NVA pushes the updated ARP/neighbor cache entry to
   the NVEs, or the NVEs pull it from the NVA.

   The VMs that are used for cold standby receive scheduled backup
   information, but less frequently than would be the case for the warm
   standby option.  Therefore, the cold mobility option can be used for
   non-critical applications and services.

   In the warm standby option, the backup VMs receive backup
   information at regular intervals.  The duration of the interval
   determines the warmth of the standby: the larger the interval, the
   less warm (and hence closer to cold) the standby becomes.

   In the hot standby option, the VMs in both the primary and secondary
   domains have identical information and can provide services
   simultaneously, as in a load-sharing mode of operation.  If a VM in
   the primary domain fails, there is no need to actively move it to
   the secondary domain, because the VM in the secondary domain already
   contains identical information.
  The hot standby option is the most
   costly mechanism for providing redundancy, and hence it is used only
   for mission-critical applications and services.  Regarding TCP
   connections in the hot standby option, one approach is to open and
   maintain TCP connections to two different VMs at the same time.  The
   less loaded VM responds first and picks up providing the service,
   while the sender (origin) continues to receive acknowledgements from
   the more heavily loaded (secondary) VM but chooses not to use its
   service.  If the situation (the loading condition of the primary
   responding VM) changes, the secondary responding VM may start
   providing service to the sender (origin).

8.  Virtual Machine Operation

   Virtual machines are not involved in any mobility signaling.  Once a
   VM moves to the destination NVE, its IP address does not change, and
   the VM should be able to continue to receive packets at its
   address(es).  This is the case in hot VM mobility scenarios.

   The virtual machine sends a gratuitous Address Resolution Protocol
   message or an unsolicited Neighbor Advertisement message upstream
   after each move.

8.1.  Virtual Machine Lifecycle Management

   Managing the lifecycle of a VM includes creating the VM with all of
   the required resources and managing them seamlessly as the VM
   migrates from one service to another during its lifetime.  The
   on-boarding process includes the following steps:

   1.  Sending an allowed (authorized/authenticated) request to the
       Network Virtualization Authority (NVA) in an acceptable format,
       with mandatory/optional virtualized resources {cpu, memory,
       storage, process/thread support, etc.} and interface information

   2.  Receiving an acknowledgement from the NVA regarding the
       availability and usability of the virtualized resources and
       interface package

   3.
  Sending a confirmation message to the NVA requesting approval
       to adapt/adjust/modify the virtualized resources and interface
       package for utilization in a service.

9.  Security Considerations

   Security threats for the data and control planes are discussed in
   [RFC8014].  There are several issues in a multi-tenant environment
   that create problems.  In L2-based data center networks, because of
   the lack of security in VXLAN, corruption of the VNI can lead to
   delivery to the wrong tenant.  Also, ARP in IPv4 and ND in IPv6 are
   not secure, especially if gratuitous/unsolicited versions are
   accepted.  When these are carried over a UDP encapsulation such as
   VXLAN, the problem is worse, since it is trivial for a non-trusted
   application to spoof UDP packets.

   In L3-based data center networks, the problem of address spoofing
   may arise.  As a result, the destinations may contain untrusted
   hosts.  This usually happens when virtual machines run third-party
   applications, and it requires the use of stronger security
   mechanisms.

10.  IANA Considerations

   This document makes no request to IANA.

11.  Acknowledgements

   The authors are grateful to Dave R. Worley, Qiang Zu, and Andrew
   Malis for helpful comments.

12.  Change Log

   o  submitted version -00 as a working group draft after adoption

   o  submitted version -01 with these changes: references are updated;
      added packets-in-flight definition to Section 2

   o  submitted version -02 with updated address.

   o  submitted version -03 to fix the nits.

   o  submitted version -04 in reference to the WG Last Call comments.

13.  References

13.1.  Normative References

   [RFC0826]  Plummer, D., "An Ethernet Address Resolution Protocol: Or
              Converting Network Protocol Addresses to 48.bit Ethernet
              Address for Transmission on Ethernet Hardware", STD 37,
              RFC 826, DOI 10.17487/RFC0826, November 1982,
              <https://www.rfc-editor.org/info/rfc826>.

   [RFC0903]  Finlayson, R., Mann, T., Mogul, J., and M.
 Theimer, "A
              Reverse Address Resolution Protocol", STD 38, RFC 903,
              DOI 10.17487/RFC0903, June 1984,
              <https://www.rfc-editor.org/info/rfc903>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC4861]  Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
              "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861,
              DOI 10.17487/RFC4861, September 2007,
              <https://www.rfc-editor.org/info/rfc4861>.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger,
              L., Sridhar, T., Bursell, M., and C. Wright, "Virtual
              eXtensible Local Area Network (VXLAN): A Framework for
              Overlaying Virtualized Layer 2 Networks over Layer 3
              Networks", RFC 7348, DOI 10.17487/RFC7348, August 2014,
              <https://www.rfc-editor.org/info/rfc7348>.

   [RFC7364]  Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L.,
              Kreeger, L., and M. Napierala, "Problem Statement:
              Overlays for Network Virtualization", RFC 7364,
              DOI 10.17487/RFC7364, October 2014,
              <https://www.rfc-editor.org/info/rfc7364>.

   [RFC8014]  Black, D., Hudson, J., Kreeger, L., Lasserre, M., and T.
              Narten, "An Architecture for Data-Center Network
              Virtualization over Layer 3 (NVO3)", RFC 8014,
              DOI 10.17487/RFC8014, December 2016,
              <https://www.rfc-editor.org/info/rfc8014>.

13.2.  Informative references

   [I-D.herbert-nvo3-ila]
              Herbert, T. and P. Lapukhov, "Identifier-locator
              addressing for IPv6", draft-herbert-nvo3-ila-04 (work in
              progress), March 2017.

Authors' Addresses

   Behcet Sarikaya
   Denpel Informatique

   Email: sarikaya@ieee.org

   Linda Dunbar
   Huawei USA
   5340 Legacy Dr. Building 3
   Plano, TX 75024

   Email: linda.dunbar@huawei.com

   Bhumip Khasnabish
   ZTE (TX) Inc.
   55 Madison Avenue, Suite 160
   Morristown, NJ 07960

   Email: vumip1@gmail.com, bhumip.khasnabish@ztetx.com

   Tom Herbert
   Quantonium

   Email: tom@herbertland.com

   Saumya Dikshit
   Cisco Systems
   Cessna Business Park
   Bangalore, Karnataka, India 560 087

   Email: sadikshi@cisco.com