2 Network Working Group B. Sarikaya 3 Internet-Draft L. Dunbar 4 Intended status: Best Current Practice Huawei USA 5 Expires: September 7, 2017 B. Khasnabish 6 ZTE (TX) Inc. 7 March 6, 2017 9 Virtual Machine Mobility Protocol for L2 and L3 Overlay Networks 10 draft-sarikaya-nvo3-vmm-dmm-pmip-11.txt 12 Abstract 14 This document describes a virtual machine mobility protocol commonly 15 used in data centers built with an overlay-based network virtualization 16 approach. For Layer 2, it is based on using a Network Virtualization 17 Authority (NVA)-Network Virtualization Edge (NVE) protocol to update 18 Address Resolution Protocol (ARP) tables or neighbor cache entries at 19 the NVA, with the source NVE tunneling in-flight packets to the 20 destination NVE after the virtual machine moves from the source NVE to 21 the destination NVE. For Layer 3, it is based on address and 22 connection migration after the move.
24 Status of This Memo 26 This Internet-Draft is submitted in full conformance with the 27 provisions of BCP 78 and BCP 79. 29 Internet-Drafts are working documents of the Internet Engineering 30 Task Force (IETF). Note that other groups may also distribute 31 working documents as Internet-Drafts. The list of current Internet- 32 Drafts is at http://datatracker.ietf.org/drafts/current/. 34 Internet-Drafts are draft documents valid for a maximum of six months 35 and may be updated, replaced, or obsoleted by other documents at any 36 time. It is inappropriate to use Internet-Drafts as reference 37 material or to cite them other than as "work in progress." 39 This Internet-Draft will expire on September 7, 2017. 41 Copyright Notice 43 Copyright (c) 2017 IETF Trust and the persons identified as the 44 document authors. All rights reserved. 46 This document is subject to BCP 78 and the IETF Trust's Legal 47 Provisions Relating to IETF Documents 48 (http://trustee.ietf.org/license-info) in effect on the date of 49 publication of this document. Please review these documents 50 carefully, as they describe your rights and restrictions with respect 51 to this document. Code Components extracted from this document must 52 include Simplified BSD License text as described in Section 4.e of 53 the Trust Legal Provisions and are provided without warranty as 54 described in the Simplified BSD License. 56 Table of Contents 58 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 59 2. Conventions and Terminology . . . . . . . . . . . . . . . . . 3 60 3. Requirements . . . . . . . . . . . . . . . . . . . . . . . . 4 61 4. Overview of the protocol . . . . . . . . . . . . . . . . . . 4 62 4.1. VM Migration . . . . . . . . . . . . . . . . . . . . . . 4 63 4.2. Task Migration . . . . . . . . . . . . . . . . . . . . . 6 64 4.2.1. Address and Connection Migration in Task Migration . 7 65 5. Handling Packets in Flight . . . . . . . . . . . . . . . . . 7 66 6. 
Moving Local State of VM . . . . . . . . . . . . 8 67 7. Handling of Hot, Warm and Cold Virtual Machine Mobility . . . 9 68 8. Virtual Machine Operation . . . . . . . . . . . . . . . . . . 9 69 8.1. Virtual Machine Lifecycle Management . . . . . . . . . . 9 70 9. Security Considerations . . . . . . . . . . . . . . . . . . . 10 71 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 10 72 11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 10 73 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 10 74 12.1. Normative References . . . . . . . . . . . . . . . . . . 10 75 12.2. Informative references . . . . . . . . . . . . . . . . . 11 76 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 12 78 1. Introduction 80 Data center networks are increasingly used by telecom operators 81 as well as by enterprises. In this document we are interested in 82 overlay-based data center networks supporting multitenancy. These 83 networks are organized as one large Layer 2 network geographically 84 distributed across several buildings. In some cases the geographical 85 distribution can span Layer 2 boundaries. In that case, connectivity 86 across Layer 2 boundaries is needed; this can be achieved by the 87 network virtualization edge (NVE) functioning as a Layer 3 gateway 88 routing across bridging domains, such as in Warehouse Scale Computers 89 (WSCs). 91 Virtualization, which is used in almost all of today's data 92 centers, enables many virtual machines to run on a single physical 93 computer or compute server. Virtual machines (VMs) need a hypervisor 94 running on the physical compute server to provide them with shared 95 processor/memory/storage. Network connectivity is provided by the 96 network virtualization edge (NVE) [I-D.ietf-nvo3-arch], 98 [I-D.ietf-nvo3-nve-nva-cp-req].
Being able to move VMs dynamically 99 (live migration) from one server to another allows for dynamic load 100 balancing or work distribution and is thus a highly desirable 101 feature [RFC7364]. 103 There are many challenges and requirements related to migration, 104 mobility, and interconnection of Virtual Machines (VMs) and Virtual 105 Network Elements (VNEs). Retaining IP addresses after a move is a 106 key requirement [RFC7364]; it is needed in order to 107 maintain existing transport connections. 109 In L3-based data networks, retaining IP addresses after a move is 110 simply not possible. This introduces complexity in IP address 111 management, and as a result transport connections need to be 112 reestablished. 114 In view of the many virtual machine mobility schemes that exist today, 115 there is a desire to define a standard control plane protocol for 116 virtual machine mobility. The protocol should be based on IPv4 or 117 IPv6. In this document we specify such a protocol for Layer 2 and 118 Layer 3 data networks. 120 2. Conventions and Terminology 122 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 123 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 124 document are to be interpreted as described in RFC 2119 [RFC2119] and 125 [I-D.ietf-nvo3-arch]. 127 This document uses the terminology defined in [RFC7364]. In addition, 128 we define the following terms: 130 Tasks. Tasks are a generalization of virtual machines. Tasks in 131 containers that can be migrated correspond to virtual machines 132 that can be migrated. We use task and virtual machine 133 interchangeably in this document. 135 Hot VM Mobility. A given VM can be moved from one server to 136 another in the running state. 138 Warm VM Mobility. In warm VM mobility, the VM states are 139 mirrored to the secondary server (or domain) at predefined 140 (configurable) regular intervals.
This reduces the overhead and 141 complexity, but it may also lead to a situation where the two servers 142 do not contain exactly the same data (state information). 144 Cold VM Mobility. A given VM can be moved from one server to 145 another in the stopped or suspended state. 147 Source NVE refers to the old NVE, to which packets were forwarded 148 before the migration. 150 Destination NVE refers to the new NVE after the migration. 152 3. Requirements 154 This section states the requirements on data center network virtual 155 machine mobility. 157 The data center network SHOULD support virtual machine mobility in IPv6. 159 IPv4 SHOULD also be supported in virtual machine mobility. 161 The virtual machine mobility protocol MAY support host routes to 162 accomplish virtualization. 164 The virtual machine mobility protocol SHOULD NOT support triangular 165 routing except for handling packets in flight. 167 The virtual machine mobility protocol SHOULD NOT need to use tunneling 168 except for handling packets in flight. 170 4. Overview of the protocol 172 Layer 2 and Layer 3 protocols are described next. In the following 173 sections, we examine more advanced features. 175 4.1. VM Migration 177 Being able to move Virtual Machines dynamically from one server to 178 another allows for dynamic load balancing or work distribution and 179 is thus a highly desirable feature. In a Layer 2 based data 180 center approach, a virtual machine moving to another server does not 181 change its IP address. Because of this, an IP-based virtual machine 182 mobility protocol is not needed. However, when a virtual machine 183 moves, NVEs need to update the cache entries associating the VM's Layer 2 or 184 Medium Access Control (MAC) address with the serving NVE's IP address. 185 Such an update enables NVEs to send outgoing MAC frames addressed to the 186 virtual machine. VM movement across Layer 3 boundaries is not 187 typical, but the same solution applies if the VM moves within the same 188 link, such as in WSCs.
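The cache update described above can be illustrated with a minimal sketch. All names here are hypothetical illustrations, not part of this specification; an actual NVE-to-NVA protocol [I-D.ietf-nvo3-arch] is outside the scope of this sketch:

```python
# Minimal sketch (hypothetical names): an NVA directory that maps each
# VM's MAC address to the IP address of the NVE currently serving it.
# NVEs consult this mapping to choose the outer destination when
# forwarding MAC frames addressed to the VM.

class NvaDirectory:
    def __init__(self):
        self._mac_to_nve = {}  # VM MAC address -> serving NVE IP address

    def register(self, vm_mac, nve_ip):
        """Record (or update after a move) which NVE serves a given VM."""
        self._mac_to_nve[vm_mac] = nve_ip

    def lookup(self, vm_mac):
        """Return the NVE IP to tunnel frames for this VM to, or None."""
        return self._mac_to_nve.get(vm_mac)

nva = NvaDirectory()
nva.register("00:11:22:33:44:55", "2001:db8::a")  # VM behind source NVE
nva.register("00:11:22:33:44:55", "2001:db8::b")  # VM moved: destination NVE
assert nva.lookup("00:11:22:33:44:55") == "2001:db8::b"
```

After the move, only the directory entry changes; the VM's own address is untouched, which is the property the Layer 2 scheme relies on.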
190 The virtual machine moves from its source NVE to a new, destination NVE. 191 The move is initiated by the source NVE. When the move is within the same 192 L2 link, the virtual machine's IP address(es) do not change, but the 193 virtual machine is now under a new NVE; previously communicating NVEs will 194 continue to send their packets to the source NVE. The Address Resolution 195 Protocol (ARP) cache in IPv4 [RFC0826] or the neighbor cache in IPv6 196 [RFC4861] in those NVEs therefore needs to be updated. 198 It takes a few seconds for a VM to move from its source NVE to the 199 new destination one. During this period, a tunnel is needed so that the 200 source NVE forwards packets to the destination NVE. 202 In IPv4, immediately after the move the virtual machine sends a 203 gratuitous ARP request message containing its IPv4 address and its Layer 2 (MAC) 204 address at its new (destination) NVE. This message's destination 205 address is the broadcast address. The destination NVE receives this message and 206 should update the VM's ARP entry in the central directory at the NVA: 207 the NVE asks the NVA to update its mappings to record the IPv4 address of the VM 208 along with the MAC address of the VM and the NVE's IPv4 address. An NVE-to-NVA 209 protocol is used for this purpose [I-D.ietf-nvo3-arch]. 211 Reverse ARP (RARP), which enables a host to discover its IPv4 212 address when it boots from a local server [RFC0903], is not used by 213 VMs because a VM already knows its IPv4 address: an IPv4/IPv6 address 214 is assigned to a newly created VM, possibly using the Dynamic Host 215 Configuration Protocol (DHCP). There are some vendor deployments 216 (diskless systems or systems without configuration files) wherein VM 217 users, i.e. end-user clients, ask for the same MAC address upon 218 migration. This can be achieved by the client sending a RARP request 219 message, which carries the old MAC address, looking for an IP 220 address allocation.
The server, in this case the new (destination) NVE, needs to 221 communicate with the NVA, just as in the gratuitous ARP case, to ensure 222 that the same IPv4 address is assigned to the VM. The NVA uses the MAC 223 address as the key to search its ARP cache for the IP address 224 and informs the new NVE, which in turn sends a RARP reply 225 message. This completes IP address assignment to the 226 migrating VM. 228 All NVEs communicating with this virtual machine use the old ARP 229 entry. If any VM behind those NVEs needs to talk to the VM now at the 230 destination NVE, it uses the old ARP entry; thus the packets are 231 delivered to the source NVE. The source NVE MUST tunnel these in- 232 flight packets to the destination NVE. 234 When an ARP entry in those VMs times out, their corresponding NVEs 235 should access the NVA for an update. 237 IPv6 operation is slightly different: 239 In IPv6, immediately after the move the virtual machine sends an 240 unsolicited Neighbor Advertisement message containing its IPv6 241 address and its Layer 2 (MAC) address at its new (destination) NVE. 242 This message is sent to the link-local all-nodes multicast address, as 243 specified for unsolicited Neighbor Advertisements [RFC4861]. The destination NVE 244 receives this message and should update the VM's neighbor cache entry 245 in the central directory at the NVA: the IPv6 address of the VM, the MAC address 246 of the VM, and the NVE's IPv6 address are recorded in the entry. An NVE-to-NVA 247 protocol is used for this purpose [I-D.ietf-nvo3-arch]. 249 All NVEs communicating with this virtual machine use the old 250 neighbor cache entry. If any VM behind those NVEs needs to talk to the 251 VM now at the destination NVE, it uses the old neighbor cache entry; 252 thus the packets are delivered to the source NVE. The source NVE 253 MUST tunnel these in-flight packets to the destination NVE. 255 When a neighbor cache entry in those VMs times out, their 256 corresponding NVEs should access the NVA for an update. 258 4.2.
Task Migration 260 Virtualization in L2-based data center networks quickly becomes 261 prohibitive because ARP/neighbor caches do not scale. Scaling can be 262 accomplished seamlessly in L3 data center networks by just giving 263 each virtual network an IP subnet and a default route that points to the 264 NVE. This means no explosion of ARP/neighbor caches in guests (just 265 one ARP/neighbor cache entry for the default route), and there is no need 266 for an Ethernet header in the encapsulation [RFC7348], which saves at 267 least 16 bytes. 269 In L3-based data center networks, since the IP address of the task has to 270 change after a move, an IP-based task migration protocol is needed. 271 A commonly used protocol is Identifier-Locator Addressing (ILA) 272 [I-D.herbert-nvo3-ila]. Address and connection migration introduce 273 complications into the task migration protocol, as we discuss below. 274 In particular, informing the communicating hosts of the migration becomes 275 a major issue. Also, in L3-based networks, because broadcast is 276 not available, the multicast of Neighbor Solicitations in IPv6 would need 277 to be emulated. 279 Task migration involves the following steps: 281 Stop running the task. 283 Package the runtime state of the task. 285 Send the runtime state of the task to the destination NVE where the 286 task is to run. 288 Instantiate the task's state on the new machine. 290 Restart the task, continuing from the point at which it 291 was stopped. 293 Address migration and connection migration in moving tasks are 294 addressed next. 296 4.2.1. Address and Connection Migration in Task Migration 298 Address migration is achieved as follows: 300 Configure the IPv4/IPv6 address on the target host. 302 Suspend use of the address on the old host. This includes handling 303 established connections. A state may be established to drop packets 304 or to send ICMPv4 or ICMPv6 Destination Unreachable messages when packets 305 to the migrated address are received.
307 Push the new mapping to hosts. Communicating hosts will learn of the 308 new mapping via a control plane, either by participating in a protocol 309 for mapping propagation or by getting the new mapping from a central 310 database such as the Domain Name System (DNS). 312 Connection migration involves reestablishing the task's existing TCP 313 connections in the new place. 315 The simplest course of action is to drop TCP connections across a 316 migration. Since migrations should be relatively rare events, it is 317 conceivable that TCP connections could be automatically closed in the 318 network stack during a migration event. If the applications running 319 are known to handle this gracefully (i.e. to reopen dropped connections), 320 then this may be viable. 322 A more involved approach to connection migration entails pausing the 323 connection, packaging the connection state and sending it to the target, 324 instantiating the connection state in the target's stack, and restarting the 325 connection. From the time the connection is paused to the time it is 326 running again in the new stack, packets received for the connection 327 should be silently dropped. For some period of time, the old stack 328 will need to keep a record of the migrated connection. If it 329 receives a packet, it should either silently drop the packet or 330 forward it to the new location, as in Section 5. 332 5. Handling Packets in Flight 334 The source hypervisor may receive packets from the virtual machine's 335 ongoing communications; these packets should not be lost, and they 336 should be sent to the destination hypervisor to be delivered to the 337 virtual machine. The steps involved in handling packets in flight 338 are as follows: 340 Preparation Step It takes some time, possibly a few seconds, for a VM 341 to move from its source hypervisor to a new destination one. 342 During this period, a tunnel needs to be established so that the 343 source NVE forwards packets to the destination NVE.
345 Tunnel Establishment - IPv6 In-flight packets are tunneled to the 346 destination NVE using an encapsulation protocol such as VXLAN over 347 IPv6. The source NVE gets the destination NVE's address from the NVA in the 348 request to move the virtual machine. 350 Tunnel Establishment - IPv4 In-flight packets are tunneled to the 351 destination NVE using an encapsulation protocol such as VXLAN over 352 IPv4. The source NVE gets the destination NVE's address from the NVA when the NVA 353 requests the NVE to move the virtual machine. 355 Tunneling Packets - IPv6 IPv6 packets received for the migrating 356 virtual machine are encapsulated in an IPv6 header at the source NVE. 357 The destination NVE decapsulates each packet and sends the IPv6 packet to 358 the migrating VM. 360 Tunneling Packets - IPv4 IPv4 packets received for the migrating 361 virtual machine are encapsulated in an IPv4 header at the source NVE. 362 The destination NVE decapsulates each packet and sends the IPv4 packet to 363 the migrating VM. 365 Stop Tunneling Packets Tunneling stops when the source NVE no longer 366 receives packets destined to the virtual machine that has moved to the 367 destination NVE. 369 6. Moving Local State of VM 371 After the VM mobility related signaling (VM Mobility Registration 372 Request/Reply), the virtual machine state needs to be transferred to 373 the destination hypervisor. The state includes its memory and file 374 system. The source NVE opens a TCP connection with the destination NVE, over 375 which the VM's memory state is transferred. 377 The file system or local storage is more complicated to transfer. The 378 transfer should ensure consistency, i.e. the VM at the destination 379 should find the same file system it had at the source. Precopying is 380 a commonly used technique for transferring the file system. First 381 the whole disk image is transferred while the VM continues to run.
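The precopy technique just described can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the actual transfer mechanism; real hypervisors track dirty blocks in the storage layer:

```python
# Minimal precopy sketch (hypothetical names): ship the full disk image
# while the VM keeps running at the source, then ship only the blocks
# dirtied in the meantime once the VM has been moved.

def precopy_disk(blocks, send, dirtied):
    """blocks: dict block_id -> bytes; send(block_id, data) ships one
    block to the destination; dirtied() returns the ids of blocks
    changed since the full pass started."""
    # Phase 1: transfer the whole disk image while the VM runs.
    for block_id, data in blocks.items():
        send(block_id, data)
    # Phase 2: after the move, package and send only the deltas.
    for block_id in dirtied():
        send(block_id, blocks[block_id])

sent = []
disk = {0: b"boot", 1: b"data"}
precopy_disk(disk, lambda i, d: sent.append(i), lambda: [1])
# Block 1 travels twice: once in the full pass, once as a delta.
assert sent == [0, 1, 1]
```

The design choice is the usual precopy trade-off: the VM keeps running during the bulk of the transfer, at the cost of re-sending whatever it dirties.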
After 382 the VM is moved, any changes in the file system are packaged together 383 and sent to the destination hypervisor, which applies these changes 384 to the file system locally at the destination. 386 7. Handling of Hot, Warm and Cold Virtual Machine Mobility 388 Cold virtual machine mobility is facilitated by the VM initially 389 sending an ARP or Neighbor Discovery message at the destination NVE, 390 with the source NVE not receiving any packets in flight. Cold VM 391 mobility also allows all previous source NVEs and all communicating 392 NVEs to time out their ARP/neighbor cache entries for the VM; the 393 NVA then pushes the updated ARP/neighbor 394 cache entry to the NVEs, or the NVEs pull it from the NVA. 396 The VMs that are used for cold standby receive scheduled backup 397 information, but less frequently than would be the case for the warm standby 398 option. Therefore, the cold mobility option can be used for non- 399 critical applications and services. 401 In the warm standby option, the backup VMs receive backup 402 information at regular intervals. The duration of the interval 403 determines the warmth of the standby option. The longer the 404 interval, the less warm (and hence the colder) the standby option becomes. 406 In the hot standby option, the VMs in both the primary and secondary 407 domains have identical information and can provide services 408 simultaneously, as in the load-share mode of operation. If the VMs in the 409 primary domain fail, there is no need to actively move the VMs to 410 the secondary domain because the VMs in the secondary domain already 411 contain identical information. The hot standby option is the most 412 costly mechanism for providing redundancy, and hence this option is 413 utilized only for mission-critical applications and services. 415 8. Virtual Machine Operation 417 Virtual machines are not involved in any mobility signalling.
Once 418 the VM moves to the destination NVE, the VM's IP address does not change, and the VM 419 should be able to continue to receive packets to its address(es). 420 This is the case in hot VM mobility scenarios. 422 The virtual machine sends a gratuitous Address Resolution Protocol message or an 423 unsolicited Neighbor Advertisement message upstream after each move. 425 8.1. Virtual Machine Lifecycle Management 427 Managing the lifecycle of a VM includes creating the VM with all of the 428 required resources, and managing it seamlessly as the VM migrates 429 from one server to another during its lifetime. The on-boarding 430 process includes the following steps: 432 1. Sending an allowed (authorized/authenticated) request to the Network 433 Virtualization Authority (NVA) in an acceptable format with 434 mandatory/optional virtualized resources {cpu, memory, storage, 435 process/thread support, etc.} and interface information 437 2. Receiving an acknowledgement from the NVA regarding the availability 438 and usability of the virtualized resources and interface package 440 3. Sending a confirmation message to the NVA with a request for 441 approval to adapt/adjust/modify the virtualized resources and 442 interface package for utilization in a service. 444 9. Security Considerations 446 Security threats for the data and control planes are discussed in 447 [I-D.ietf-nvo3-arch]. There are several issues in a multi-tenant 448 environment that create problems. In L2-based data center networks, 449 the lack of security in VXLAN means that corruption of the VNI can lead to delivery to the 450 wrong tenant. Also, ARP in IPv4 and ND in IPv6 are not secure, 451 especially if gratuitous versions are accepted. When these are carried 452 over a UDP encapsulation such as VXLAN, the problem is worse, since it 453 is trivial for a non-trusted application to spoof UDP packets. 455 In L3-based data center networks, the problem of address spoofing may 456 arise. As a result, the destinations may contain untrusted hosts.
457 This usually happens in cases like virtual machines running third- 458 party applications. This requires the use of stronger security 459 mechanisms. 461 10. IANA Considerations 463 This document makes no request to IANA. 465 11. Acknowledgements 467 The authors are grateful to Qiang Zu, Andrew Malis and Saumya Dikshit 468 for helpful comments and to Tom Herbert for extensive reviews. 470 12. References 472 12.1. Normative References 474 [RFC0826] Plummer, D., "Ethernet Address Resolution Protocol: Or 475 Converting Network Protocol Addresses to 48.bit Ethernet 476 Address for Transmission on Ethernet Hardware", STD 37, 477 RFC 826, DOI 10.17487/RFC0826, November 1982, 478 <https://www.rfc-editor.org/info/rfc826>. 480 [RFC0903] Finlayson, R., Mann, T., Mogul, J., and M. Theimer, "A 481 Reverse Address Resolution Protocol", STD 38, RFC 903, 482 DOI 10.17487/RFC0903, June 1984, 483 <https://www.rfc-editor.org/info/rfc903>. 485 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 486 Requirement Levels", BCP 14, RFC 2119, 487 DOI 10.17487/RFC2119, March 1997, 488 <https://www.rfc-editor.org/info/rfc2119>. 494 [RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman, 495 "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861, 496 DOI 10.17487/RFC4861, September 2007, 497 <https://www.rfc-editor.org/info/rfc4861>. 499 [I-D.ietf-nvo3-arch] 500 Black, D., Hudson, J., Kreeger, L., Lasserre, M., and T. 501 Narten, "An Architecture for Data Center Network 502 Virtualization Overlays (NVO3)", draft-ietf-nvo3-arch-08 503 (work in progress), September 2016. 505 [RFC7348] Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger, 506 L., Sridhar, T., Bursell, M., and C. Wright, "Virtual 507 eXtensible Local Area Network (VXLAN): A Framework for 508 Overlaying Virtualized Layer 2 Networks over Layer 3 509 Networks", RFC 7348, DOI 10.17487/RFC7348, August 2014, 510 <https://www.rfc-editor.org/info/rfc7348>. 512 [RFC7364] Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L., 513 Kreeger, L., and M.
Napierala, "Problem Statement: 514 Overlays for Network Virtualization", RFC 7364, 515 DOI 10.17487/RFC7364, October 2014, 516 <https://www.rfc-editor.org/info/rfc7364>. 518 12.2. Informative references 520 [I-D.ietf-nvo3-nve-nva-cp-req] 521 Kreeger, L., Dutt, D., Narten, T., and D. Black, "Network 522 Virtualization NVE to NVA Control Protocol Requirements", 523 draft-ietf-nvo3-nve-nva-cp-req-05 (work in progress), 524 March 2016. 526 [I-D.herbert-nvo3-ila] 527 Herbert, T., "Identifier-locator addressing for IPv6", 528 draft-herbert-nvo3-ila-03 (work in progress), October 529 2016. 531 Authors' Addresses 533 Behcet Sarikaya 534 Huawei USA 535 5340 Legacy Dr. Building 3 536 Plano, TX 75024 538 Email: sarikaya@ieee.org 540 Linda Dunbar 541 Huawei USA 542 5340 Legacy Dr. Building 3 543 Plano, TX 75024 545 Email: linda.dunbar@huawei.com 547 Bhumip Khasnabish 548 ZTE (TX) Inc. 549 55 Madison Avenue, Suite 160 550 Morristown, NJ 07960 552 Email: vumip1@gmail.com, bhumip.khasnabish@ztetx.com