Network Working Group                                          L. Dunbar
Internet Draft                                                 Futurewei
Intended status: Informational                               B. Sarikaya
Expires: December 17, 2020                           Denpel Informatique
                                                            B. Khasnabish
                                                              Independent
                                                               T. Herbert
                                                                    Intel
                                                               S. Dikshit
                                                                Aruba-HPE
                                                            June 17, 2020

        Virtual Machine Mobility Solutions for L2 and L3 Overlay
                                 Networks
                          draft-ietf-nvo3-vmm-16

Abstract

   This document describes virtual machine (VM) mobility solutions
   commonly used in data centers built with an overlay network. It
   describes the solutions and the impact of moving VMs, or
   applications, from one rack to another when the racks are connected
   by the overlay network.

   For Layer 2, the solutions are based on using an NVA (Network
   Virtualization Authority) to NVE (Network Virtualization Edge)
   protocol to update ARP (Address Resolution Protocol) tables or
   neighbor cache entries after a VM moves from an old NVE to a new
   NVE. For Layer 3, they are based on address and connection migration
   after the move.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79. This document may not be modified,
   and derivative works of it may not be created, except to publish it
   as an RFC and to translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on December 17, 2020.
Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Requirements
   4. Overview of the VM Mobility Solutions
      4.1. Inter-VN and External Communication
      4.2. VM Migration in a Layer 2 Network
      4.3. VM Migration in a Layer-3 Network
      4.4. Address and Connection Management in VM Migration
   5. Handling Packets in Flight
   6. Moving Local State of VM
   7. Handling of Hot, Warm and Cold VM Mobility
   8. Other Options
   9. VM Lifecycle Management
   10. Security Considerations
   11. IANA Considerations
   12. Acknowledgments
   13. References
      13.1. Normative References
      13.2. Informative References

1. Introduction

   This document describes overlay-based data center network solutions
   that support multi-tenancy and VM mobility. Being able to move VMs
   dynamically from one server to another makes dynamic load balancing
   and work distribution possible. Therefore, dynamic VM mobility is
   highly desirable for large-scale multi-tenant DCs.

   This document is strictly within the scope of the DCVPN, as defined
   by the NVO3 framework [RFC7365]. The intent is to describe the
   Layer 2 and Layer 3 network behavior when VMs are moved from one NVE
   to another. This document assumes that a VM's move is initiated by
   the VM management system, i.e., it is a planned move. How and when
   to move VMs is out of the scope of this document. [RFC7666]
   describes the MIB for VMs controlled by a hypervisor. The impact of
   VM mobility on higher-layer protocols and applications is also
   outside the scope of this document.

   Many large DCs (Data Centers), especially cloud DCs, host tasks (or
   workloads) for multiple tenants. A tenant can be an organization or
   a department of an organization. There are communications among
   tasks belonging to one tenant, and communications among tasks
   belonging to different tenants or with external entities.

   Server virtualization, which is used in almost all of today's data
   centers, enables many VMs to run on a single physical computer or
   server, sharing the processor, memory, and storage.
   Network connectivity among the VMs is provided by the Network
   Virtualization Edge (NVE) [RFC8014]. It is highly desirable
   [RFC7364] to allow VMs to be moved dynamically (live, hot, or cold
   move) from one server to another for dynamic load balancing or
   optimized work distribution.

   There are many challenges and requirements related to VM mobility
   in large data centers, including dynamic attachment and detachment
   of VMs to/from Network Virtualization Edges (NVEs). In addition,
   retaining IP addresses after a move is a key requirement [RFC7364].
   Such a requirement is needed in order to maintain existing
   transport layer connections.

   In traditional Layer-3 based networks, retaining IP addresses after
   a move is generally not recommended, because frequent moves cause
   fragmented IP addresses, which introduces complexity in IP address
   management.

   In view of the many VM mobility schemes that exist today, there is
   a desire to document comprehensive VM mobility solutions that cover
   both IPv4 and IPv6. Large data center networks can be organized
   either as one large Layer-2 network geographically distributed
   across several buildings or cities, or as Layer-3 networks with a
   large number of host routes that cannot be aggregated because hosts
   move frequently from one location to another without changing their
   IP addresses. Connectivity across Layer 2 boundaries can be
   achieved by an NVE functioning as a Layer 3 gateway router across
   bridging domains.

2. Conventions used in this document

   This document uses the terminology defined in [RFC7364]. In
   addition, we make the following definitions:

   VM:  Virtual Machine.

   Task:  A task is a program instantiated or running on a VM or a
        container. Tasks running in VMs or containers can be migrated
        from one server to another. We use task, workload, and VM
        interchangeably in this document.

   Hot VM Mobility:  A given VM is moved from one server to another in
        a running state, without terminating the VM.

   Warm VM Mobility:  In the case of warm VM mobility, the VM states
        are mirrored to the secondary server (or domain) at predefined
        regular intervals. This reduces the overhead and complexity,
        but it may also lead to a situation in which the two servers
        do not contain exactly the same data (state information).

   Cold VM Mobility:  A given VM is moved from one server to another
        in a stopped or suspended state.

   Old NVE:  The NVE to which the VM was attached before the
        migration, and to which correspondents forwarded packets
        before the migration.

   New NVE:  The NVE to which the VM is attached after the migration.

   Packets in flight:  Packets sent by correspondents that still hold
        a stale ARP or neighbor cache entry from before the VM or task
        migration, and that are therefore received by the old NVE.

   End user clients:  Users of VMs in diskless systems, or in systems
        not using configuration files, are called end user clients.

   Cloud DC:  Third-party data centers that host applications, tasks,
        or workloads owned by different organizations or tenants.

3. Requirements

   This section states the requirements on data center network VM
   mobility.

   - The data center network should support both IPv4 and IPv6 VM
     mobility.

   - VM mobility should not require changing a VM's IP address(es)
     after the move.
   - "Hot Migration" requires transport service continuity across the
     move, while in "Cold Migration" the transport service is
     restarted, i.e., the task is stopped on the old NVE, moved to the
     new NVE, and then restarted. Not all DCs support "Hot Migration".
     DCs that only support Cold Migration should make their customers
     aware of the potential service interruption during a Cold
     Migration.

   - VM mobility solutions/procedures should minimize triangular
     routing except for handling packets in flight.

   - VM mobility solutions/procedures should not need to use tunneling
     except for handling packets in flight.

4. Overview of the VM Mobility Solutions

4.1. Inter-VN and External Communication

   Inter-VN (Virtual Network) communication refers to communication
   among tenants (or hosts) belonging to different VNs. Those tenants
   can be attached to NVEs co-located in the same data center or in
   different data centers. When a VM communicates with an external
   entity, the VM is effectively communicating with a peer in a
   different network or a globally reachable host.

   This document assumes that inter-VN communication and communication
   with external entities go through the NVO3 Gateway functionality
   described in Section 5.3 of [RFC8014]. NVO3 Gateways relay traffic
   onto and off of a virtual network, enabling communication both
   across different VNs and with external entities.

   The NVO3 Gateway functionality enforces appropriate policies to
   control communication among VNs and with external entities (e.g.,
   hosts).

   Moving a VM to a new NVE may move the VM away from the NVO3
   Gateway(s) used by the VM's traffic; e.g., some traffic may be
   better handled by an NVO3 Gateway that is closer to the new NVE
   than the NVO3 Gateway that was used before the VM move. If NVO3
   Gateway changes are not possible for some reason, then the VM's
   traffic can continue to use the prior NVO3 Gateway(s), which may
   have some drawbacks, e.g., longer network paths.

4.2. VM Migration in a Layer 2 Network

   In a Layer-2 based approach, a VM moving to another NVE does not
   change its IP address. But because this VM is now under a new NVE,
   previously communicating NVEs may continue sending their packets to
   the old NVE. Therefore, the previously communicating NVEs need to
   promptly update their Address Resolution Protocol (ARP) caches for
   IPv4 [RFC826] or neighbor caches for IPv6 [RFC4861]. If the VM
   being moved communicates with external entities, the NVO3 Gateway
   needs to be notified of the new NVE to which the VM has moved.

   In IPv4, immediately after the move the VM should send a gratuitous
   ARP request message containing its IPv4 address and Layer 2 MAC
   address under its new NVE. Upon receiving this message, the new NVE
   can update its ARP cache. The new NVE should send a notification of
   the newly attached VM to the central directory [RFC7067] embedded
   in the NVA to update the mapping of the IPv4 address and MAC
   address of the moving VM along with the new NVE address. An
   NVE-to-NVA protocol is used for this purpose [RFC8014]. Upon a VM
   being moved away, the old NVE should send an ARP scan to all its
   attached VMs to refresh its ARP cache.
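   The sequence above can be illustrated with a minimal sketch. It
   assumes hypothetical names (Nva, Nve, MappingEntry, update_mapping,
   on_gratuitous_arp) and a toy in-memory directory; this document
   does not define a concrete NVE-to-NVA protocol or API, so the
   sketch only shows the order of operations: update the local ARP
   cache, then update the NVA's central directory [RFC7067].

   # Minimal sketch of the IPv4 sequence above: the new NVE updates
   # its ARP cache from the VM's gratuitous ARP and then updates the
   # NVA's central directory.  All names are illustrative.

   from dataclasses import dataclass

   @dataclass
   class MappingEntry:
       vm_ip: str     # IPv4 address retained by the VM across the move
       vm_mac: str    # Layer 2 MAC address of the VM
       nve_addr: str  # NVE the VM is currently attached to

   class Nva:
       """Illustrative NVA holding the central directory [RFC7067]."""
       def __init__(self):
           self.directory = {}                  # vm_ip -> MappingEntry

       def update_mapping(self, entry):
           # Overwrite any stale entry still pointing at the old NVE.
           self.directory[entry.vm_ip] = entry

   class Nve:
       """Illustrative new NVE receiving the gratuitous ARP."""
       def __init__(self, my_addr, nva):
           self.my_addr = my_addr
           self.nva = nva
           self.arp_cache = {}                  # vm_ip -> vm_mac

       def on_gratuitous_arp(self, vm_ip, vm_mac):
           # 1. Update the local ARP cache for the newly attached VM.
           self.arp_cache[vm_ip] = vm_mac
           # 2. Notify the NVA so other NVEs can learn the new location.
           self.nva.update_mapping(
               MappingEntry(vm_ip, vm_mac, self.my_addr))

   if __name__ == "__main__":
       nva = Nva()
       new_nve = Nve("192.0.2.2", nva)
       new_nve.on_gratuitous_arp("198.51.100.7", "00:00:5e:00:53:01")
       print(nva.directory["198.51.100.7"])

   In a real deployment the update_mapping call would be carried by an
   NVE-to-NVA protocol [RFC8014], and the NVA would in turn notify, or
   answer queries from, the other NVEs.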
   Reverse ARP (RARP), which enables a host to discover its IPv4
   address when it boots from a local server [RFC903], is not used by
   VMs if the VM already knows its IPv4 address (the most common
   scenario). Next, we describe a case where RARP is used.

   There are some vendor deployments (diskless systems or systems
   without configuration files) wherein the VM's user, i.e., the end
   user client, asks for the same MAC address upon migration. This can
   be achieved by the client sending a RARP request message that
   carries the MAC address and asks for an IP address allocation. The
   server, in this case the new NVE, needs to communicate with the
   NVA, just as in the gratuitous ARP case, to ensure that the same
   IPv4 address is assigned to the VM. The NVA uses the MAC address as
   the key to search its ARP cache for the corresponding IP address
   and informs the new NVE, which in turn sends the RARP reply
   message. This completes the IP address assignment to the migrating
   VM.

   Other NVEs whose attached VMs communicate with this VM, and the
   NVO3 Gateway through which external entities communicate with this
   VM, may still have the old ARP entry. To avoid old ARP entries
   being used, the old NVE, upon discovering that a VM has detached,
   should send a notification to all other NVEs and its NVO3 Gateway
   to time out the ARP cache entry for the VM [RFC8171]. When an NVE
   (including the old NVE) receives a packet or an ARP request
   destined for a VM (its MAC or IP address) that is not in the NVE's
   ARP cache, the NVE should send a query to the NVA's directory
   service to get the address of the NVE currently associated with the
   VM. This is also how the old NVE learns where to tunnel in-flight
   packets so that they reach the new NVE without packet loss.

   When the VM's address is IPv6, the operation is similar:

   In IPv6, after the move, the VM immediately sends an unsolicited
   Neighbor Advertisement message containing its IPv6 address and
   Layer-2 MAC address under its new NVE. This message is sent to the
   IPv6 Solicited-Node Multicast Address corresponding to the target
   address, which is the VM's IPv6 address. The NVE receiving this
   message should send a request to update the VM's neighbor cache
   entry in the central directory of the NVA. The NVA's neighbor cache
   entry should include the IPv6 address of the VM, the MAC address of
   the VM, and the NVE IPv6 address. An NVE-to-NVA protocol is used
   for this purpose [RFC8014].

   To avoid other NVEs communicating with this VM using the old
   neighbor cache entry, the old NVE (upon discovering that a VM has
   been moved) or the VM management system that initiates the move
   should send a notification to all NVEs to time out the ND cache
   entry for the VM being moved. When the ND cache entry for such a VM
   times out, the corresponding NVE should send a query to the NVA for
   an update.
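   The following sketch illustrates the stale-entry handling described
   above from the point of view of a peer NVE: on a timeout
   notification it discards its cached entry, and on the next cache
   miss it queries the NVA's directory instead of flooding ARP/ND. The
   class and method names (NvaDirectory, PeerNve, resolve) are
   illustrative assumptions; the actual query and notification
   messages are left to the NVE-to-NVA protocol.

   # Minimal sketch of stale ARP/ND entry handling at a peer NVE.
   # Names and data structures are illustrative only.

   class NvaDirectory:
       """Illustrative NVA directory: VM address -> current NVE."""
       def __init__(self, mappings):
           self.mappings = dict(mappings)

       def lookup(self, vm_addr):
           return self.mappings.get(vm_addr)   # None if unknown

   class PeerNve:
       """An NVE whose attached VMs communicate with the moved VM."""
       def __init__(self, nva):
           self.nva = nva
           self.neighbor_cache = {}             # vm_addr -> nve_addr

       def on_timeout_notification(self, vm_addr):
           # The old NVE (or the VM management system) tells us to
           # drop the stale entry for the moved VM.
           self.neighbor_cache.pop(vm_addr, None)

       def resolve(self, vm_addr):
           # On a cache miss, query the NVA instead of flooding ND/ARP.
           if vm_addr not in self.neighbor_cache:
               nve_addr = self.nva.lookup(vm_addr)
               if nve_addr is None:
                   return None                  # unknown VM: drop traffic
               self.neighbor_cache[vm_addr] = nve_addr
           return self.neighbor_cache[vm_addr]

   if __name__ == "__main__":
       nva = NvaDirectory({"2001:db8::7": "2001:db8:0:1::2"})  # new NVE
       peer = PeerNve(nva)
       peer.neighbor_cache["2001:db8::7"] = "2001:db8:0:1::1"  # old NVE
       peer.on_timeout_notification("2001:db8::7")
       print(peer.resolve("2001:db8::7"))       # now the new NVE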
4.3. VM Migration in a Layer-3 Network

   Traditional Layer-3 based data center networks usually have all
   hosts (tasks) within one subnet attached to one NVE. By this
   design, the NVE becomes the default route for all hosts (tasks)
   within the subnet. But this design requires the IP address of a
   host (task) to change after a move, to comply with the prefixes of
   the IP addresses under the new NVE.

   A VM migration solution for a Layer 3 network allows IP addresses
   to stay the same after a move to a different location. The
   Identifier-Locator Addressing (ILA) [Herbert-ILA] is one such
   solution.

   Because broadcast is not available in Layer-3 based networks,
   multicast of Neighbor Solicitations in IPv6 and ARP in IPv4 would
   need to be emulated. Scalability of such multicast (IPv6 ND and
   IPv4 ARP) can become problematic because the hosts belonging to one
   subnet (or one VLAN) can span many NVEs. Sending broadcast traffic
   to all NVEs can cause unnecessary traffic in the DCN if the hosts
   belonging to one subnet are attached to only a very small number of
   NVEs. It is preferable to have a directory [RFC7067] or an NVA that
   updates each NVE with the other NVEs to which a specific subnet is
   attached, and that receives periodic reports from each NVE of all
   the subnets being attached or detached, as described in [RFC8171].

   Hot VM migration in Layer 3 involves coordination among many
   entities, such as the VM management system and the NVA. Cold task
   migration, which is a common practice in many data centers,
   involves the following steps:

   - Stop running the task.
   - Package the runtime state of the task.
   - Send the runtime state of the task to the new NVE where the task
     is to run.
   - Instantiate the task's state on the new machine.
   - Start the task, continuing from the point at which it was
     stopped.

   [RFC7666] gives a more detailed description of the state machine of
   VMs controlled by a hypervisor.

4.4. Address and Connection Management in VM Migration

   Since the VM attached to the new NVE needs to be assigned the same
   address it had while attached to the old NVE, extra processing or
   configuration is needed, such as:

   - Configure the IPv4/IPv6 address on the target VM/NVE.
   - Suspend use of the address on the old NVE. This includes the old
     NVE sending a query to the NVA upon receiving packets destined
     for the VM that has moved away. If there is no response from the
     NVA identifying the new NVE for the VM, the old NVE can only drop
     the packets. Refer to the VM state machine described in
     [RFC7666].
   - Trigger the NVA to push the new NVE-VM mapping to other NVEs
     whose attached VMs communicate with the VM being moved.

   Connection management for the applications running on the VM being
   moved involves reestablishing existing TCP connections in the new
   place.

   The simplest course of action is to drop all TCP connections to the
   applications running on the VM during a migration. If migrations
   are relatively rare events in a data center, the impact is
   relatively small when TCP connections are automatically closed in
   the network stack during a migration event. If the applications
   running are known to handle this gracefully (i.e., to reopen
   dropped connections), then this approach may be viable.

   A more involved approach to connection migration entails a proxy
   for the application (or the application itself) pausing the
   connection, packaging the connection state and sending it to the
   target, instantiating the connection state in the peer stack, and
   restarting the connection. From the time the connection is paused
   to the time it is running again in the new stack, packets received
   for the connection could be silently dropped. For some period of
   time, the old stack will need to keep a record of the migrated
   connection. If it receives a packet, it can either silently drop
   the packet or forward it to the new location, as described in
   Section 5.
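   As a rough illustration of the more involved approach, the sketch
   below pauses a connection, packages a small subset of its state,
   instantiates it at the target, and keeps a record at the old stack
   so that late packets can be forwarded (or dropped). The state
   fields and the forward callback are assumptions made for brevity; a
   real TCP stack has considerably more state to carry.

   # Minimal sketch of connection migration: pause, package state,
   # instantiate at the target, and keep a record at the old stack.

   from dataclasses import dataclass

   @dataclass
   class TcpConnState:
       four_tuple: tuple        # (src ip, src port, dst ip, dst port)
       snd_nxt: int = 0         # illustrative subset of TCP state
       rcv_nxt: int = 0
       send_buffer: bytes = b""

   class OldStack:
       def __init__(self):
           self.connections = {}    # four_tuple -> TcpConnState
           self.migrated_to = {}    # four_tuple -> new location

       def pause_and_package(self, four_tuple, new_location):
           state = self.connections.pop(four_tuple)
           # Remember where the connection went so late packets can be
           # forwarded or silently dropped (see Section 5).
           self.migrated_to[four_tuple] = new_location
           return state

       def on_packet(self, four_tuple, packet, forward):
           if four_tuple in self.migrated_to:
               forward(self.migrated_to[four_tuple], packet)  # or drop
           # otherwise the connection is still hosted here

   class NewStack:
       def __init__(self):
           self.connections = {}

       def instantiate(self, state):
           # Restore the packaged state, then resume the connection.
           self.connections[state.four_tuple] = state

   if __name__ == "__main__":
       conn = TcpConnState(("10.0.0.1", 34567, "10.0.0.2", 80),
                           1000, 2000, b"")
       old, new = OldStack(), NewStack()
       old.connections[conn.four_tuple] = conn
       new.instantiate(old.pause_and_package(conn.four_tuple,
                                             "192.0.2.2"))
       old.on_packet(conn.four_tuple, b"late segment",
                     lambda loc, pkt: print("forward", len(pkt),
                                            "bytes to", loc))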
5. Handling Packets in Flight

   The old NVE may receive packets from the VM's ongoing
   communications. These packets should not be lost; they should be
   sent to the new NVE to be delivered to the VM. The steps involved
   in handling packets in flight are as follows:

   Preparation Step: It takes some time, possibly a few seconds, for a
   VM to move from its old NVE to a new NVE. During this period, a
   tunnel needs to be established so that the old NVE can forward
   packets to the new NVE. The old NVE gets the new NVE address from
   its NVA, assuming that the NVA is notified when a VM is moved from
   one NVE to another. Which entity manages the VM move and how the
   NVA gets notified of the move are out of the scope of this
   document. The old NVE can store the new NVE address for the VM with
   a timer. When the timer expires, the entry for the new NVE for the
   VM can be deleted.

   Tunnel Establishment - IPv6: In-flight packets are tunneled to the
   new NVE using an encapsulation protocol such as VXLAN in IPv6.

   Tunnel Establishment - IPv4: In-flight packets are tunneled to the
   new NVE using an encapsulation protocol such as VXLAN in IPv4.

   Tunneling Packets - IPv6: IPv6 packets received for the migrating
   VM are encapsulated in an IPv6 header at the old NVE. The new NVE
   decapsulates the packet and sends the IPv6 packet to the migrating
   VM.

   Tunneling Packets - IPv4: IPv4 packets received for the migrating
   VM are encapsulated in an IPv4 header at the old NVE. The new NVE
   decapsulates the packet and sends the IPv4 packet to the migrating
   VM.

   Stop Tunneling Packets: Tunneling stops when the timer for storing
   the new NVE address for the VM expires. The timer should be long
   enough for all other NVEs that need to communicate with the VM to
   get their NVE-VM cache entries updated.
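   The steps above can be summarized by the following sketch of the
   old NVE's behavior: it learns the new NVE address from the NVA,
   keeps it with a timer, and tunnels packets for the moved VM until
   the timer expires. The class name OldNve, the timer value, and the
   encapsulate_and_send callback are illustrative assumptions; the
   actual encapsulation (e.g., VXLAN) and timer management are
   deployment specific.

   # Minimal sketch of in-flight packet handling at the old NVE.

   import time

   class OldNve:
       def __init__(self, redirect_timeout_s=60.0):
           self.redirect_timeout_s = redirect_timeout_s
           self.redirects = {}      # vm_addr -> (new_nve_addr, expiry)

       def on_vm_moved(self, vm_addr, new_nve_addr):
           # Learned from the NVA once it is notified of the move.
           expiry = time.monotonic() + self.redirect_timeout_s
           self.redirects[vm_addr] = (new_nve_addr, expiry)

       def forward(self, vm_addr, packet, encapsulate_and_send):
           entry = self.redirects.get(vm_addr)
           if entry is None:
               return False         # no redirect state: drop or query NVA
           new_nve_addr, expiry = entry
           if time.monotonic() > expiry:
               # Timer expired: peers are expected to have updated
               # their NVE-VM cache entries by now.
               del self.redirects[vm_addr]
               return False
           encapsulate_and_send(new_nve_addr, packet)
           return True

   if __name__ == "__main__":
       nve = OldNve(redirect_timeout_s=30.0)
       nve.on_vm_moved("198.51.100.7", "192.0.2.2")
       sent = nve.forward("198.51.100.7", b"in-flight payload",
                          lambda dst, pkt: print("tunnel to", dst,
                                                 len(pkt), "bytes"))
       print("forwarded:", sent)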
6. Moving Local State of VM

   In addition to the VM mobility related signaling (VM Mobility
   Registration Request/Reply), the VM state needs to be transferred
   to the new NVE. The state includes its memory and file system if
   the VM cannot access the memory and the file system after moving to
   the new NVE.

   The mechanism for transferring the VM state and file system is out
   of the scope of this document. Refer to [RFC7666] for detailed
   information.

7. Handling of Hot, Warm and Cold VM Mobility

   Both Cold and Warm VM mobility (or migration) refer to the complete
   shutdown of the VM at the old NVE before restarting the VM at the
   new NVE. Therefore, all transport services to the VM need to be
   restarted.

   In this document, all VM mobility is initiated by the VM management
   system. In the case of Cold VM mobility, the exchange of states
   between the old NVE and the new NVE occurs after the VM attached to
   the old NVE is completely shut down. There is a time delay before
   the new VM is launched. The cold mobility option can be used for
   non-mission-critical applications and services that can tolerate
   interruptions of TCP connections.

   For Hot VM mobility, a VM moving to a new NVE does not change its
   IP address, and the service running on the VM is not interrupted.
   The VM needs to send a gratuitous Address Resolution message or an
   unsolicited Neighbor Advertisement message upstream after each
   move.

   In the case of Warm VM mobility, the functional components of the
   new NVE receive the running status of the VM at frequent intervals.
   Consequently, it takes less time to launch the VM under the new
   NVE, and other NVEs that communicate with the VM can be notified
   promptly about the VM migration. The duration of the interval
   determines the effectiveness (or benefit) of Warm VM mobility: the
   longer the interval, the less effective the Warm VM mobility
   becomes.

   In the case of Cold VM mobility, the VM on the old NVE is
   completely shut down and the VM is then launched on the new NVE. To
   minimize the chance of the previously communicating NVEs sending
   packets to the old NVE, the NVA should push the updated
   ARP/neighbor cache entry to all previously communicating NVEs when
   the VM is started on the new NVE. Alternatively, all NVEs can
   periodically pull the updated ARP/neighbor cache entry from the NVA
   to shorten the time span during which packets are sent to the old
   NVE.

   Upon starting at the new NVE, the VM should send an ARP or Neighbor
   Discovery message.

8. Other Options

   Hot, Warm and Cold mobility are planned activities that are managed
   by the VM management system.

   For unexpected events, such as overloads and failures, a VM might
   need to move to a new NVE without any service interruption; this is
   called Hot VM Failover in this document. In such a case, there are
   redundant primary and secondary VMs whose states are continuously
   synchronized by using methods that are outside the scope of this
   document. If the VM in the primary NVE fails, there is no need to
   actively move the VM to the secondary NVE, because the VM in the
   secondary NVE can immediately pick up and continue processing the
   applications/services.

   The Hot VM Failover is transparent to the peers that communicate
   with this VM. This can be achieved via distributed load balancing
   when both the active VM and the standby VM share the same TCP port
   and the same IP address. In the absence of a failure, the new VM
   can pick up providing service while the sender (peer) continues to
   receive ACKs from the old VM. If the situation (the loading
   condition of the primary responding VM) changes, the secondary
   responding VM may start providing service to the sender (peers).
   When a failure occurs, the sender (peer) may have to retry the
   request, so this structure is limited to requests that can be
   safely retried.

   If the load balancing functionality is not used, the Hot VM
   Failover can be made transparent to the sender (peers), without
   relying on request retry, by using the techniques that are
   described in Section 4. This does not depend on the primary VM or
   its associated NVE doing anything after the failure. This
   restriction is necessary because a failure that affects the primary
   VM may also cause its associated NVE to fail. For example, a
   physical server failure can cause both the VM and its NVE to fail.

   The Hot VM Failover option is the costliest mechanism, and hence
   this option is utilized only for mission-critical applications and
   services.
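   Because the load-balancing form of Hot VM Failover described above
   relies on the peer retrying a request that can be safely retried,
   the peer-side behavior can be sketched as below. The helper name
   send_with_failover and the transport callback are assumptions for
   illustration only; the point is simply that every attempt targets
   the same shared IP address and TCP port, so a retry after the
   primary fails reaches the secondary VM.

   # Minimal sketch of peer-side retry against the shared VM address.

   def send_with_failover(send_request, shared_vip, port, payload,
                          retries=2):
       """Send an idempotent request to the shared address, retrying
       on failure; a retry reaches whichever VM is currently alive."""
       last_error = None
       for _attempt in range(retries + 1):
           try:
               # The same (shared_vip, port) is used on every attempt.
               return send_request(shared_vip, port, payload)
           except ConnectionError as err:
               last_error = err    # primary may have failed; try again
       raise last_error

   if __name__ == "__main__":
       attempts = {"count": 0}

       def fake_send(addr, port, payload):
           # Simulate the primary failing on the first attempt and the
           # secondary answering the retry at the same shared address.
           attempts["count"] += 1
           if attempts["count"] == 1:
               raise ConnectionError("primary VM unreachable")
           return b"handled by the secondary VM"

       print(send_with_failover(fake_send, "203.0.113.10", 443,
                                b"GET /status"))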
9. VM Lifecycle Management

   VM lifecycle management is a complicated task that is beyond the
   scope of this document. Not only does it involve monitoring server
   utilization and balancing the distribution of workload, but it also
   needs to support seamless migration of a VM from one server to
   another.

10. Security Considerations

   Security threats for the data and control planes of overlay
   networks are discussed in [RFC8014]. ARP (IPv4) and ND (IPv6) are
   not secure, especially if they can be sent gratuitously across
   tenant boundaries in a multi-tenant environment.

   In overlay data center networks, ARP and ND messages can be used to
   mount address spoofing attacks from untrusted VMs and/or other
   untrusted sources. Examples of untrusted VMs are VMs instantiated
   with third-party applications that are not written by the tenant of
   the VMs. Those untrusted VMs can send false ARP (IPv4) and ND
   (IPv6) messages, causing significant overload in NVEs, NVO3
   Gateways, and NVAs. An attacker can intercept, modify, or even drop
   in-transit ARP/ND messages intended for other VNs, and can initiate
   DDoS attacks against other VMs attached to the same NVE. A simple
   black-hole attack can be mounted by sending a false ARP/ND message
   indicating that the victim's IP address has moved to the attacker's
   VM. The same technique can also be used to mount man-in-the-middle
   attacks, where additional effort is required to ensure that the
   intercepted traffic is eventually delivered to the impacted VMs.

   The locator-identifier mechanism given as an example (ILA) does not
   include secure binding; it does not discuss how to securely bind
   the new locator to the identifier.

   Because of those threats, the VM management system needs to apply
   stronger security mechanisms when adding a VM to an NVE. Some
   tenants may have requirements that prohibit their VMs from being
   attached to NVEs shared with other tenants. Some data centers
   deploy additional functionality in their NVO3 Gateways to mitigate
   the ARP/ND threats. This may include periodically sending each
   Gateway's ARP/ND cache contents to the NVA or another central
   control system. The objective is to identify ARP/ND cache entries
   that are not consistent with the locations of VMs and their IP
   addresses as indicated by the VM management system.

11. IANA Considerations

   This document makes no request to IANA.

12. Acknowledgments

   The authors are grateful to Bob Briscoe, David Black, Dave R.
   Worley, Qiang Zu, and Andrew Malis for helpful comments.

13. References

13.1. Normative References

   [RFC826]  Plummer, D., "An Ethernet Address Resolution Protocol: Or
             Converting Network Protocol Addresses to 48.bit Ethernet
             Address for Transmission on Ethernet Hardware", RFC 826,
             November 1982.

   [RFC903]  Finlayson, R., Mann, T., Mogul, J., and M. Theimer, "A
             Reverse Address Resolution Protocol", STD 38, RFC 903,
             June 1984.

   [RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
             "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861,
             DOI 10.17487/RFC4861, September 2007.

   [RFC7067] Dunbar, L., Eastlake, D., Perlman, R., and I. Gashinsky,
             "Directory Assistance Problem and High-Level Design
             Proposal", RFC 7067, November 2013.

   [RFC7364] Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L.,
             Kreeger, L., and M. Napierala, "Problem Statement:
             Overlays for Network Virtualization", RFC 7364,
             DOI 10.17487/RFC7364, October 2014.

   [RFC7365] Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
             Rekhter, "Framework for Data Center (DC) Network
             Virtualization Overlays", RFC 7365, October 2014.

   [RFC8014] Black, D., Hudson, J., Kreeger, L., Lasserre, M., and T.
             Narten, "An Architecture for Data-Center Network
             Virtualization over Layer 3 (NVO3)", RFC 8014,
             DOI 10.17487/RFC8014, December 2016.

   [RFC8171] Eastlake, D., Dunbar, L., Perlman, R., and Y. Li, "Edge
             Directory Assistance Mechanisms", RFC 8171, June 2017.
13.2. Informative References

   [Herbert-ILA]
             Herbert, T. and P. Lapukhov, "Identifier-locator
             addressing for IPv6", draft-herbert-intarea-ila-01 (work
             in progress), September 2018.

   [RFC7666] Asai, H., MacFaden, M., Schoenwaelder, J., Shima, K., and
             T. Tsou, "Management Information Base for Virtual
             Machines Controlled by a Hypervisor", RFC 7666,
             DOI 10.17487/RFC7666, October 2015.

Authors' Addresses

   Linda Dunbar
   Futurewei
   Email: ldunbar@futurewei.com

   Behcet Sarikaya
   Denpel Informatique
   Email: sarikaya@ieee.org

   Bhumip Khasnabish
   Info.: https://about.me/bhumip
   Email: vumip1@gmail.com

   Tom Herbert
   Intel
   Email: tom@herbertland.com

   Saumya Dikshit
   Aruba-HPE
   Bangalore, India
   Email: saumya.dikshit@hpe.com